Serverless Computing Applications and Architectures
by Chandan Gaur | 21 March 2019
What is Serverless Computing?
Serverless computing is a kind of cloud computing that is not actually server-less: there are still servers, but the consumers of the cloud do not have to plan or manage the servers, their properties, or the rest of the infrastructure. The cloud service provider does that, and the developers of the service only have to supply the functionality to run, either on a request or on an event. In other words, they hand over the actual business logic, and the cloud provider handles all the server management and runs that logic on demand. Understand it like this: suppose a trained, national-level player of a game has to play on a team of complete beginners. Besides developing his own game, he has to keep an eye on every other player, change their positions, and work to improve them. The old version of cloud computing was like that: the developer of the service was the player who had to take care of all the infrastructure as well as the business model. Serverless computing is the situation that same player has when his teammates are professionals, or when there is a trainer for those who are not yet good. He still has to give his best to the team, but beyond that he can concentrate on his own play and his own skills, without worrying about how the other teammates perform.
The same holds in serverless computing, where again the player is the developer and the teammates are the infrastructure. The difference between the first scenario and this one is the experienced teammates, or the experienced coach, which corresponds to the cloud provider: developers do not have to worry much about the infrastructure, because the provider handles that foundation. A developer should still be a little informed about how serverless works and what can be done with it, but beyond that, only the business model, or logic, needs attention.
Serverless computing is also known as FaaS (Function-as-a-Service), an architecture of cloud computing in which developers set up a function or two for an event. Consumers do not have to worry about resources, because in this approach resources come to life when the function starts, are available to the service while it runs, and are released when the function ends. Because only some functions run on an event, the running time of the service is short, so we can say the cloud provider offers ephemeral computing services. That means the consumer of the cloud is charged according to the execution time of the provided function, per millisecond, and the number of times the code executes. If clients are active and the functions are in use, the consumer is charged; if there are no clients, no events, and the functions are not in use, the consumer is not charged. Serverless computing also makes the service scale virtually without limit: there is no load management to do, the code runs on each request or event, and if there are parallel requests, a function instance runs for each one.
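To make the FaaS idea concrete, here is a minimal sketch of such a function. The handler signature loosely follows the common AWS Lambda convention, but the names `event` and `context` and the response shape are illustrative assumptions, not any provider's exact schema; all the developer writes is the business logic inside the function.

```python
# A minimal FaaS-style function: the platform supplies the event and
# invokes the handler on demand; no server-management code lives here.
def handler(event, context=None):
    """Pure business logic: greet the name carried in the event."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```

Everything else, such as provisioning a machine, routing the invocation, and tearing the resources down afterwards, is the provider's job.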
- Server scaling, planning, or any other kind of management is not the work of the cloud consumer.
- Just provide the logic in a single function, set the request or event that triggers it, and the work is done.
- Cloud providers also supply logging and monitoring systems for consumers.
- Because there is less work to do, deployment time is short.
- Server management is the work of the cloud provider.
- Functions are executed on either a request or an event.
- Monitoring and logging are the work of the provider.
- The platform is almost completely automatic for cloud consumers.
- The more load there is and the more requests arrive, the more function instances run.
- Scaling is automatic, in an elastic fashion.
- Consumers of the cloud pay only for the time they use, evaluated from the execution time of the function they provide and the number of times it runs.
- No charges, if there is no use.
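The pay-per-use model above is simple arithmetic: total charge is the number of invocations times the average execution time times a per-millisecond rate. The sketch below illustrates this; the default rate is a made-up illustrative number, not any provider's real price list.

```python
# Sketch of pay-per-use billing: charge = invocations x duration x rate.
# rate_per_ms is an illustrative placeholder value, not a real price.
def estimate_charge(invocations, avg_ms, rate_per_ms=0.0000002):
    """Total cost = number of executions x execution time x per-ms rate."""
    return invocations * avg_ms * rate_per_ms
```

Note that zero invocations means a charge of exactly zero, matching the "no use, no charges" model, something a rented server or container can never give you.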
Comparison with Traditional Architecture
With TPM (Transaction Processing Monitoring)
If we look back in time, we'll see that working with serverless has many similarities with how developer teams used to work on mainframes. There was a technology named TPM (Transaction Processing Monitoring), for example IBM's CICS (Customer Information Control System), and what it did was handle the non-functional elements: load management, security, monitoring, deployment, and so on. So TPM can be called very similar to serverless.
With IaaS (Infrastructure as a Service)
IaaS offers complete infrastructure on demand, through virtualization of the resources requested, and developers have to set up a virtual machine first and then deploy the app on it. Everything has to be managed by the developer team, or as we could say, the cloud consumer.
With CaaS (Container as a Service)
CaaS (Container as a Service) was used, and still is, where maximum flexibility is wanted, because it delivers a fully controlled environment where developers can deploy and run their app as they want, with very little or almost no dependency on the client side. However, developers have to manage traffic control themselves, and the containers have to be rented for a fixed period, such as six months or a year.
PaaS (Platform as a Service)
Then PaaS (Platform as a Service) came into focus, a step toward serverless and an improved version of CaaS, where the developer does not have to worry about the operating system and its workings. Instead, they develop the app or service and deploy it to the platform. But it also has to be bought for a fixed period. And where scaling in serverless is automatic and happens instantly without any preplanning, scaling in PaaS is possible but not automatic, and developers may have to prepare their app for it in advance.
Serverless Vs. Containers
Containers work on a single machine, to which they are assigned initially, although they can be relocated. In serverless, on the other hand, assigning a server is the provider's responsibility, and a server is attached only when an event triggers the function; it is not guaranteed that the same function will run on the same machine, even on two consecutive invocations.

Scaling of containers has to be done by the developers; it is not easy, but it can be done. Scaling in serverless is automatic, like electricity in our houses: use according to need. With containers, charges are fixed, because they have to keep running whether there is traffic or not, for a month, two, or a year. With serverless, again like electricity, we pay for exactly as much as we use, which is evaluated from the execution time of our functions.

With containers, the management responsibility lies with the cloud consumer, whether reducing failure risks or maintaining the connections between the container and other resources. With serverless, cloud consumers only have to submit their business logic, or part of it, as a function; the provider of the cloud manages everything else.
Serverless Computing Architecture Components
Based on the method of triggering, serverless computing can be classified into two approaches:

- Request-based serverless computing
- Event-driven serverless computing
Request-based Serverless Computing
Request-based serverless computing is the approach followed by services that act upon a request. When a client sends a request, the function performs an action according to the type of application, and sends a response back to the client (if applicable).
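A request-based function might look like the sketch below: it receives a request (for example, forwarded by an API gateway), performs the action, and returns a response. The request and response shapes here are simplified assumptions, not a specific provider's schema.

```python
import json

# Request-based sketch: invoked once per client request, returns a
# response to the caller. Shapes are illustrative, not a real schema.
def handle_request(request):
    """Parse the request body, perform the action, return a response."""
    body = json.loads(request.get("body", "{}"))
    total = sum(body.get("items", []))
    return {"statusCode": 200, "body": json.dumps({"total": total})}
```

Each incoming request gets its own invocation, which is exactly how the parallel, per-request scaling described earlier works.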
Event-driven Serverless Computing
Event-driven serverless computing is the other approach, followed by services that act upon an event. The event can be of any kind: a file uploaded, a file deleted, a new client signing up, and so on. There is a huge range of events, and each event can trigger one or more functions. Functions can act upon the event itself, its properties, or any data it carries.
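As a sketch, here is a function reacting to a "file uploaded" event. The event shape is a simplified assumption modeled loosely on object-storage notifications; real providers each define their own schemas, and there is no client waiting for a response.

```python
# Event-driven sketch: the platform delivers the event, the function
# acts on its properties. The "Records" layout is an assumed example.
def on_file_uploaded(event):
    """Return the keys of the uploaded objects described by the event."""
    records = event.get("Records", [])
    return [r["object"]["key"] for r in records]
```

Unlike the request-based case, nothing calls this function directly: the upload itself is what triggers it.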
Serverless Computing Service Best Practices
It is most beneficial if the functions are stateless, or if their state can be saved in some database: the function fetches the state, works on it, and stores it again (if needed).
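The fetch-work-store pattern can be sketched as follows. A plain dict stands in for the database here; nothing is kept inside the function between invocations, which is what makes it safe to run on a different machine every time.

```python
# Stateless-function sketch: all state lives in an external store
# (stubbed as a dict standing in for a database table).
def increment(db):
    """Fetch the counter, work on it, and store it again."""
    value = db.get("counter", 0) + 1  # fetch and work
    db["counter"] = value             # store it again
    return value
```

Because the function carries no state of its own, any warm or cold instance on any machine produces the same result.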
Serverless computing is a very evolved version of the cloud, but it is not the best choice for every kind of service. It is most beneficial for services with uneven or unpredictable traffic patterns. Services with very high computing or large processing requirements, or that need file system or operating system access, are where serverless is not the best option.
Relational databases are often not preferred, because of the limit on connections open simultaneously, which can lead to scalability and performance issues. Cold starts can be reduced if we manage resources accordingly.
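One common way to manage resources accordingly is to create expensive objects, such as a database connection, outside the handler, so that warm invocations reuse them instead of paying the setup cost on every call. In the sketch below, `connect()` is a placeholder standing in for a real client library's connection setup.

```python
# Cold-start mitigation sketch: resources created at module level are
# reused across warm invocations of the same function instance.
_connection = None  # survives between warm invocations

def connect():
    """Placeholder for an expensive connection setup (e.g., a DB client)."""
    return object()

def get_connection():
    global _connection
    if _connection is None:       # pay the cost only on a cold start
        _connection = connect()
    return _connection
```

This also helps with the connection-limit problem: one connection per warm instance, rather than one per invocation.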
It is also beneficial to do as much computation as possible on the client side, so that computation on the server side is reduced.
Yes, many of these issues are disadvantages of serverless, but it is still a far better approach than the others. Though the tools and standards for serverless are not yet mature, it is going to be the future of cloud computing. And while it is a great and very efficient approach (in cost, and for developers too), it is not the best approach for every kind of service, at least not for now. There should be a proper analysis before implementing a service or app this way.
Holistic Approach Towards Serverless Computing
With the growth of technology, companies are moving towards serverless computing to run their operations hassle-free and without worrying about data loss. If you are planning to go serverless, have a look at the following steps: