A Bit of History
Not so long ago, the development, deployment, and maintenance of applications were much more complicated and time-consuming than they are today. In the very beginning, maintenance required fixes not only to the application code, but also to the supporting physical machines. Keeping servers, hardware, and software up to date was another critical task.
In the 2000s, a new model called “Infrastructure as a Service (IaaS)” quickly became popular. IaaS created the possibility of renting remote servers and virtual machines from third-party providers. Those providers are fully responsible for managing the hardware, networking, and resource reservation.
After the advent of IaaS, the idea of making developers’ lives easier by eliminating all of their non-coding responsibilities drove the innovation of new approaches, models, and services.
What Are Containers?
Docker’s official website provides the following short and elegant definition: “A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.” In other words, by using containers, developers can be sure that their applications can be run on any cloud platform or on-premises server. In some ways, containers are similar to VMs; both isolate resources. However, virtual machines emulate physical devices while containers create abstractions of application layers.
What Is Serverless Computing?
In serverless computing, the whole application, or part of it, is split into multiple functions, each of which is triggered in response to an event such as an HTTP request, the arrival of a new message in a queue, or the creation or modification of an object in storage. These functions can also run at a particular time or on a schedule, which is helpful for cron jobs.
For this system to work, developers simply need to write function code, package it together with its dependencies into a zip file, and send that zip file to the serverless endpoint. The provider takes care of provisioning and scaling.
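To make this concrete, here is a minimal sketch of what such a function might look like, assuming AWS Lambda's Python handler convention. The event shape and the "name" field are invented for illustration; real event payloads depend on the trigger you configure.

```python
import json

def handler(event, context):
    """Entry point the serverless platform invokes once per event.

    `event` carries the trigger payload (here, an HTTP-style request
    with a JSON body); `context` carries runtime metadata, unused here.
    """
    # Hypothetical event shape: a JSON body with an optional "name" field.
    body = json.loads(event.get("body", "{}"))
    name = body.get("name", "world")

    # Return an HTTP-style response the platform passes back to the caller.
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

In the workflow described above, this file and its dependencies would be zipped and uploaded to the provider, which then runs the handler on demand.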
One of the key features of serverless is the “pay as you go” model, where companies pay only for the actual execution time of their functions. Today, AWS Lambda is the most popular of the serverless providers.
Do Containers and Serverless Have Anything in Common?
Yes! Both serverless and containers are popular today because they allow developers to focus on their code rather than on infrastructure. This increases development speed. Both containers and serverless are perfectly suitable for microservices and component-based architectures. When using them, deployment and scaling are usually faster and more cost-effective than they are with a classic monolithic architecture, because you are manipulating small parts of the application instead of the whole thing. Despite these commonalities, each technology has its own advantages, disadvantages, and use cases.
The Benefits of Containerization
The very first advantage of containers is their portability. Because a container already includes everything it needs to run, it only needs a machine with a container engine installed in order to operate. Containers are platform-agnostic: they can run on Linux, Windows, or macOS, and under orchestrators such as Mesos, Docker Swarm, or Kubernetes. They can even run inside another container.
Containers are also much more efficient than virtual machines when it comes to computing resource usage. Even though both containers and VMs are virtualized, VMs consume more resources because they emulate entire machines with their own OSs. Containers, on the other hand, can share the same OS, making them smaller and much faster to both spin up and shut down.
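The packaging that makes this portability possible is usually described in a Dockerfile. The sketch below assumes a small Python service; the file names (requirements.txt, app.py) are examples, not a prescription.

```dockerfile
# Hypothetical image for a small Python service; file names are examples.
FROM python:3.12-slim

WORKDIR /app

# Install the application's dependencies first so this layer is cached.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy in the application code itself.
COPY . .

# The resulting image runs unchanged on any machine with a container engine.
CMD ["python", "app.py"]
```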
Another benefit of containers is that they allow developers to take full control of applications. Although this means that system settings must be manually configured, it also means you have real flexibility. This is not achievable with serverless, where almost everything is managed by the cloud provider.
Use Cases for Containers
Containers are really helpful when you want to refactor a big monolithic application into smaller independent parts in order to move to a microservices architecture and achieve much better performance, testability, and scaling speed. An example of this is splitting a previously large application into a few separate services—one of which is responsible for user management, another for converting media files, etc. Each service can be easily scaled up to provide better performance if the load on its area of responsibility increases. This is not possible with a monolithic application, where scaling means adding a new instance of the whole system, which can be both costly and time-consuming.
In sum, containers are suitable for long-running applications as well as applications with specific system requirements which are difficult to set up without full control over the system.
Advantages of Serverless
Because of the “pay as you go” model mentioned above, the cost of hosting a serverless application can be dramatically lower than it would be using any other approach. You never pay for your functions’ idle time, so if there is no traffic, there is no charge on your monthly bill. Almost all serverless providers have free tiers, which include a fixed monthly amount of requests and execution time. Usually, the volume provided is more than enough for a small website or startup to operate for free.
With containers, the key step is splitting the application into parts or microservices. With serverless, it’s splitting an application or its parts into single functions, each of which is responsible for a particular piece of logic. This dramatically increases development and deployment speed, since it is much easier for an engineer to understand the logic of and develop a single function. It is also less risky to deploy a little piece of functionality than to deploy a whole application.
Another great advantage of serverless is burst auto-scaling. Serverless functions run in small, stateless, ephemeral containers which are under the control of the provider. The provider takes full responsibility for scaling in response to load spikes, and hundreds of instances can be spun up in a few seconds. And, remember that you will still pay only for the total execution time of all your functions.
When Is Serverless Good to Use?
The event-driven nature of serverless makes it very useful for applications (or their parts) which do not always need to be running.
Imagine that you are developing the media processing functionality for an existing application. The new module is not going to be used very often, but it still needs enough computing power to accomplish its tasks. Putting it inside the application could require switching to a more powerful instance, which is a risky move: if a few heavy tasks run simultaneously, they can cause delays for all other users. In this scenario, you pay more, and you still risk running into the bottlenecks just described.
If, instead, you choose serverless, that media processing functionality will be isolated from the rest of the app. You will not need to pay for it when it is not being used, and you can always rest assured that it will not affect the other parts of your app.
The Drawbacks of Containers
Even if nobody is using the application, at least one VM instance with containers hosted on it is always running. For that reason, containers tend to be more expensive than serverless.
Even though containers can quickly scale up on a shared machine, further scaling is slower because the machines themselves need to be scaled out. However, using containers with orchestration systems like Kubernetes or AWS ECS can make scaling smarter.
The Drawbacks of Serverless
For most developers, the scariest part of serverless is vendor lock-in. When you commit to serverless, you are actually stuck with a single cloud provider. The architecture of serverless apps and the APIs used in the functions differ from one provider to another, making it potentially expensive to change providers or switch to an on-premises solution. That said, some experts disagree, claiming that vendor lock-in isn’t actually a problem.
Observability, monitoring, and debugging are not easy to do with the serverless approach. Because the application can be distributed into many pieces, and each piece can have its own bugs and errors, it becomes important to take control and see the whole picture. Fortunately, it’s possible to do that these days with Thundra, our tool which offers full observability for serverless applications by aggregating distributed and local traces with performance metrics and logs.
Can Containers and Serverless Operate Together?
Surprisingly, yes! It can be very effective to have the main application functionality running as a containerized microservice while using serverless for some background operations or rarely used (but CPU-intensive) features.
Another interesting combination is the one provided by AWS Fargate. This service, which combines the advantages of both serverless and containers, allows you to have better control over your app without having to worry about scaling, since AWS takes care of it for you. If you’re interested in learning more about this offering, check out our article about AWS Fargate.
Containers and serverless are often considered to be competing technologies. Upon closer inspection, it becomes clear that they are just different technologies—ones that can actually compensate for each other’s drawbacks when used inside the same project. It is important to remember that “older” does not mean “dead,” and “newer” does not mean “better.” The effectiveness of a solution depends on the particular use case, project requirements, team experience, and team preferences.