Summary
AWS Lambda is a compute service that lets you run code without managing any infrastructure, and it natively supports Java, Go, NodeJS, .NET, Python, and Ruby runtimes. In this article, we will compare the performance of the same hello world Lambda function written for the Java, Go, NodeJS, .NET, and Python runtimes.
Structure of the Template
We use simple hello world functions, deployed with AWS SAM templates, to measure the invocation times of the Lambdas. For the first comparison, we use the latest runtime versions that AWS SAM provides. You can check the complete deployment package in this GitHub repository.
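To give a sense of the structure, here is a minimal sketch of what such a SAM template might look like. The resource names, code paths, handler names, and memory/timeout values are illustrative assumptions, only two of the runtimes are shown, and the actual templates in the repository may differ.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Hello world functions for comparing Lambda runtimes (illustrative sketch)

Resources:
  # One hello world function per runtime; CodeUri paths and handler names are assumed.
  PythonHelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: hello-world-python/
      Handler: app.lambda_handler
      Runtime: python3.9
      MemorySize: 128
      Timeout: 10

  NodeHelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: hello-world-node/
      Handler: app.lambdaHandler
      Runtime: nodejs14.x
      MemorySize: 128
      Timeout: 10
```

Keeping the memory size and timeout identical across functions matters here, so that the comparison reflects the runtimes themselves rather than configuration differences.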
Comparing Lambdas by Runtime
To compare these functions dynamically, Thundra APM’s newly introduced custom dashboard widgets are a good fit. The instrumented functions already exist in the GitHub repository, so you can simply clone the repo and replace the Thundra API key in the SAM templates with your own (a sketch of how the key could be wired into the template follows at the end of this section). For more information, check the Thundra APM docs or sign up for Thundra.
Let’s look at the differences between runtimes by building widgets in a custom dashboard in Thundra APM. In this scenario, we deployed only the stack that contains the latest runtime versions, so filtering the functions by runtime and average duration is enough to build the “Max durations” widget. The average invocation times of all the functions are shown below.
Package size has a proportional effect on cold start duration within the same runtime, and this generally holds across different runtimes as well, with Java and Go as the exceptions: Go has the third-fastest cold start duration even though its package is relatively large at 4.7 MB.
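As a rough illustration of the key replacement step mentioned above, a SAM template could take the Thundra API key as a template parameter and inject it into every function as an environment variable. The parameter name, the environment variable name, and the NoEcho setting below are assumptions for illustration only; use whatever names the repository’s templates and the Thundra APM docs actually define.

```yaml
# Illustrative excerpt only: one possible way to thread a Thundra API key
# through a SAM template. Names are assumptions; check the repo's templates.
Parameters:
  ThundraApiKey:
    Type: String
    NoEcho: true              # keep the key out of console/stack output

Globals:
  Function:
    Environment:
      Variables:
        thundra_apiKey: !Ref ThundraApiKey   # variable name the Thundra agent reads (assumed)
```

With something along these lines, deploying with your own key becomes a single parameter override, e.g. sam deploy --parameter-overrides ThundraApiKey=<your-key>.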
Conclusion
One of the biggest differences between compiled and interpreted languages is that interpreted languages spend less time analyzing source code up front, but overall execution is slower. In this context, it is easy to see why languages like NodeJS and Python have dazzling cold start performance. In my experimentation, I observed that Java and .NET performed worse than the other runtimes when comparing invocation durations. I think this is because a hello world Lambda is not a large enough workload to bridge the gap between interpreter and compiler. If we ran a heavier, more complex workload instead of returning a 'Hello World' message, the compiled languages would perform better after warming up. Here are some other graphs created with Thundra APM dashboard queries:
Bonus Section: Comparison Between Versions
I also used different versions of the runtimes in my experimentation. I compared three Java versions (java8, java8.al2, java11), three NodeJS versions (Node10, Node12, Node14), four Python versions (Python3.6, Python3.7, Python3.8, Python3.9), and two DotNet versions (.netcore-3.1, .net-6).
When comparing versions of the same runtime, we do not expect to see big differences in the Lambda functions' behavior, but it does not hurt to check them anyway.
DotNet
According to Microsoft’s announcements, DotNet-6 is 40% faster than DotNet-5. In this post, we compared DotNet-3.1 and DotNet-6, and we can clearly see that both cold start and warm invocation times are much better for DotNet-6 than for DotNet-3.1.
Memory usage for DotNet-6 is also considerably lower than for DotNet-3.1.
Python
There is no invocation duration difference between Python versions, but there are memory usage differences. Python 3.8 has better memory usage results than the other three Python versions:
Java
You can see the invocation duration difference between Java versions below.
Node
Invocation durations only differ on cold starts.