
POSTED Nov 2018 • IN Serverless

Thundra Lambda Layers and Custom Runtime Support


Written by Serkan Özal

Founder and CTO of Thundra


Today, we are happy to announce support for AWS “Lambda Layers” along with the new “Runtime API”, both released by AWS earlier today. Now you can monitor your Lambda functions with Thundra without any code change, dependency addition, or redeployment: just add the Thundra Lambda layer to your AWS Lambda function.

Let me first quickly explain AWS Lambda’s cool new features, “Lambda Layers” and “Custom Runtime”.

Let’s start with the new “Custom Runtime” feature in AWS Lambda, which allows you to use any programming language to run your functions. With a custom runtime, you provide a deployment package (typically a .zip file) that contains a `bootstrap` file as the entry point; it can be any executable, such as a script or a binary. The AWS Lambda platform is not interested in which programming language or runtime you are using; it just starts your bootstrap. This gives you more flexibility than the managed runtimes, but you need to implement the AWS Lambda “Runtime API” yourself. The Runtime API is a simple HTTP-based specification of the Lambda programming model that can be implemented in any programming language:

  • Initialize function once

  • Then for every invocation

    • Poll for the next request/event from the AWS Lambda Runtime API over HTTP. When there is no pending invocation (typically because you have already processed the previous one and are waiting for a new one), your container is frozen at this point

    • Invoke the function itself (specifically, the classic handler implementation) with the polled request/event data and collect its response

    • Send the response back to the AWS Lambda Runtime API over HTTP
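The loop above can be sketched as a minimal Python bootstrap. This is a hypothetical sketch, not Thundra’s implementation: the endpoint paths come from the Runtime API specification, while `handle` is a placeholder for your actual handler.

```python
import json
import os
import urllib.request

# The Runtime API host:port is provided by Lambda in this environment variable.
# The localhost fallback is only here so the sketch is self-contained.
API = os.environ.get("AWS_LAMBDA_RUNTIME_API", "127.0.0.1:9001")


def next_invocation_url(api):
    """Endpoint that is long-polled for the next event."""
    return f"http://{api}/2018-06-01/runtime/invocation/next"


def response_url(api, request_id):
    """Endpoint that receives the handler's result for a given invocation."""
    return f"http://{api}/2018-06-01/runtime/invocation/{request_id}/response"


def handle(event):
    # Placeholder for your actual handler logic (hypothetical).
    return {"echo": event}


def event_loop():
    while True:  # one iteration per invocation
        # 1. Poll the next event; the container is frozen here while idle.
        with urllib.request.urlopen(next_invocation_url(API)) as resp:
            request_id = resp.headers["Lambda-Runtime-Aws-Request-Id"]
            event = json.loads(resp.read())
        # 2. Invoke the handler with the polled event.
        result = handle(event)
        # 3. Send the response back over HTTP.
        urllib.request.urlopen(urllib.request.Request(
            response_url(API, request_id),
            data=json.dumps(result).encode(),
            headers={"Content-Type": "application/json"},
            method="POST",
        ))

# A real `bootstrap` executable would simply start event_loop() here.
```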

For managed runtimes, all of the above is already handled for you by the AWS Lambda runtime itself, but you don’t have control over the environment. Now there are more things that you CAN do, but there are also more things that you MUST do :)

The other new AWS Lambda feature is “Lambda Layers”, which allows developers to centrally manage code and data shared across multiple functions. With “Lambda Layers”, deployment packages (typically .zip files) can be published as layers to be used by AWS Lambda functions.

  • There is no defined format for the package content; it can contain anything (binaries, scripts, config files, etc.)

  • The content of a layer package is extracted into the `/opt` folder, so it is available there in the function’s execution environment.

  • Versioning of published layers is handled by AWS Lambda, which assigns monotonically increasing version numbers with every release.

  • A published layer version can be deleted, but it cannot be added to functions after that. Functions that already use the layer keep it and continue to work.

  • Resource-based policies can be assigned to layers to grant them account-based or public access.
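Since the version is the last segment of a layer ARN, pinning or switching versions is just a matter of rewriting that segment. A small illustration (the ARN below is a made-up example in the documented `arn:aws:lambda:<region>:<account-id>:layer:<name>:<version>` format, not a real Thundra ARN):

```python
def layer_version(arn: str) -> int:
    """Return the version number, i.e. the last ':'-separated segment of a layer ARN."""
    return int(arn.rsplit(":", 1)[1])


def with_layer_version(arn: str, version: int) -> str:
    """Swap the version segment of a layer ARN for another version."""
    return f"{arn.rsplit(':', 1)[0]}:{version}"


# Hypothetical example ARN:
arn = "arn:aws:lambda:us-east-1:123456789012:layer:thundra-layer:3"
```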

Before talking about how Thundra supports and integrates with the “Custom Runtime” and “Lambda Layers” features, let me explain what Thundra already offers. Currently, the Thundra setup requires a few lines of code change to wrap your actual function and the addition of a dependency to your artifact. After that, most tracing features can be configured through environment variables without any code change or redeployment. This is one of the unique features of Thundra: you can instrument your function without polluting your code and quickly configure trace details, especially in development environments.

“Lambda Layers” and “Custom Runtime” are different features and can be used separately, but they are especially powerful when used together as they are complementary to each other. For an instant Thundra integration experience, we support both of them together. Let me explain them one by one.

With the Thundra Lambda layer, Thundra setup is a trivial 15-second task. Let me walk through an existing Java based Lambda function to show how easily it integrates with Thundra. You just need to:

1. Add Thundra Layer to your function

    • Go to the layers configuration


    • Select the Thundra layer by name and choose the version you want to use, or specify the Thundra layer by its ARN. (Note that the last segment of the ARN is the layer version, so you can change it according to the version you want.) Then add the layer.


2. Change the runtime to the custom runtime


3. Add your Thundra API key as an environment variable, then save the configuration.
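If you prefer scripting over the console, the same three steps can be expressed with boto3’s `update_function_configuration` call. A sketch under stated assumptions: the layer ARN and function name are placeholders, `provided` is the identifier of the custom runtime, and `thundra_apiKey` is the environment-variable name Thundra reads (check Thundra’s docs for your agent version).

```python
def thundra_config(function_name, layer_arn, api_key):
    """Build the kwargs for Lambda's update_function_configuration call,
    covering the three console steps: layer, runtime, and API key."""
    return {
        "FunctionName": function_name,
        "Runtime": "provided",  # step 2: switch to the custom runtime
        "Layers": [layer_arn],  # step 1: attach the Thundra layer
        "Environment": {"Variables": {"thundra_apiKey": api_key}},  # step 3
    }


# Usage (requires AWS credentials; placeholders are hypothetical):
# import boto3
# boto3.client("lambda").update_function_configuration(
#     **thundra_config("my-function", "<thundra-layer-arn>", "<api-key>"))
```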


That’s all. Your function is now integrated with Thundra without any code change, dependency addition, or redeployment. Thundra is deployed to your Lambda environment by AWS; you don’t need to take care of anything, and Thundra handles the rest for you. As you can see, you don’t need to:

  • add Thundra dependency

  • wrap your function

  • redeploy your artifact bundle


Additionally, with “Custom Runtime”, it is now possible to tune Java based Lambda applications to start faster, reducing cold start overhead. Java is one of the languages that suffers most from cold starts on AWS Lambda, and until now, with the managed Java runtime, it was not feasible to optimize the Java process to start faster. AWS Lambda already makes some startup optimizations for the Java runtime by enabling class data sharing with `-Xshare:on`, so classes are loaded from a shared memory-mapped file in a pre-parsed format, but there are still other options for tuning Java application startup time. With Thundra’s “Custom Runtime” support, you get these optimizations out of the box by setting the `thundra_agent_lambda_jvm_optimizeForFastStartup` environment variable to `true`; since we use AWS Lambda’s JVM, which is already pre-installed in the custom runtime environment, you will get lower cold start overhead. If cold start overhead is a problem for your Java based Lambda function, give it a try and see the effect.

Besides these VM argument based optimizations, there are other, more effective approaches that are now applicable with “Custom Runtime” support:

  • Application Class-Data Sharing (AppCDS) is an extension of the existing Class-Data Sharing (CDS) feature that allows application classes to be placed in the shared archive. You can generate shared archives from your application classes and put them into your deployment bundle to be used by the JVM.

  • GraalVM’s Substrate VM compiles Java applications into native executable binaries, so they start almost instantly, without any classloading or bytecode interpretation overhead at startup.
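For reference, the AppCDS workflow looks roughly like the command sketch below (JDK 10+; `com.example.Main` and the file names are hypothetical placeholders, and exact flag names vary slightly between JDK versions):

```
# 1. Record the classes your application loads
java -XX:+UseAppCDS -XX:DumpLoadedClassList=app.lst -cp app.jar com.example.Main

# 2. Dump them into a shared archive
java -XX:+UseAppCDS -Xshare:dump -XX:SharedClassListFile=app.lst \
     -XX:SharedArchiveFile=app.jsa -cp app.jar

# 3. Start with the archive mapped in
java -XX:+UseAppCDS -Xshare:on -XX:SharedArchiveFile=app.jsa -cp app.jar com.example.Main
```

With a custom runtime, the generated `.jsa` archive can be shipped in your deployment bundle and referenced from your `bootstrap`.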

Thundra supports “Custom Runtime” and “Lambda Layers” for Java, Node.js, and Python runtimes. 

You can contact us via support@thundra.io, join our community Slack, or send a message through our website chat. And if you haven’t started your Thundra journey yet, you can get your free account from this link.