Thundra Lambda Layers and Custom Runtime Support

Nov 29, 2018


Today, we are happy to announce support for AWS “Lambda Layers” along with the new “Runtime API”, both released by AWS earlier today. You can now monitor your Lambda functions with Thundra without any code change, dependency addition, or redeployment, simply by adding the Thundra Lambda Layer to your AWS Lambda function.

Let me first quickly explain AWS Lambda’s cool new features, “Lambda Layers” and “Custom Runtime”.

Let’s start with the new “Custom Runtime” feature in AWS Lambda, which allows you to use any programming language to run your functions. With a custom runtime, you provide a deployment package (typically a .zip file) that contains a `bootstrap` file as the entrypoint; this can be any executable, such as a script or a binary. The AWS Lambda platform is not interested in which programming language or runtime you are using — it just starts your bootstrap. This gives you more flexibility than managed runtimes, but you need to implement the AWS Lambda “Runtime API” yourself. It defines a simple HTTP-based specification of the Lambda programming model, which can be implemented in any programming language:

  • Initialize the function once

  • Then, for every invocation:

    • Poll the request/event data from the AWS Lambda Runtime API over HTTP. When there is no invocation (typically because you have already processed the previous one and are waiting for the next), your container is frozen at this point

    • Invoke the function itself (specifically, the classic handler implementation) with the polled request/event data and get the response from it

    • Send the response back to the AWS Lambda Runtime API over HTTP

For managed runtimes, all of the above is already handled by the AWS Lambda runtime for you, but you don’t have control over the environment. Now there are more things that you CAN do — but also more things that you MUST do :)
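The polling loop above can be sketched roughly as follows. This is a minimal, simplified Python sketch of what a `bootstrap` might do — `my_handler` is a placeholder for your actual handler, and error reporting is omitted; the `2018-06-01` invocation paths are the ones documented for the Runtime API:

```python
import json
import os
import urllib.request

# The Lambda platform exposes the Runtime API host:port to the bootstrap
# through this environment variable.
RUNTIME_API = os.environ.get("AWS_LAMBDA_RUNTIME_API", "localhost:9001")
BASE = f"http://{RUNTIME_API}/2018-06-01/runtime"

def my_handler(event):
    # Placeholder for the classic handler implementation.
    return {"received": event}

def process_one_invocation():
    # 1. Poll the next event; this HTTP call blocks (and the container may
    #    be frozen) until an invocation arrives.
    with urllib.request.urlopen(f"{BASE}/invocation/next") as resp:
        request_id = resp.headers["Lambda-Runtime-Aws-Request-Id"]
        event = json.loads(resp.read())

    # 2. Invoke the handler with the polled request/event data.
    result = my_handler(event)

    # 3. Send the response back to the Runtime API over HTTP.
    data = json.dumps(result).encode()
    req = urllib.request.Request(
        f"{BASE}/invocation/{request_id}/response", data=data)
    urllib.request.urlopen(req).close()

def run_loop():
    # Initialize once (module setup above), then loop over invocations.
    while True:
        process_one_invocation()
```

A real bootstrap would also POST initialization and invocation errors to the Runtime API’s error endpoints, but the shape of the loop stays the same.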

The other new AWS Lambda feature is “Lambda Layers”, which allows developers to centrally manage common code and data across multiple functions. With Lambda Layers, deployment packages (typically .zip files) can be published as layers to be used by AWS Lambda functions.

  • There is no defined format for the package content; it can contain anything (binaries, scripts, config files, etc.)

  • The contents of layer packages are extracted into the `/opt` folder, so they are available there in the function’s execution environment

  • Versioning of published layers is handled by AWS Lambda, which assigns a monotonically increasing version number with every release

  • A published layer can be deleted, but after that it cannot be added to functions anymore. Functions that already have the layer can continue to use it

  • Resource-based policies can be assigned to layers to give them account-based or public access
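To make the `/opt` extraction concrete, here is a small sketch of how a function could pick up a shared file that a layer ships. The `thundra_config.json` filename is purely illustrative (not an actual Thundra file), and the fallback lets the code run when no layer is attached:

```python
import json
import os

# Hypothetical example: a layer whose package contains "thundra_config.json"
# at its root would have it extracted to /opt/thundra_config.json.
LAYER_CONFIG_PATH = "/opt/thundra_config.json"

def load_layer_config(path=LAYER_CONFIG_PATH):
    # Layer contents are extracted into /opt, so shared files are
    # readable there from the function's execution environment.
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    # Fall back to defaults when the layer (or the file) is not present.
    return {"trace_enabled": True}
```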

Before talking about how Thundra supports and integrates with the “Custom Runtime” and “Lambda Layers” features, let me explain what Thundra already offers. Currently, Thundra setup requires a few lines of code change to wrap your actual function, plus a dependency addition to your artifact. After that, most of the tracing features can be configured through environment variables without any code change or redeployment. This is one of the unique features of Thundra: you can instrument your function without polluting your code, and quickly configure trace details, especially in a development environment.

“Lambda Layers” and “Custom Runtime” are different features and can be used separately, but they are especially powerful when used together, as they complement each other. For an instant Thundra integration experience, we support both of them together. Let me explain them one by one.

With the Thundra Lambda Layer, Thundra setup is a trivial 15-second task. Let me show you, with an example on an existing Java-based Lambda function, how easy it is to integrate with Thundra. You just need to:

  1. Add the Thundra Layer to your function

    • Go to layers configuration

    • select Thundra Layer by name

    • and select the version of the layer that you want to use

    • or you can specify the Thundra layer by its ARN (notice that the last part of the ARN is the version of the layer, so you can change it to the version you want to use)

    • then add the layer

  2. Change the runtime to “Custom Runtime”

  3. Add your Thundra API key as an environment variable

  4. And then save the configuration

That’s all. Your function is now integrated with Thundra without any code change, dependency addition, or redeployment. Thundra is deployed to your Lambda environment by AWS, so you don’t need to take care of anything; Thundra handles the rest for you. As you can see, you don’t need to

  • add the Thundra dependency

  • wrap your function

  • redeploy your artifact bundle

anymore.
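If you prefer scripting the setup over clicking through the console, the same steps can be expressed as a single AWS SDK call. The sketch below uses boto3’s `update_function_configuration`; the layer ARN and the API-key variable name are illustrative placeholders (check Thundra’s documentation for the real values), while `provided` is the runtime identifier AWS uses for custom runtimes:

```python
# Sketch: applying the console steps programmatically.
# The ARN and the environment variable name below are placeholders.
THUNDRA_LAYER_ARN = "arn:aws:lambda:us-east-1:111111111111:layer:thundra:1"

def thundra_update_kwargs(function_name, api_key,
                          layer_arn=THUNDRA_LAYER_ARN):
    # Build the arguments for lambda_client.update_function_configuration().
    return {
        "FunctionName": function_name,
        "Layers": [layer_arn],      # step 1: add the Thundra layer
        "Runtime": "provided",      # step 2: switch to the custom runtime
        "Environment": {            # step 3: set the Thundra API key
            "Variables": {"thundra_apiKey": api_key}
        },
    }

# Step 4 (saving the configuration) would then be:
#   import boto3
#   boto3.client("lambda").update_function_configuration(
#       **thundra_update_kwargs("my-function", "MY_API_KEY"))
```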

Additionally, with “Custom Runtime”, it is now possible to tune Java-based Lambda applications to start faster and reduce cold start overhead. Java is one of the languages that suffers most from cold start overhead on AWS Lambda. Until now, with the managed Java runtime, it was not feasible to optimize the Java process to start faster. Even though AWS Lambda already makes some startup optimizations for the Java runtime — enabling class data sharing with `-Xshare:on`, so classes are loaded from a shared memory-mapped file in a pre-parsed format — there are still other options for tuning Java application startup time. With Thundra’s “Custom Runtime” support, you get these optimizations out of the box by setting the `thundra_agent_lambda_jvm_optimizeForFastStartup` environment variable to `true`. Since we use AWS Lambda’s JVM, which is already pre-installed in the custom runtime environment, you get lower cold start overhead. If you are struggling with cold start overhead in your Java-based Lambda function, give it a try and see the effect.
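As a rough illustration: a custom-runtime `bootstrap` for Java launches the JVM itself, so it can pass startup-oriented flags on the command line. The flags below are commonly cited for reducing JVM startup time on short-lived processes — not necessarily the exact set Thundra applies:

```python
# Commonly cited JVM flags for faster startup on short-lived processes.
# Illustrative only; the exact flags Thundra's runtime uses may differ.
FAST_STARTUP_JVM_FLAGS = [
    "-Xshare:on",               # load classes from the shared CDS archive
    "-XX:TieredStopAtLevel=1",  # stop at C1; skip costly C2 compilation
    "-XX:+UseSerialGC",         # simplest collector, lowest startup cost
]

def java_command(jar_path, flags=FAST_STARTUP_JVM_FLAGS):
    # Build the command line a bootstrap script might exec.
    return ["java", *flags, "-jar", jar_path]
```

Skipping the C2 compiler trades peak throughput for startup speed, which is usually the right trade for short-lived Lambda invocations.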

Besides these VM-argument-based optimizations, there are other, more effective approaches that are now applicable with “Custom Runtime” support:

  • Application Class-Data Sharing, an extension of the existing Class-Data Sharing (“CDS”) feature that allows application classes to be placed in the shared archive. You can generate a shared archive from your application classes and put it into your deployment bundle to be used by the JVM.

  • GraalVM’s Substrate VM, which compiles Java applications into native executable binaries so they can start instantly, without any classloading or bytecode interpretation overhead at startup.

For now, we support “Custom Runtime” and “Lambda Layers” for our Java and Node.js agents; the others (Python and Go) will come soon.