
POSTED Dec 2020 IN Serverless

Introducing Thundra Integration for Container Image Support in AWS Lambda

Written by Alican Guclukol

Senior Software Engineer @Thundra


The biggest criticism of the serverless movement has been the difficulty of migrating existing applications to serverless. Migration typically requires re-architecting the microservice application, which discourages people from taking the leap. Well, until now. AWS, with its customer-obsessed approach to problems, has removed that barrier with a very important feature announcement today. From now on, developers can deploy container images directly as Lambda functions and take advantage of all the benefits of going serverless, such as scale to zero, pay-per-use, and more. We at Thundra are very proud to be a launch partner for this revolutionary feature with our base image, which includes our instrumentation libraries by default. In this blog post, we’ll go over what container image support for AWS Lambda brings and how Thundra eases the job of monitoring such apps for developers from day zero.

What’s in this feature?

With AWS Lambda’s new feature, it is now possible to package and deploy functions as container images. You can create a container deployment image by starting either from an AWS Lambda provided base image or from one of your preferred community or private enterprise images, upload it to ECR, and create a function from it.

AWS Lambda supports all images based on the following image manifest formats:

  • Docker Image Manifest V2 Schema 2 (used with Docker version 1.10 and newer)
  • Open Container Initiative (OCI) Specification (v1.0 and up)

You can deploy any container image as long as it conforms to the AWS Lambda Runtime API to receive invocation requests and send responses after processing them. The ENTRYPOINT configuration should point to the location in the container image’s filesystem that implements the AWS Lambda Runtime Interface Client (RIC). If you prefer to use an arbitrary base image, you can leverage the open-sourced AWS Lambda Runtime Interface Client to make it compatible with Lambda’s Runtime API.
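To make that contract concrete, here is a rough sketch (not Thundra’s or AWS’s actual implementation) of the loop a Runtime Interface Client runs against the Runtime API, using only curl. The handler here is a placeholder that echoes the event back unchanged:

```shell
#!/bin/sh
# Minimal sketch of a Runtime Interface Client loop. AWS_LAMBDA_RUNTIME_API
# (host:port) is injected by the Lambda service; outside Lambda this script
# only prints the endpoint it would poll.
RUNTIME_API_BASE="http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation"

handle_event() {
  # Placeholder handler: echo the incoming JSON event back unchanged.
  echo "$1"
}

if [ -n "${AWS_LAMBDA_RUNTIME_API}" ]; then
  while true; do
    # Long-poll for the next invocation; the request ID arrives in a header.
    HEADERS_FILE=$(mktemp)
    EVENT=$(curl -sS -D "$HEADERS_FILE" "${RUNTIME_API_BASE}/next")
    REQUEST_ID=$(grep -i '^Lambda-Runtime-Aws-Request-Id:' "$HEADERS_FILE" | tr -d '[:space:]' | cut -d: -f2)
    # Post the handler's result back to the Runtime API for this request.
    handle_event "$EVENT" | curl -sS -X POST "${RUNTIME_API_BASE}/${REQUEST_ID}/response" -d @- > /dev/null
  done
else
  echo "Not inside Lambda; would poll ${RUNTIME_API_BASE}/next"
fi
```

A real RIC also reports initialization and invocation failures to dedicated error endpoints; the language-specific clients that AWS open-sourced handle all of this for you.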

AWS allows images up to 10GB, so you can build and deploy larger workloads that rely on sizable dependencies without any problem.

AWS also open-sourced the AWS Lambda Runtime Interface Emulator (RIE), a lightweight web server that converts HTTP requests to JSON events and maintains functional parity with the Lambda Runtime API in the cloud. With the help of the RIE, you can test your Lambda applications locally with container tools like the Docker CLI before publishing them. Genuinely, I found it very useful; it saved me a lot of time.

Although you can create your own custom base image, we recommend starting with the AWS provided base images when you’re testing the waters. These images include the Amazon Linux system release, the runtime standard library and interpreter, AWS’s first-party software implementing the Lambda programming model, and additional libraries such as the AWS SDKs. Currently, fourteen base images are provided for the following runtimes:

  • dotnetcore2.1, dotnetcore3.1, go1.x, java8, java8.al2, java11, nodejs12.x, nodejs10.x, python3.8, python3.7, python3.6, python2.7, ruby2.5 and ruby2.7

With this great flexibility introduced today, you can build serverless applications by using familiar container tooling while continuing to benefit from AWS Lambda’s automatic scaling, high availability, fast function start-up, and native integrations with AWS services. Still, there are a few missing capabilities to keep in mind. You need to handle runtime updates of functions packaged as container images yourself, since they are not patched automatically with AWS runtime updates. You also need to revise your existing layer integrations, including your Thundra layer integration, since AWS Lambda layers are not supported for functions packaged as container images.

In the next section, I will cover the options for integrating Thundra with your functions packaged as container images and walk you through the steps to create a sample Docker image for a Java Lambda function.  

How to Integrate Thundra with your AWS Lambda Functions Packaged as a Container Image?

You can integrate Thundra with your AWS Lambda function either by adding it as a dependency directly to your project or by using Thundra’s Lambda layer. If you prefer the former option, you don’t need any additional configuration for the container case. For the latter option, you can use Thundra’s base image as an alternative to the layer, since layers are not supported for functions packaged as container images. Let’s walk through this process step by step.

Step 1: Prepare the container image

You can either use the Thundra-provided base image, which is built on top of the AWS Lambda provided base image, or build it yourself with some modifications by following the steps described in OPTION B.

OPTION A:

Pull the Thundra base image via Docker CLI:
docker pull thundraio/thundra-lambda-container-base-java8:LATEST

This base image simply adds the Thundra layer to the classpath and sets its handler as the ENTRYPOINT of the image to be able to wrap your function. When you create your Dockerfile from this base image, you only need to copy your function code and set the handler via CMD.

Create a Dockerfile:
FROM thundraio/thundra-lambda-container-base-java8:LATEST
# your function code path
ARG FUNCTION_CODE_PATH="target/sample-app.jar"
# task directory
ARG TASK_DIR="/var/task"
# create task directory
RUN mkdir -p ${TASK_DIR}
# copy your function code to the task directory
COPY ${FUNCTION_CODE_PATH} ${TASK_DIR}
# Set the CMD to your handler (this could also be done as a parameter override outside of the Dockerfile, or overridden via the THUNDRA_AGENT_LAMBDA_HANDLER Lambda environment variable)
CMD ["io.thundra.container.sample.Handler"]

You can also update the Thundra version that comes with the base image by adding the following line to your Dockerfile. You can replace the version with any Thundra version (2.6.16 or later):

RUN /opt/thundra/thundra-version-updater.sh <version>

Build your image with the Docker CLI (note the trailing dot, which sets the build context):

docker build -t <image-name> -f <Dockerfile> .

OPTION B:

We will use the base image provided by AWS for the java8 runtime as a parent image and create an image that uses Thundra’s bootstrap file as ENTRYPOINT to enable Thundra integration.

You can specify any Thundra version(2.6.16 or later) by setting the THUNDRA_VERSION argument.

Create a Dockerfile:
FROM amazon/aws-lambda-java:8
# your function code path
ARG FUNCTION_CODE_PATH="target/sample-app.jar"
# task directory
ARG TASK_DIR="/var/task"
# define Thundra dependency args
ARG THUNDRA_VERSION=<thundra-version>
ARG THUNDRA_LAMBDA_LAYER_JAR_URL=https://repo.thundra.io/service/local/repositories/thundra-releases/content/io/thundra/agent/thundra-agent-lambda-layer/${THUNDRA_VERSION}/thundra-agent-lambda-layer-${THUNDRA_VERSION}.jar
ARG THUNDRA_BOOTSTRAP_FILE_URL=https://thundra-dist.s3-us-west-2.amazonaws.com/lambda-container-images/java/thundra-bootstrap
ARG THUNDRA_ENTRYPOINT_FILE_URL=https://thundra-dist.s3-us-west-2.amazonaws.com/lambda-container-images/java/thundra-entrypoint.sh
ARG THUNDRA_LAYER_DIR=/opt/thundra
# create task directory
RUN mkdir -p ${TASK_DIR}
# copy your function code to the task directory
COPY ${FUNCTION_CODE_PATH} ${TASK_DIR}
# create Thundra layer directory
RUN mkdir -p ${THUNDRA_LAYER_DIR}
# download Thundra layer jar file
RUN curl -f ${THUNDRA_LAMBDA_LAYER_JAR_URL} --output ${THUNDRA_LAYER_DIR}/thundra-layer.jar
# download Thundra bootstrap file
RUN curl -f ${THUNDRA_BOOTSTRAP_FILE_URL} --output ${THUNDRA_LAYER_DIR}/thundra-bootstrap
RUN chmod 755 ${THUNDRA_LAYER_DIR}/thundra-bootstrap
# download Thundra entrypoint file
RUN curl -f ${THUNDRA_ENTRYPOINT_FILE_URL} --output ${THUNDRA_LAYER_DIR}/thundra-entrypoint.sh
RUN chmod 755 ${THUNDRA_LAYER_DIR}/thundra-entrypoint.sh
# set ENTRYPOINT to Thundra's entrypoint script to enable Thundra integration
ENTRYPOINT ["/opt/thundra/thundra-entrypoint.sh"]
# Set the CMD to your handler (this could also be done as a parameter override outside of the Dockerfile, or overridden via the THUNDRA_AGENT_LAMBDA_HANDLER Lambda environment variable)
CMD ["io.thundra.container.sample.Handler"]
Build your image with the Docker CLI (the trailing dot sets the build context):
docker build -t <image-name> .

Step 2: Locally test your container image

As I highlighted previously, you can easily test your Lambda application locally before publishing it. The AWS Lambda Runtime Interface Emulator is executed when you run your image locally and maintains functional parity with the Lambda Runtime API in the cloud.

Run your Lambda application locally:
docker run -p 9000:8080 <image-name>
Invoke your local Lambda application:
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'

You should see your function’s expected response as the output of this request. You can also see the invocation logs in the console where you run the Docker image.

Step 3: Deploy the image to ECR

If you do not already have an ECR repository, you can create one using the ECR console or using the following AWS CLI command:
aws ecr create-repository --repository-name <image-name> --image-tag-mutability MUTABLE --image-scanning-configuration scanOnPush=true
Retrieve an authentication token and authenticate your Docker client to your registry:
aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <account-ID>.dkr.ecr.<region>.amazonaws.com
Tag your image using the docker tag command:
docker tag <image-name>:latest <account-ID>.dkr.ecr.<region>.amazonaws.com/<image-name>:latest
Push your image to ECR:
docker push <account-ID>.dkr.ecr.<region>.amazonaws.com/<image-name>:latest

Step 4: Create an AWS Lambda Function

Setup Permissions

Make sure that the function you’re about to create has the required permissions to access the ECR repository that contains the source image. You can achieve this in one of two ways:

OPTION A:

The IAM user or role creating the function must have the GetRepositoryPolicy and SetRepositoryPolicy permissions so that AWS Lambda can access your ECR repository. To do so, you can create a role with the policy below:

{
    "Version": "2012-10-17",
    "Statement": [
    {
        "Sid": "VisualEditor0",
        "Effect": "Allow",
        "Action": ["ecr:SetRepositoryPolicy", "ecr:GetRepositoryPolicy"],
        "Resource": "arn:aws:ecr:<region>:<account>:repository/<repo name>"
     }
    ]
}
OPTION B:

Alternatively, you can provide AWS Lambda with permissions to access the ECR repository by adding a resource policy granting the AWS Lambda service access:

{
    "Version": "2008-10-17",
    "Statement": [
      {
          "Sid": "Snapshot",
          "Effect": "Allow",
          "Principal": {
              "Service": "lambda.amazonaws.com"
          },
         "Action": [
              "ecr:BatchGetImage",
              "ecr:GetDownloadUrlForLayer"
         ]
      }
    ]
}
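If you prefer the CLI to the console, a resource policy like the one above can be attached with ECR’s set-repository-policy command. The repository name and policy file name below are placeholders for your own values; the snippet is a dry-run sketch:

```shell
# Placeholder repository name and policy file; replace with your own values.
REPO_NAME="<repo-name>"
POLICY_FILE="policy.json"

# Attach the resource policy to the repository. Shown as a dry run here;
# set DRY_RUN=0 in a shell with AWS credentials configured to actually apply it.
DRY_RUN=1
if [ "$DRY_RUN" -eq 0 ]; then
  aws ecr set-repository-policy \
    --repository-name "$REPO_NAME" \
    --policy-text "file://$POLICY_FILE"
else
  echo "Would apply $POLICY_FILE to repository $REPO_NAME"
fi
```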

Create a Function

Using AWS CLI
aws lambda create-function --region <region> \
--function-name <function-name> --package-type Image \
--code ImageUri=<ECR image URI> \
--role <service-role>
Using the AWS Lambda console

Go to the Lambda console function creation page, choose the Container Image option, and select your container image.

Set Environment Variables

You can set your environment variables as usual. In order to see your function data on the Thundra console, do not forget to set the THUNDRA_APIKEY environment variable with the value you get from the Thundra console.
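For example, you can set the key from the command line with the update-function-configuration command. The function name below is a placeholder, and the snippet is a dry-run sketch:

```shell
# Placeholder function name and API key; replace with your own values.
FUNCTION_NAME="<function-name>"
THUNDRA_APIKEY="<your-thundra-api-key>"

# Set the THUNDRA_APIKEY environment variable on the deployed function.
# Shown as a dry run; set DRY_RUN=0 in an authenticated shell to apply it.
DRY_RUN=1
if [ "$DRY_RUN" -eq 0 ]; then
  aws lambda update-function-configuration \
    --function-name "$FUNCTION_NAME" \
    --environment "Variables={THUNDRA_APIKEY=${THUNDRA_APIKEY}}"
else
  echo "Would set THUNDRA_APIKEY on ${FUNCTION_NAME}"
fi
```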

Step 5: Test your function

Invoke your function via the Lambda console or using the following AWS CLI command:

aws lambda invoke --function-name <function-name> --payload <function-input> response.json

Voila! Your function should execute successfully and you should be able to see the function invocation details on the Thundra console. You’ve just deployed a portable Docker image as a Lambda function and are able to monitor it with Thundra.

Wrapping up

Since the inception of Thundra, we have witnessed the perseverance of the AWS Lambda team in removing barriers to entry for the serverless movement. Developers who want to try serverless now have no excuse to skip it, because they can ship their applications to serverless and start seeing the advantages. Application teams can get an end-to-end understanding of their serverless applications by integrating Thundra into their images or by using our ready-to-use image (only for Java at the moment). Our solution significantly reduces the time to identify and resolve issues from development to post-production by taking advantage of integrated distributed tracing and offline debugging.