
Posted March 2023 in Observability

How to Provision Serverless Resources with Terraform by HashiCorp

Written by Ismail Egilmez

Business Development Manager @Thundra


As more and more companies migrate their complex applications to the cloud, the need to deploy cloud infrastructure at scale is also increasing. Enterprises can no longer scale their deployments if they provision infrastructure manually via a cloud dashboard or CLI tools.

This is where tools like HashiCorp’s Terraform come into the picture. Terraform lets developers provision cloud infrastructure components declaratively as code, allowing them to apply the same DevOps principles to infrastructure code as they use for application code.

Terraform code produces the same infrastructure components irrespective of the environment it’s run in. Furthermore, its idempotent and declarative properties make it easy to provision consistent and immutable infrastructure for repeatable builds with no environmental drift. Idempotence means that no matter which state you start your deployment in, you will always end up with the same final state. A declarative approach means you tell Terraform what the infrastructure should look like, and it will take care of how to achieve that. 
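Idempotence is easy to see in practice: running the same apply twice changes nothing the second time. A quick sketch (commands only; the second run's message is paraphrased from Terraform's usual output):

```shell
terraform apply   # first run: creates the declared resources
terraform apply   # second run: no changes, the state already matches the configuration
```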

Terraform additionally supports plugins to manage infrastructure on multiple cloud vendors, while its remote state management allows it to manage the state collaboratively amongst multiple developers.

In this article, we’ll see how to provision serverless AWS Lambda and its associated infrastructure components using Terraform.



Before getting started, you’ll need an AWS account, as well as aws-cli and terraform tools.

To install aws-cli, follow the instructions in the official documentation. Also, set up your access key.

Next, authenticate using the command below:

aws configure

You’ll be prompted to enter the aws_access_key_id and aws_secret_access_key, which you got in the previous step.

Next, refer to Terraform’s installation instructions, depending on the operating system you’re running. This demo uses Terraform v1.1.9.

Deploy AWS Lambda

Provider Setup

First, create a folder for all of your Terraform files. Let’s call it aws-thundra-lambda-demo.

Start by setting up the plugins.tf file to initialize the AWS provider:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.4"
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 3.1.0"
    }
    archive = {
      source  = "hashicorp/archive"
      version = "~> 2.2.0"
    }
  }
}

provider "aws" {
  profile = "default"
  region  = "eu-central-1"
}

Here, you’re declaring dependencies on the AWS, random, and archive providers and configuring the AWS provider. Once you set the region at the provider level, you don’t need to specify it for individual resources.

State Bucket and Remote State

Next, create a state-bucket.tf file and an S3 bucket to store the state:

resource "aws_s3_bucket" "state_bucket" {
  bucket = "thundra-state-bucket"
}

resource "aws_s3_bucket_acl" "state_bucket_acl" {
  bucket = aws_s3_bucket.state_bucket.id
  acl    = "private"
}

Note: Bucket names in AWS must be globally unique, so don’t reuse the name from this demo.

Now you’re ready to run the following command to initialize Terraform:

terraform init

This will download the plugins and initialize the state:

Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Next, run terraform plan to see what components Terraform will create. This is a dry run: Terraform checks the current state from the state file and figures out what changes it needs to make to reach the desired state:

terraform plan -out planfile
Plan: 2 to add, 0 to change, 0 to destroy.

If you apply the changes, the aws_s3_bucket and aws_s3_bucket_acl resources will be created.

To apply the changes, use this command:

terraform apply planfile

This will apply the planfile by creating the resources:

aws_s3_bucket.state_bucket: Creating...
aws_s3_bucket.state_bucket: Creation complete after 5s [id=thundra-state-bucket]
aws_s3_bucket_acl.state_bucket_acl: Creating...
aws_s3_bucket_acl.state_bucket_acl: Creation complete after 1s [id=thundra-state-bucket,private]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

Terraform has now created a state file called terraform.tfstate in your local directory. However, storing state on a local machine is not recommended because if you run your code from another machine, Terraform will try to recreate all the existing components, which will result in an error.

Instead, you should use Terraform’s remote state. This will store the state in the S3 bucket, which will allow it to be available even if you run your code from another machine in the future.

You can use the bucket you created in the previous step to store the state remotely.

First, create a state.tf file:

terraform {
  backend "s3" {
    bucket = "thundra-state-bucket"
    key    = "demo/state"
    region = "eu-central-1"
  }
}

Make sure the region in state.tf and plugins.tf is consistent.

Next, run terraform init again. You’ll be asked whether to copy the existing local state to the bucket:

Initializing the backend...
Do you want to copy existing state to the new backend?
  Pre-existing state was found while migrating the previous "local" backend to the
  newly configured "s3" backend. No existing state was found in the newly
  configured "s3" backend. Do you want to copy this state to the new "s3"
  backend? Enter "yes" to copy and "no" to start with an empty state.

  Enter a value: yes

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
- Using previously-installed hashicorp/aws v4.10.0
Terraform has been successfully initialized!

Lambda Code Setup

Now, you’re ready to deploy your Lambda function.

Start by creating resources for the Lambda function, then add an AWS-managed API Gateway to access the Lambda from the internet.

For the body of the function, make a new folder alongside your Terraform files called hello-lambda.

Inside the folder, create a hello.js file, which will contain the code for your Lambda function:

module.exports.handler = async (event) => {
  console.log('Event: ', event);
  let responseMessage = 'Hello, World!';

  if (event.queryStringParameters && event.queryStringParameters['Name']) {
    responseMessage = 'Hello, ' + event.queryStringParameters['Name'] + '!';
  }

  return {
    statusCode: 200,
    headers: {
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      message: responseMessage,
    }),
  };
};

Lambda Bucket

Next, you need another bucket to store your archived code. So, create a Terraform resource bucket.tf file:

resource "random_integer" "random" {
  min = 1
  max = 50000
}

resource "aws_s3_bucket" "lambda_bucket" {
  bucket        = "hello-lambda-${random_integer.random.id}"
  force_destroy = true
}

resource "aws_s3_bucket_acl" "lambda_bucket_acl" {
  bucket = aws_s3_bucket.lambda_bucket.id
  acl    = "private"
}

The random suffix added to your bucket name ensures you can run this example without any changes.

Lambda Archive and S3 Object

Next, we’ll create the archive file with the code and put it into the newly created bucket. This archive file will be used while creating the Lambda function in the next step:

data "archive_file" "lambda_hello_lambda" {
  type = "zip"

  source_dir  = "${path.module}/hello-lambda"
  output_path = "${path.module}/hello-lambda.zip"
}

resource "aws_s3_object" "lambda_hello_lambda" {
  bucket = aws_s3_bucket.lambda_bucket.id

  key    = "hello-lambda.zip"
  source = data.archive_file.lambda_hello_lambda.output_path

  etag = filemd5(data.archive_file.lambda_hello_lambda.output_path)
}

This will create the archive hello-lambda.zip out of the hello-lambda folder and put the archive in the bucket that you created in the previous step.

Lambda Function

For the Lambda function itself, create a lambda.tf file:

resource "aws_lambda_function" "hello_world" {
  function_name = "HelloWorld"

  s3_bucket = aws_s3_bucket.lambda_bucket.id
  s3_key    = aws_s3_object.lambda_hello_lambda.key

  runtime = "nodejs12.x"
  handler = "hello.handler"

  source_code_hash = data.archive_file.lambda_hello_lambda.output_base64sha256

  role = aws_iam_role.lambda_exec.arn
}

resource "aws_cloudwatch_log_group" "hello_world" {
  name = "/aws/lambda/${aws_lambda_function.hello_world.function_name}"

  retention_in_days = 30
}

resource "aws_iam_role" "lambda_exec" {
  name = "serverless_lambda"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Sid    = ""
      Principal = {
        Service = "lambda.amazonaws.com"
      }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "lambda_policy" {
  role       = aws_iam_role.lambda_exec.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}

So, you’ve now created a Lambda function using the archive from the previous step, as well as an AWS CloudWatch log group where the logs from your Lambda function will be written. You also created an IAM role for the function and attached the AWS-managed AWSLambdaBasicExecutionRole policy, which allows the Lambda function to write logs to CloudWatch.

Next, let’s add an output variable to output the name of the function created. Create an output.tf file:

output "function_name" {
  description = "Name of the Lambda function."

  value = aws_lambda_function.hello_world.function_name
}

Now you’re ready to deploy your resources. Let’s do a dry run using the terraform plan command. Once you confirm everything is configured properly, deploy the changes using terraform apply.
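Concretely, the two commands look like this (planfile is the plan output file name used earlier in this demo):

```shell
terraform plan -out planfile   # dry run: compute and save the execution plan
terraform apply planfile       # apply exactly the saved plan
```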

You should see the name of your function in the output variables:

function_name = "HelloWorld"

You can check in the AWS Console to confirm the function is deployed.

To test the function, use either the web console or the AWS CLI. Here, we’ll use the CLI:

aws lambda invoke --region=eu-central-1 --function-name=HelloWorld response.json
{
    "StatusCode": 200,
    "ExecutedVersion": "$LATEST"
}

Inspect the content of response.json to see the output returned by your function. You should see the following:

{"statusCode":200,"headers":{"Content-Type":"application/json"},"body":"{\"message\":\"Hello, World!\"}"}
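Note that with the proxy-style response format, the body field is itself a JSON-encoded string, so a consumer has to parse it a second time. A minimal sketch of decoding the response above:

```javascript
// The raw invocation result: the outer object is JSON, and its
// "body" field is a JSON string that must be parsed a second time.
const raw = '{"statusCode":200,"headers":{"Content-Type":"application/json"},"body":"{\\"message\\":\\"Hello, World!\\"}"}';

const response = JSON.parse(raw);               // first parse: the envelope
const payload = JSON.parse(response.body);      // second parse: the nested body

console.log(payload.message); // → Hello, World!
```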

This confirms that your function is set up correctly and working properly. The only missing item is that this function doesn’t have a trigger.

API Gateway

In this section, you’ll add an API Gateway as a trigger to your function so that it can be invoked on REST requests.

First, add a gateway.tf file:

resource "aws_apigatewayv2_api" "lambda" {
  name          = "serverless_lambda_gw"
  protocol_type = "HTTP"
}

resource "aws_apigatewayv2_stage" "lambda" {
  api_id = aws_apigatewayv2_api.lambda.id

  name        = "serverless_lambda_stage"
  auto_deploy = true

  access_log_settings {
    destination_arn = aws_cloudwatch_log_group.api_gw.arn

    format = jsonencode({
      requestId               = "$context.requestId"
      sourceIp                = "$context.identity.sourceIp"
      requestTime             = "$context.requestTime"
      protocol                = "$context.protocol"
      httpMethod              = "$context.httpMethod"
      resourcePath            = "$context.resourcePath"
      routeKey                = "$context.routeKey"
      status                  = "$context.status"
      responseLength          = "$context.responseLength"
      integrationErrorMessage = "$context.integrationErrorMessage"
    })
  }
}

resource "aws_apigatewayv2_integration" "hello_world" {
  api_id = aws_apigatewayv2_api.lambda.id

  integration_uri    = aws_lambda_function.hello_world.invoke_arn
  integration_type   = "AWS_PROXY"
  integration_method = "POST"
}

resource "aws_apigatewayv2_route" "hello_world" {
  api_id = aws_apigatewayv2_api.lambda.id

  route_key = "GET /hello"
  target    = "integrations/${aws_apigatewayv2_integration.hello_world.id}"
}

resource "aws_cloudwatch_log_group" "api_gw" {
  name = "/aws/api_gw/${aws_apigatewayv2_api.lambda.name}"

  retention_in_days = 30
}

resource "aws_lambda_permission" "api_gw" {
  statement_id  = "AllowExecutionFromAPIGateway"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.hello_world.function_name
  principal     = "apigateway.amazonaws.com"

  source_arn = "${aws_apigatewayv2_api.lambda.execution_arn}/*/*"
}

With these resources, you’ve deployed an API Gateway with a stage, route, and Lambda integration, a Lambda permission that allows invocation from the gateway, and a CloudWatch log group for access logs.

Let’s also add one more output variable to the output.tf file. This will give you the base URL for the API gateway that you just deployed:

output "base_url" {
  description = "Base URL for API Gateway stage."

  value = aws_apigatewayv2_stage.lambda.invoke_url
}

You already have code in place to take input from query parameters. Once you deploy these resources, you’ll be able to invoke your function via the HTTP URL.

Now, apply the changes via terraform apply. Once successfully applied, you should see the output variable base_url.

To see if your trigger is working correctly, make a curl request to {base_url}/hello, substituting base_url with the value from your output:

curl base_url/hello

This should return your json encoded hello message: 

{"message":"Hello, World!"}

You can also pass the Name query parameter in the URL to customize the name in the returned message:

curl "base_url/hello?Name=Terraform"

This should return a customized response. Note that the Name parameter is case-sensitive:

{"message":"Hello, Terraform!"}

Thundra APM

So far, you’ve successfully deployed a serverless Lambda function using Terraform. In this section, we’ll take a slight detour to show how to integrate Thundra APM into your serverless Lambda function with minimal changes.

Adding Thundra APM gives you many added functionalities, such as metrics collection, logging, tracing, time travel debugging, and more. You will quickly see how to configure Thundra APM to enable all of these.

Without further ado, let's get started.

First, you need to install the Thundra APM Node.js package. Go to the hello-lambda directory and use npm to install the package:

npm install @thundra/core --save

Next, modify your Node.js code and wrap your handler in the thundra method to initialize the SDK:

const thundra = require("@thundra/core")();

module.exports.handler = thundra((event, context, callback) => {
  console.log('Event: ', event);
  let responseMessage = 'Hello, World!';
  if (event.queryStringParameters && event.queryStringParameters['Name']) {
    responseMessage = 'Hello, ' + event.queryStringParameters['Name'] + '!';
  }
  callback(null, {
    statusCode: 200,
    headers: {
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      message: responseMessage,
    }),
  });
});

You also need to pass the Thundra API Key to your Lambda function by setting the thundra_apikey environment variable.

So first, create a variable file called variables.tf:

variable "thundra_apikey" {
  description = "API key for Thundra APM"
  type        = string
}

You will also need to modify the Terraform resource for the Lambda function in the lambda.tf file to set up the environment variable:

resource "aws_lambda_function" "hello_world" {
  function_name = "HelloWorld"

  s3_bucket = aws_s3_bucket.lambda_bucket.id
  s3_key    = aws_s3_object.lambda_hello_lambda.key

  runtime = "nodejs12.x"
  handler = "hello.handler"

  source_code_hash = data.archive_file.lambda_hello_lambda.output_base64sha256

  role = aws_iam_role.lambda_exec.arn

  environment {
    variables = {
      THUNDRA_APIKEY = var.thundra_apikey
    }
  }
}


While applying the changes, you’ll be prompted for the value of the thundra_apikey variable. For the purposes of this demo, you can pass the value from the CLI; in a CI/CD pipeline, you would fetch it from secret variables instead.
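For example, the variable can be supplied directly on the command line (the key below is a placeholder; substitute your own):

```shell
# Pass the Thundra API key as a Terraform variable at apply time.
# "YOUR_THUNDRA_API_KEY" is a hypothetical placeholder value.
terraform apply -var="thundra_apikey=YOUR_THUNDRA_API_KEY"
```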

Once you deploy the changes and invoke the function, you’ll start seeing invocations in your Thundra APM console.

Without any additional configuration, Thundra APM gives you details on latency, status, and cold starts.

As another example, to enable metrics for your Lambda function, set the environment variable thundra_agent_lambda_metric_disable to false in your lambda.tf file.
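The environment block in lambda.tf would then carry both variables. This is a sketch; the metric variable name follows the lowercase convention used above, and the string value "false" is an assumption about how the agent reads it:

```hcl
  environment {
    variables = {
      THUNDRA_APIKEY                      = var.thundra_apikey
      thundra_agent_lambda_metric_disable = "false"
    }
  }
```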

Once configured, you’ll start seeing memory and CPU usage with each invocation.

There are many more features that you can configure via environment variables. You can read more about the various options in Thundra’s documentation.


Terraform is instrumental for scalable, repeatable, and immutable infrastructure. It allows you to declaratively deploy your infrastructure, while its idempotence property makes sure that your infrastructure is always in a consistent final state.

Meanwhile, Terraform’s remote state allows you to collaborate among multiple developers, and since your infrastructure is now code, you can apply the same rigorous testing and DevOps principles that you apply to your application code.

Terraform additionally supports all major public cloud providers (AWS, GCP, and Azure) and boasts a vibrant community of developers as well.