
Posted in Microservices, December 2020

Monitoring Microservices on AWS with Thundra: Part II

Written by Serkan Özal

Founder and CTO of Thundra


This is part two of a three-part article series about monitoring microservices on AWS with Thundra. In the first part, we talked about monitoring a serverless architecture based on API Gateway, Lambda, S3, and SES.

In that article, we built a system that passed objects between Lambda functions via S3 buckets and sent an email after a computation task had completed. First, we used CloudWatch to debug our serverless system, and then we integrated the Thundra monitoring service into it. This allowed us to see the benefits Thundra offers over plain CloudWatch.

In part two of this series, we will re-implement that system with another technology commonly used to build microservices: Kubernetes (K8s) in tandem with Amazon Elastic Kubernetes Service (EKS). K8s is an open-source container orchestrator, and EKS is AWS's managed K8s offering.

The Thundra monitoring service was originally built to monitor serverless systems based on AWS Lambda, but its newest version supports JavaScript applications created with Node.js and the Express framework. This allows you to monitor containers inside a K8s cluster individually with just a few lines of code.

Sample Project

The second sample project is a factorial calculator. This is the same as the example in the first part of this series, but this time it’s based on K8s and Express. It receives a number and an email address via HTTP, calculates the factorial of that number asynchronously, and sends the result to the email address when it's finished.

Heavy computation EKS architecture

Figure 1: Heavy computation EKS architecture

Figure 1 shows the architecture diagram. The EKS cluster contains four Node.js services, a load balancer to access the backend service from the internet, and an email API based on AWS SES.

The users will call the backend container via the load balancer. The backend container will then call the calculator service and the error service.

The error service will always fail: it crashes whenever it receives a request. As a result, we will also see an error in the backend service, because the error service never sends a response.
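The crash boils down to a null dereference. A minimal sketch of the failure mode (the function name is illustrative; the real code ships inside the pre-built error container image):

```javascript
// Illustrative sketch of the error service's failure mode: reading a
// property of null throws a TypeError. On older Node.js versions the
// message reads "Cannot read property 'a' of null", which is the error
// that later shows up in Thundra's transactions view.
function crash() {
  const broken = null;
  return broken.a; // throws TypeError
}
```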

The calculator service will compute the factorial and send the result to the email service, which uses the AWS SES email API to deliver it to an address of your choice.
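The factorial computation itself can be sketched as follows. The actual implementation lives in lib/containers/calculator, so treat this as an approximation; it uses BigInt because factorials outgrow JavaScript's safe integer range very quickly:

```javascript
// Compute n! with BigInt so larger inputs don't overflow Number's
// 53-bit safe integer range (a sketch, not the container's exact code).
function factorial(n) {
  let result = 1n;
  for (let i = 2n; i <= BigInt(n); i++) {
    result *= i;
  }
  return result;
}
```

For example, factorial(10) returns 3628800n.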

Again, AWS CDK is used as an Infrastructure as Code (IaC) framework to deploy the system because it automatically deploys supporting AWS resources needed to use the EKS cluster.


The following prerequisites are required to install this project:

  • An AWS account
  • A Thundra account connected to your AWS account
  • Node.js
  • NPM

Installing the Sample Project

Clone the Git repository to your local machine:

$ git clone https://github.com/thundra-io/heavy-computation-eks

Install dependencies with the following command:

$ npm i

The next step is to replace the environment variables inside lib/env.vars.json.

<API_KEY> needs to be replaced with your Thundra API key.

<REGION> has to be replaced with the AWS region you want to deploy your cluster to, so the right SMTP server can be found.

To get the values for <SMTP_USER>, <SMTP_PASSWORD>, and <SOURCE_EMAIL>, you have to generate SMTP credentials in the AWS SES console and add an email address to your verified email list.

Note: For security reasons, Amazon SES is in sandbox mode by default, which only allows you to send emails to verified addresses. Sandbox mode can only be deactivated by writing a support request to AWS.
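Put together, lib/env.vars.json should end up looking something like the sketch below. All values are placeholders, and the exact key names may differ in the repository, so check the file you cloned rather than copying this verbatim:

```json
{
  "THUNDRA_APIKEY": "<API_KEY>",
  "REGION": "<REGION>",
  "SMTP_USER": "<SMTP_USER>",
  "SMTP_PASSWORD": "<SMTP_PASSWORD>",
  "SOURCE_EMAIL": "<SOURCE_EMAIL>"
}
```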

The cluster deployment can take up to 30 minutes and can be done with the following commands:

$ npm run bootstrap
$ npm run deploy

Thundra Integration

The example has changed a bit to conform more with the Kubernetes-based approach, but the general idea is the same.

Since K8s requires container images that can be pulled from a container registry, this example uses images uploaded to Docker Hub. The sample repository you cloned contains the code used inside these pre-built images: under lib/containers are the backend, calculator, and email directories with the Dockerfiles and JavaScript code needed to build the container images.

Let’s look at the JavaScript code inside lib/containers/backend/index.js:

const axios = require("axios");
const bodyParser = require("body-parser");
const express = require("express");
const thundra = require("@thundra/core");

const app = express();
app.use(bodyParser.json());
app.use(thundra.expressMW()); // report incoming requests to Thundra

app.post("/", async ({ body: { email, number } }, response) => {
  // Answer the caller immediately, then fan out to the other services.
  response.end(JSON.stringify({ email, number }));
  // Service hostnames resolve via the cluster's K8s Services.
  await axios.post("http://calculator:8000/", { email, number }, { timeout: 3000 });
  await axios.post("http://error:8000/", { email, number }, { timeout: 3000 });
});

app.listen(8000);

The backend is a simple Node.js application. It uses just four packages:

  • The Axios HTTP request library, to call the other containers
  • The Express framework, together with the body-parser middleware, to set up an HTTP API and accept POST requests
  • The Thundra SDK, to connect to the Thundra service for monitoring

Effectively, installing the current version of the @thundra/core package and registering its middleware is everything that's needed to get an Express app monitored by Thundra.

The @thundra/core package comes with an expressMW method. A call to it will create an Express middleware that lets Thundra track the traffic that goes into your Express API, and in turn, the Node.js container this API runs on.

The whole configuration of this middleware happens via environment variables. There are two of these variables that need to be present when the Node.js container starts.

  1. THUNDRA_APIKEY: to authenticate the container against the Thundra monitoring service.
  2. THUNDRA_AGENT_APPLICATION_NAME: to tell the Thundra monitoring service which application will be sending data.

These variables will be loaded from the lib/env.vars.json file and added to the container when Kubernetes starts it on AWS EKS.
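Inside the cluster, this translates to ordinary container environment variables. A sketch of how they might appear in the generated Deployment spec follows; only the two env entries are significant, and the deployment, label, and image names are illustrative, not taken from the project:

```yaml
# Illustrative Deployment fragment; everything except the env entries
# is a placeholder.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: example/heavy-computation-backend # illustrative image name
          env:
            - name: THUNDRA_APIKEY
              value: "<API_KEY>"
            - name: THUNDRA_AGENT_APPLICATION_NAME
              value: "backend"
```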

The service will log requests to the Express endpoints. This means we have to send requests to see anything happen in the Thundra console.


Using the Sample Project

Once you’ve deployed the project, the CDK CLI will display the URL to your load balancer service, which then points to the backend microservice. You can use this hostname to send a sample request to it with cURL:

$ curl --header "Content-Type: application/json" \
  --request POST \
  --data '{"email":"<EMAIL>","number": 10}' \
  http://<LOAD_BALANCER_HOSTNAME>/

Replace <EMAIL> with the email you verified for SES, and <LOAD_BALANCER_HOSTNAME> with the hostname the CDK CLI displayed after deployment. If your email was correctly verified, you should have an email in your inbox informing you about the finished calculation.
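The same request can also be sent from Node.js. Here is a small sketch using Node 18+'s built-in fetch; buildPayload and requestFactorial are illustrative helper names, not part of the project:

```javascript
// Build the JSON body the backend expects: { email, number }.
function buildPayload(email, number) {
  return JSON.stringify({ email, number });
}

// POST the payload to the load balancer hostname printed by the CDK CLI.
async function requestFactorial(baseUrl, email, number) {
  const res = await fetch(baseUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: buildPayload(email, number),
  });
  return res.json(); // the backend echoes { email, number } back
}
```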

Thundra Monitoring Insights

If you open the “Architecture” link from the Thundra console menu, you can see the automatically generated architecture diagram, as shown in Figure 2.

Thundra architecture diagram

Figure 2: Thundra architecture diagram

All the Express-based services are represented by “ex”-nodes connected with HTTP requests. At the top is the load balancer, which forms your application’s entry point—AWS automatically generates its hostname. The graph splits at the backend service because it calls the calculator service and the error service.

Kubernetes container monitoring can be found inside the Thundra console, under the "Applications" link. See Figure 3 for reference.

Thundra console menu

Figure 3: Thundra console menu

Figure 4 shows an overview of the running services, with information like error rate and latency.

Thundra application overview

Figure 4: Thundra application overview

If you click on the error app in the list on the right, you will find some more detailed information about the error service. This includes information like request count or latency distribution, as seen in Figure 5.

Thundra application details

Figure 5: Thundra application details

The error service will always crash when it gets a request; if we click the “Transactions” button on the top left, we can see the actual error that led to the crash. Let’s look at Figure 6.

Error service transactions

Figure 6: Error service transactions

As you can see, the error is displayed in the list: “Cannot read property ‘a’ of null.” In Figure 7, we see the same Thundra transactions view for the backend service.

Backend service transactions

Figure 7: Backend service transactions

This time the error was a timeout. The backend service sent a request to the error service, but it didn’t get a reply because the error service crashed.

Monitoring AWS EKS with Thundra

With just a few lines of code, Thundra delivers many important metrics about your Kubernetes-based services right out of the box. The error reporting keeps you up to date on your service’s stability, and the architecture diagram shows you if all services are linked up correctly.

With this new feature, it’s now possible to use Thundra on non-serverless apps. This enables you to get full insight into services, independent of the technology that backs them—be it serverless functions or Kubernetes-based containers.

In the next article, we will build the same system with AWS ECS and see that things aren’t much different from a container’s perspective.