Profiling database and API calls in Alexa Skills

Apr 2, 2019


This is a guest blog post by Thorsten Hoeger, CEO and cloud consultant at Taimos. He is also an AWS Community Hero. As a supporter of open source software, Thorsten maintains or contributes to several projects on GitHub, like test frameworks for AWS Lambda and Amazon Alexa, and developer tools for AWS CloudFormation.

With Amazon’s voice assistant Alexa, it is possible to write your own features that provide custom functionality to your users. These additional features are called skills, and there are tens of thousands of them available in the Alexa Skill Store.

For implementing Alexa Skills, you need a backend that receives JSON requests from the Alexa Voice Service (AVS) containing the user’s input and context, and then responds with a JSON document that tells AVS what to do and what to say to the user. These backends have to be extremely scalable and fast to provide a good user experience, as users have very little tolerance for waiting on a skill’s response. AWS Lambda is a perfect fit for this use case: it scales well, you only pay per request, and you do not have to operate a complex infrastructure. To store customer information, you also need a database. As the query pattern is purely by primary key (the user’s id), DynamoDB is the database of choice for this purpose. With DynamoDB’s new On-Demand mode, you only pay per request there, too.

To make sure your skill behaves correctly and answers fast, you should monitor each invocation of your skill function and trace its calls to databases and third parties. This is where Thundra enters the stage.

In the following blog post, I will show you how to implement a simple skill that greets the user by name and counts the user's visits. We will use the Alexa Skills Kit for TypeScript and instrument the code using the Lambda layer provided by Thundra.

We will then look into the calls made by our Lambda implementation and look for potential improvements to the speed of our backend.

Implementing the Skill

For the sake of simplicity, our skill only answers the so-called LaunchRequest, which is triggered when you open a skill without any further utterance, as in "Alexa, open SuperSkill". Designing a voice user interface (VUI) is a relatively complex task that would deserve a blog post of its own, so we are keeping things simple for our test.

You can find the sample code in a repository on my GitHub profile to play around with. For the VUI, you can use the default model generated when creating a new skill in the Amazon Developer Console, as we are not using custom intents.

Our implementation resides in the lib/index.ts file.

Let's take a more in-depth look at the parts of this code and see what they are doing.

export const handler: LambdaHandler = SkillBuilders.custom()
    .withPersistenceAdapter(new DynamoDbPersistenceAdapter({ tableName: env.TABLE_NAME }))
    .withApiClient(new DefaultApiClient())
    .addResponseInterceptors(new PersistAttributesInterceptor())
    .addRequestHandlers(
        new LaunchRequestHandler(),
    )
    .lambda();

At the bottom of the file, we export our handler function so that AWS Lambda knows what to call. The SkillBuilders factory provided by ASK helps us configure it correctly. We provide the name of a DynamoDB table via an environment variable (TABLE_NAME) and a DefaultApiClient to access the Profile API to retrieve user information.
To make sure that our information is stored in DynamoDB after each invocation, we provide a ResponseInterceptor coming from an add-on library I have written called ask-sdk-addon.
Last but not least, we tell ASK to use the LaunchRequestHandler for incoming requests.

const attributes = await handlerInput.attributesManager.getPersistentAttributes();
attributes.visitCount = attributes.visitCount ? attributes.visitCount + 1 : 1;

This LaunchRequestHandler retrieves the current number of visits from the database and increments it by one. It then marks the changed data to be written back to the database by the PersistAttributesInterceptor after completion of the request.
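To illustrate the pattern, here is a minimal, self-contained sketch of such a request handler. The types are simplified stand-ins (the real handler implements the ASK SDK's RequestHandler interface against a HandlerInput, and returns a speech response rather than a number):

```typescript
// Simplified stand-in for the ASK SDK's HandlerInput (assumption for this sketch)
interface HandlerInput {
    requestType: string;
    attributes: { visitCount?: number };
}

class LaunchRequestHandler {
    // Only respond to LaunchRequests ("Alexa, open SuperSkill")
    canHandle(input: HandlerInput): boolean {
        return input.requestType === 'LaunchRequest';
    }

    // Increment the persisted visit counter, starting at 1 on the first visit.
    // Returns the new count here for illustration; the real handler builds
    // a speech response via handlerInput.responseBuilder.
    handle(input: HandlerInput): number {
        input.attributes.visitCount = input.attributes.visitCount
            ? input.attributes.visitCount + 1
            : 1;
        return input.attributes.visitCount;
    }
}
```

The same increment expression appears in the real code above; the handler class merely wraps it in the canHandle/handle contract that ASK uses to dispatch requests.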

const upsServiceClient = handlerInput.serviceClientFactory.getUpsServiceClient();
try {
    const name = await upsServiceClient.getProfileName();
    const speechText = `Hello, ${name}. This is your visit number ${attributes.visitCount}`;
    return handlerInput.responseBuilder.speak(speechText).getResponse();
} catch (e) {
    return handlerInput.responseBuilder.speak('Hello, world! I am not allowed to view your profile.').getResponse();
}

After that, the function calls the Profile API to get the user's name. If this succeeds, the response is generated and sent to the Alexa Voice Service. If the user did not grant permission to access personal information, we handle that case as well.


Packaging with tsc and claudia

After implementing the skill, we need to create a packaged artifact we can upload to AWS Lambda. There are several ways to do this; I will show you my now-preferred way, arrived at after discussing the topic with the serverless community.

First of all, we have to transpile our TypeScript code into plain JavaScript. This is done using the TypeScript compiler tsc. We configure it within the file tsconfig.json and tell it to write the transpiled files into the dist folder. Calling tsc in the source folder then completes this first step. 
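A minimal tsconfig.json for this setup might look like the following (illustrative; the exact compiler options in the repository may differ, and outDir pointing at dist is the relevant part):

```json
{
  "compilerOptions": {
    "target": "es2017",
    "module": "commonjs",
    "outDir": "dist",
    "strict": true,
    "esModuleInterop": true
  },
  "include": ["lib/**/*"]
}
```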

The created files and all needed dependencies, like ASK and the add-on library, are then packaged into a ZIP file. To prevent development dependencies from ending up in this ZIP, we use Claudia, a serverless deployment tool, to create the package. Claudia provides a pack command which packages your source code together with only the production dependencies.

The complete call to transpile and package your code is rm -rf dist/ && tsc && claudia pack --force --output dist/, and we configure it as an npm script called build.
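In package.json, that script entry looks like this:

```json
{
  "scripts": {
    "build": "rm -rf dist/ && tsc && claudia pack --force --output dist/"
  }
}
```

Running npm run build then produces the deployable ZIP in the dist folder in a single step.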

Deployment using SAM and Thundra layer

The next step is then to deploy this code to AWS Lambda and to instrument it with the Thundra agent. For the deployment, I am using the AWS Serverless Application Model (SAM) that provides an easy way to deploy serverless infrastructures using AWS CloudFormation.

The file template.yaml is specifying everything we need.

  AttributesTable:
    Type: AWS::Serverless::SimpleTable
    Properties:
      PrimaryKey:
        Name: id
        Type: String

At first, it creates a DynamoDB table with id as the primary key. This table will then be used as the storage backend to count user visits.

  SkillFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: ./dist/
      Handler: dist/index.handler
      Environment:
        Variables:
          TABLE_NAME: !Ref AttributesTable
          thundra_apiKey: <your api key here>
      Runtime: provided
      Layers:
        - !Sub 'arn:aws:lambda:${AWS::Region}:269863060030:layer:thundra-lambda-node-layer:9'
      Policies:
        - Statement:
            - Action:
                - dynamodb:Get*
                - dynamodb:PutItem
                - dynamodb:UpdateItem
              Effect: Allow
              Resource: !Sub arn:aws:dynamodb:${AWS::Region}:${AWS::AccountId}:table/${AttributesTable}
      Timeout: 10
      Events:
        Alexa:
          Type: AlexaSkill

Next, the template defines an AWS Lambda function that uses the created ZIP file as its code source and declares the handler function in dist/index.js as the entry point to our skill backend. To access the correct database, we provide the environment variable TABLE_NAME, filled with the dynamic name of the table created before. Additionally, we provide our API key for Thundra. By using the Lambda layer provided by Thundra and specifying the runtime as 'provided', we can instrument our code without requiring Thundra as a dependency within our code; it just happens automatically. Our function then needs permission to read and write our DynamoDB table, and it has to be wired to the Alexa Voice Service by configuring the event type. As Alexa no longer accepts answers after 7 seconds, we can safely time out the Lambda function after 10 seconds.

By running aws cloudformation package and aws cloudformation deploy we can then deploy this infrastructure into our AWS account.



The Amazon Resource Name (ARN) of the created Lambda function is then put into the endpoint configuration in the Skill developer console.


After activating the test mode for this skill, we can invoke it using an Echo device or the mobile companion app. We should be greeted with our name and the number of invocations we did.


Understanding the Thundra results

After several invocations, we can head over to the Thundra dashboard and look at the data. Thundra provides a comprehensive list of my serverless functions. You can filter and sort your functions as needed; for example, you can write Runtime=node ORDER BY AVG(Duration) DESC to see your Node.js functions sorted by their average duration.


We see our function with the name CloudFormation generated for us, starting with the stack name thundra-demo, and when we click on it, we see a list of invocations.
We can now drill down to see the calls our Lambda function makes to the DynamoDB table and the Profile API.



As we can see, the calls to the Profile API take some time, and after the first successful call they are more or less unnecessary, as the name changes very infrequently. To improve the response time, we can store the name in our DynamoDB table and, for subsequent invocations, only call the API every few days or weeks.
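The caching logic could look roughly like this. This is a hypothetical sketch: resolveName, the TTL, and the attribute names are illustrative, not the exact code from the repository, and the fetch callback stands in for upsServiceClient.getProfileName():

```typescript
// How long a cached name stays valid before we call the Profile API again
// (one week here; pick whatever staleness you can tolerate).
const NAME_TTL_MS = 7 * 24 * 60 * 60 * 1000;

// Shape of the persistent attributes stored in DynamoDB (assumption)
interface Attributes {
    visitCount?: number;
    profileName?: string;
    profileNameFetchedAt?: number;
}

// Return the cached name if it is still fresh; otherwise call the API
// once and remember the result in the attributes for the next invocation.
async function resolveName(
    attributes: Attributes,
    fetchProfileName: () => Promise<string>,
    now: number = Date.now(),
): Promise<string> {
    const fresh = attributes.profileName !== undefined
        && attributes.profileNameFetchedAt !== undefined
        && now - attributes.profileNameFetchedAt < NAME_TTL_MS;
    if (fresh) {
        return attributes.profileName!; // cache hit: no HTTP call needed
    }
    const name = await fetchProfileName(); // cache miss: one API call
    attributes.profileName = name;
    attributes.profileNameFetchedAt = now;
    return name;
}
```

Because the attributes are written back by the PersistAttributesInterceptor anyway, caching the name this way costs no extra DynamoDB request.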

By changing the Handler in the SAM template to dist/index2.handler we can use the prepared alternative implementation and redeploy using aws cloudformation package and aws cloudformation deploy.

After several new invocations, we can look at the Thundra dashboard again, and we will see that the calls get faster thanks to the cached profile data.


When comparing the architecture views we can see that for recurring calls, the HTTP call is no longer needed and it only hits the DynamoDB table. In order to keep an eye on how the caching affected the performance, we can check the interaction between the Lambda function and the DynamoDB table by clicking the edge between them in Thundra’s architecture view.



In this blog post, we wrote, packaged, and deployed an Alexa Skill and used Thundra to find performance issues. We then fixed them and used Thundra again to validate the assumed improvement.

For further information on developing Alexa Skills and other serverless applications follow me on Twitter @hoegertn or look at my other GitHub repositories for more advanced implementations.