2 minute read

POSTED Dec 2018 IN Serverless

Developing and Debugging a Serverless Slackbot

Written by Doğuşcan Namal


Ex-Software Engineer @Thundra

I love serverless and the community around it, so I participated in the #noservernovember challenge organized by Serverless Inc. The challenge was to build a Slack bot that suggests random movies, and in this post I'd like to share my journey building this app.

My first move was to fetch the serverless-slackbot repository. Then I created a new Slack app by following the guidelines in the repo. I named my Slack application philomovie: “philo”, as in “philosophy”, which together with “movie” suggests cinephilia, being passionate about movies.

Once the configuration was complete, I deployed the application. It was as easy as running the serverless deploy command to get the AWS Lambda environment ready, and I really appreciate the Serverless Framework for that.
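For reference, the whole deployment boils down to a couple of commands; the stage name below is just a placeholder, not necessarily what the repo uses:

```bash
# Install dependencies and deploy the stack to AWS with the Serverless Framework
npm install
serverless deploy --stage dev
```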

When I ran that command, my environment was set up and ready to go. This was the view I got in the AWS Lambda console.

aws-lambda-environment

When I executed a slackbot command, it seemed to run pretty well. Now I needed to add movie-suggestion functionality to this slackbot.

When I checked the code for improvements, it was hard to figure out what exactly was happening because there were lots of lines of code to dig through. For example, I could not easily follow the invocation flow from the moment I executed the Slack bot command to when my Lambda was triggered, and onwards. Additionally, my functions made DynamoDB and SNS requests, but I could not tell exactly what was happening during those calls.

To address this lack of visibility into my application, I decided to use Thundra to help me understand what was happening inside my functions, and set it up on my slackbot.
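Setting it up was lightweight. As a rough sketch, assuming Thundra's Serverless Framework plugin and its configuration keys (check the current docs, since these may differ), the wiring looks something like this in serverless.yml:

```yaml
# serverless.yml (sketch): add Thundra's plugin and point it at an API key.
# No handler code needs to change; the plugin instruments the functions at packaging time.
plugins:
  - serverless-plugin-thundra

custom:
  thundra:
    apiKey: ${env:THUNDRA_API_KEY}  # placeholder; use your own key
```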

After getting Thundra set up, I invoked the slackbot-lambda function. The Thundra Web Application showed me the following results.

trace-chart

Without making any code changes, I could now see the execution flow of my function from end to end! The first thing I learned was that my function was being cold-started, as you can see at the top of the image. When I dug deeper, I could see that my function accesses DynamoDB and SNS, just as the AWS Lambda console had shown. With Thundra, however, I could see where and how the function accesses these services, and that the AWS SDK makes HTTP requests under the hood to reach them. Thundra even shows the DynamoDB statements that are being executed.

dynamodb-statements
Another thing I learned by looking at the trace data is that my AWS Lambda function is being executed twice! Apparently, when the slash command first comes in, the function just publishes it to an SNS topic, which in turn triggers a second invocation that actually executes the bot command. The first invocation only sets up the Slack command, so in order to optimize my code, I need to focus on the code behind the second invocation.

serverless-slack-dev-tracechart
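To make that flow concrete, here is a minimal sketch of the dispatch pattern as the traces suggest it works; the function names, topic variable, and response text are hypothetical, not the actual code from the serverless-slackbot repo:

```typescript
import { SNS } from "aws-sdk";

const sns = new SNS();

// First invocation: triggered by the Slack slash command (via API Gateway).
// It only publishes the command to SNS and acknowledges Slack right away.
export const receiveCommand = async (event: { body: string }) => {
  await sns
    .publish({
      TopicArn: process.env.DISPATCH_TOPIC_ARN!, // hypothetical topic
      Message: event.body,
    })
    .promise();

  // Respond within Slack's 3-second limit; the real work happens in the second invocation.
  return { statusCode: 200, body: "Looking for a movie..." };
};

// Second invocation: triggered by the SNS message, executes the actual bot command.
export const handleCommand = async (snsEvent: {
  Records: { Sns: { Message: string } }[];
}) => {
  const command = snsEvent.Records[0].Sns.Message;
  // ... parse the command, find a movie suggestion, post it back to Slack ...
  console.log("Handling Slack command:", command);
};
```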

By simply adding observability using an automated instrumentation approach, I was able to confirm the execution flow of my invocations and understand how my slackbot dispatches events.

All I needed to do now was define a new event and make it call the movie database API.
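As a sketch of that step, here is roughly what the new handler could do; the TMDb endpoint, query parameter, and response shape are assumptions for illustration, not the exact code I ended up with:

```typescript
import * as https from "https";

// Fetch a page of popular movies and pick one at random to suggest.
const suggestMovie = (): Promise<string> =>
  new Promise((resolve, reject) => {
    const url =
      "https://api.themoviedb.org/3/movie/popular" +
      `?api_key=${process.env.TMDB_API_KEY}`; // placeholder API key
    https
      .get(url, (res) => {
        let data = "";
        res.on("data", (chunk) => (data += chunk));
        res.on("end", () => {
          const results = JSON.parse(data).results ?? [];
          const pick = results[Math.floor(Math.random() * results.length)];
          resolve(pick ? pick.title : "No suggestion found");
        });
      })
      .on("error", reject);
  });
```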

And as you can see, it works!

slack-command-gif
We often write about how Thundra can be used to monitor a complex system in production, but you can also use Thundra to speed up development of both simple and complex applications. Using Thundra as you develop an application lets you see, in real time, exactly how your invocations are executing, which services they are interacting with, and which areas you need to optimize further or leave alone. It can be especially useful when you want to add functionality to an already-built open source serverless project.

Cheers!