Migration to Serverless: A Use Case
Given the young age of the serverless domain, many organizations introducing serverless technologies will need to start by either creating a new business process based on serverless or by migrating an existing product to an equivalent serverless application. This process is a key element in reducing the amount that your organization spends on technical resources, but it is not without its pitfalls.
In this case study, we’ll explore a hypothetical online storefront that is being migrated to an AWS Lambda-based serverless application from a legacy microservice-oriented architecture. We’ll dive into the general migration process and highlight cases where potential pitfalls arise. Once we’ve successfully migrated, we’ll then take a closer look at a couple of common problems in the serverless space and how to address them.
Setting the Stage
This case study will focus on AllAboutBrass, a hypothetical web marketplace that sells musical instruments. Their market covers all levels of music, from elementary school students to the seasoned professional. The website sees about 10,000 unique visitors per day and processes $8 million in payments every week.
The designers of the application wanted to design a robust and scalable marketplace, able to support the needs of the business as it grows. The application is built on a microservice architecture, with services focusing on each of the primary business responsibilities: one service handling payment processing, another handling inventory and product tracking, and a third handling customer relationships.
The application is built on top of a Postgres database, which stores the majority of the company’s business history and data. The backend is written in NodeJS, with React on the frontend. The application makes use of several third-party services: Stripe for payments, LogEntries for log aggregation, and NewRelic for server monitoring.
While the application is well built and scalable, the storefront simply hasn’t seen the traffic that was hoped for, and the current load of 10,000 users a day relies on expensive always-available technology that ends up sitting idle for extensive periods of time between requests. As a result, the company has elected to migrate their application to a serverless architecture with the hope of saving on operating costs.
The Benefits of a Serverless Architecture
Serverless architectures have a number of benefits when compared with equivalent applications built on traditional cloud technologies. While a serverless architecture is by no means a panacea, it presents an alternative model to the traditional approach of constantly available servers driving web application behavior. Serverless architectures rely on resources that are made available at the time they are needed, creating and destroying containers to handle requests as they are made. The underlying technology allows for instant availability of functionality as well as the ability to scale in an automatic fashion by simply generating additional handlers as needed.
Serverless applications built on top of AWS Lambda have access to a number of different integrations with AWS products that are likely already a part of your application’s deployment pipeline. Tools such as Amazon S3, DynamoDB, CloudWatch, and other standard AWS products integrate deeply with AWS Lambda, providing everything your serverless application needs to operate. AWS Lambda functions can also be written in many common web development languages, shortening development times while giving your developers a familiar canvas on which to work.
Comparing Microservices with a Serverless Architecture
The first step in any migration is understanding the architectural differences between the two solutions. Knowing where a given architecture is strong is crucial when designing your application’s interactions in a serverless context. The closer your source architecture is to the target architecture, the fewer problems you will likely encounter in the transition. Luckily, microservice architectures share a lot of common elements with serverless applications. In fact, we’ll actually begin exploring the differences between them by taking a look at a serverless application through the lens of a microservice-based architecture.
The intent of a microservice architecture is to reduce coupling between multiple related control flow paths while improving cohesion of the individual calls themselves. To put it another way, a microservice architecture defines an area of functionality and gives it its own web application. While not required, this is often built around a representation of the underlying resources, such as with RESTful APIs. In these applications, functionality is classified by the tasks it performs: a payment service focuses on processing payments, while an inventory service interfaces with both the physical goods warehouse and online dropshippers to fulfill customer orders.
In a serverless application, we break this distinction down further into the constituent actions that can be taken. Where in a microservice architecture you’ll have a payment server that has different endpoints for “charge,” “refund,” and so forth, a serverless equivalent will have different functions for these same endpoints. In a serverless application, your base computational unit is equivalent to a function call, and each HTTP interaction in your application will likely be represented by its own serverless function.
Whereas a traditional microservice application will invoke these calls almost exclusively through HTTP requests, a serverless architecture gives you more options for controlling your application’s behavior. Events from many common AWS products (such as S3 and API Gateway) can trigger your functions directly. In some cases, particularly those focused on media manipulation, this brings your functionality closer to the objects being manipulated. This proximity can, in many cases, simplify the attendant code, allowing you to optimize in ways that weren’t possible through external APIs.
While microservice-based applications work well in a serverless model, there are some key issues to be aware of that might arise once you’ve completed the migration. Below are some items that you will need to keep track of, items that might not be present in an equivalent microservice architecture.
Avoid Long-Running Functions
The machines running your AWS Lambda functions, which provide the meat of your functionality, are designed, and intended, to be ephemeral. AWS imposes a maximum run time of 900 seconds (15 minutes) on every Lambda invocation. This means that anything that takes longer than 15 minutes to complete as part of its normal operation will need to be handed off to another service, such as AWS Step Functions for long-running workflows. So, if possible, avoid long-running functions.
Beware of Data Transfer Costs
With AWS Lambda, you’re charged for the computational resources your functions consume, and standard data transfer rates apply to data your application moves between AWS regions and availability zones. It’s important to make sure that the functions driving your application are co-located with the data they depend upon as much as possible. Otherwise, the cost of your serverless application is likely to grow more rapidly than expected.
State Does Not Always Persist
Unlike a traditional web application, which has a machine that persists between each request, the containers that run your serverless functions are destroyed and recreated frequently, often between every call. As such, your application should not rely on machine state in any way that requires the container to persist between calls, as this persistence is not guaranteed. Each request should be atomic and idempotent.
Observability and Debuggability Reduced
One of the side effects of spreading the functionality of your application across a set of disparate functions is that tracking the interactions between these functions becomes more complex than it would be in a traditional web architecture. When something in the sequence fails, nailing down the root cause can be difficult because the container that exhibited the problem no longer exists by the time you investigate, making troubleshooting and observability more challenging.
Preparing for the Migration
The first step of a successful migration is taking full stock of what needs to be migrated. Below, we’ll look at a few different areas that you’ll want to concentrate on as you prepare for your migration, along with some suggestions for how these might be represented in a serverless context.
Mapping Data Usage
One of the first steps in migrating your microservice application will be to understand how the data in your application flows. You’ll want to know the code manipulating each object in your database so that you can better design the resultant serverless API that will drive your application’s data usage. For a well-built microservice application, the services themselves are an excellent starting point for this mapping. By analyzing your application’s database usage, you can quickly identify inefficiencies in how the data is passed around as well as expose any legacy issues with how the data is constructed and maintained.
Mapping Service Interactions
Once you’ve developed a clear picture of the flow of data in your application, you’ll want to map all of the interactions between each of the microservices in your application. What are the interdependencies in your services, and how will these change once each individual endpoint in your application is its own distinct object? This will be a crucial part of designing the artifacts that play a role in the day-to-day operation of your application, and these interactions will often need to be modeled as serverless functions of their own.
Determining Entry Points
Once you’ve outlined the flow of data and service interactions, take a close look at your application’s entry points. These are its most crucial functions, such as the API calls your frontend uses to populate the views your users interact with. These endpoints will need increased security and validation: by their very nature, they are the most visible elements of your application, discoverable by anyone who opens the developer tools pane in their browser. You’ll also want to use these entry points to organize your observability solution, likely having each one define a transaction ID that represents the flow of a single request through your serverless application.
Planning Data Migrations
Once you’ve broken down your application into its constituent behaviors and side effects, you’ll need to evaluate your data storage needs as well. Building on top of AWS Lambda will likely give you benefits right out of the gate, as your serverless functions will be able to interact with your data in exactly the same way as the current microservice architecture. However, it’s important to remember that Lambda invocations incur charges based on data transfer as well as compute usage. As such, you’ll want to be sure that your functions are located in the same region as your application’s data resources.
Performing the Migration
Once you’ve mapped all the actions and side effects of your application, migrating your application becomes a fairly straightforward process. The key to success in migrating to a serverless platform is an iterative approach. Because serverless functions are cloud-based, you can easily call them from within the very application that you are migrating to a serverless platform. From this perspective, the migration then becomes a pattern of continually gating functionality behind new serverless functions until eventually you are able to replace all server calls with calls to your application’s entry points. After this, your application will live entirely in AWS Lambda.
To start, you’ll want to break each microservice down into a set of high- and low-level functions. The granularity of these functions will depend heavily on how quickly you want to move, and will be complicated by any communication and coupling with other services that needs to be accounted for.
Once you’ve settled on a breakdown that makes sense for the complexity of your application, start with the functionality closest to your data and move outward. This naturally produces an approach in which the complexity of the migrated application grows gradually as you successfully move resources into AWS Lambda. At the same time, prioritize the most critical aspects of your business functionality: migrating them first ensures they become the most thoroughly tested parts of the new architecture.
After migrating your application into a functional serverless architecture, you’ll next want to start optimizing communications. Depending on your application, you can rely on the triggers provided by AWS to invoke your application’s functionality, moving away from an HTTP-driven model. You’ll need to handle resource serialization as well: what was once a series of code-level interactions is now a set of messages that must be marshaled and transformed appropriately at the boundaries of your serverless functions.
After solving the mechanics of migrating your microservice code to AWS Lambda, you’ll next need to tackle performance optimizations. These will closely resemble the types of optimizations you’d add to a microservice application, as the concepts are inherently similar. You’ll want to make use of third-party caching, for example, to reduce the amount of unnecessary work performed by your application. However, applying these concepts to a serverless application introduces additional complication, as, by their very nature, serverless functions can’t rely on machine caching to provide performance improvements. Tackling this last element will likely represent the bulk of the effort of your development team once the initial migration has been completed.
Improving Monitoring and Resiliency
Once the migration is completed, you should immediately begin to see the value of your migration–particularly once your next AWS bill comes in and reflects the true usage that your application exhibits during its regular operation. By moving from always-available architecture to an on-demand model, you’ve likely reduced costs and added scalability. But you’ve also introduced a few issues that need to be tackled.
Given that your application is now spread across several functional units, errors reported will be ephemeral and potentially more complicated to track down. While you can build your own solution to this issue using AWS resources like CloudWatch, this effort is greatly eased by third-party products such as Thundra that work to take all the headaches out of serverless observability.
Thundra allows you to see the activity of your serverless architecture in a flexible way, with custom views of invocations, functions, traces, and any operation in serverless architecture. Thundra provides quick insights into the operational statistics of your application and generates actionable alerts that help you understand the impact of an issue when your serverless app faces a timeout or out-of-memory error. It also builds and tracks an invocation record while identifying potential issues with your functions that can impact your application’s stability.
Conclusion
Microservice-based applications with low usage rates can often benefit from migrating to a serverless-equivalent architecture. This will often save on costs by switching from billing based on application availability to billing based on resource usage.
The first step in performing a migration to a serverless architecture is to understand how the data flows and is manipulated by your application’s services, then mapping those to individual serverless functions that take responsibility for completing the actions that drive your business. Once that mapping has been completed, the mechanical work of translating your code from individual services into suites of serverless functions should take place in an iterative approach, allowing you to continually test and expand your code base as you migrate.
After you’ve finished the migration, you’ll want to start addressing the classic problems of a serverless architecture. Migrating microservices still carries the same old problems of observability. Luckily, Thundra provides SaaS and on-premises solutions to tackle this problem.