Serverless is great: it lets companies focus on product and application development without having to worry about provisioning the application’s infrastructure or scaling it under varying load.
However, every AWS service has soft and hard limits you should keep in mind when developing a serverless application. These limits protect both the customer and the provider from unintentional use, and they also act as guardrails that nudge you toward best practices.
In this article, we will take a closer look at AWS Lambda limits and explain how to avoid them.
Deployment Package Limits
AWS Lambda has a hard limit of 50 MB for the compressed (zipped) deployment package and a hard limit of 250 MB for the uncompressed package.
How do we avoid these limits?
Use an IaC (Infrastructure as Code) framework that supports advanced packaging options such as include/exclude patterns.
Use Webpack, a well-known tool that bundles assets (code and files). Webpack helps reduce and optimize the package size by:
- Including only the code your function actually uses
- Optimizing your npm dependencies
- Bundling your source code into a single file
Set a max deployment package size for your team
Meet with your development team and set a maximum package size for every AWS Lambda function deployed into your account. Then add this check to each deployment: if the deployment package size is above the threshold (e.g. greater than 10 MB), fail the deployment.
Stop reaching for third-party dependencies
Use fewer third-party packages. Too often we reach for a simple dependency when there is a native library we could use instead. In a non-serverless world this is fine; in a serverless world it is going to be costly.
If we can’t get away from using large dependencies or binaries like FFmpeg, or ML/AI libraries like scikit-learn or nltk, with AWS Lambda, then we have to be more creative.
AWS Lambda Layers
Sometimes people recommend AWS Lambda Layers as a solution to this problem. Unfortunately, even if we put these large binaries into an AWS Lambda Layer and attach it to our function, we still can’t escape the 250 MB hard limit, because layers count toward it.
Using an AWS Lambda Layer does have other benefits, though, such as reducing deployment time, since you stop deploying your large binary with each code change, and isolating reusable dependencies that can be shared across services.
Hacky Workarounds
The other option, if we really need to pack more dependencies into our AWS Lambda function, is to put the large binary or dependency into an AWS S3 bucket, then add a few lines of code to download that S3 file when the function is invoked.
This will lead to a long cold start, but subsequent requests on the warm container will be quick. It’s not recommended as a production architecture because it’s brittle, but if you need to get around the deployment size limit you can use this strategy during development.
Total Size Of All Deployment Packages
In your AWS account, there is a region-wide soft limit of 75 GB for the code storage of all deployed AWS Lambda functions. That sounds like a lot, but it really isn’t, and most people learn this the hard way when their deployments suddenly stop working.
If you hit this soft limit, you can have the value raised by opening a support ticket requesting a service quota increase. However, that may just be masking the problem, much like the “Hacky Workarounds” above.
Just because we can do something doesn’t mean we should. Instead, let’s talk about why this limit catches teams off guard.
A lot of IaC (Infrastructure as Code) frameworks version your AWS Lambda functions on every deployment, and these same frameworks typically have no built-in version cleanup.
This means that if we make 50 deployments over the span of a couple of weeks without cleaning up our versions, we will have 50 versions of our AWS Lambda code, each with all of the dependencies that were packaged with it.
How do we avoid this limit?
We can avoid this limit by being aware that it exists and by taking steps to make sure we don’t keep every old version of our AWS Lambda function.
Versioning is an important concept, and we are not recommending that you skip it entirely, but rather that you keep the number of retained versions of each AWS Lambda function below three.
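The selection logic for such a cleanup can be sketched as a pure function. In real code you would feed it the versions returned by Lambda’s `ListVersionsByFunction` API and pass the result to `DeleteFunction`; those AWS calls are left out here.

```javascript
// A sketch of version-cleanup logic: given the version identifiers Lambda
// reports for a function, pick everything older than the newest `keep`
// numbered versions. The AWS API calls around it are intentionally omitted.
function versionsToDelete(versions, keep = 3) {
  return versions
    .filter((v) => v !== '$LATEST')  // never delete the unpublished $LATEST
    .map(Number)
    .sort((a, b) => b - a)           // newest first
    .slice(keep)                     // everything past the newest `keep`
    .map(String);
}

module.exports = { versionsToDelete };
```

Running this after every deployment keeps each function’s stored code bounded, which is what keeps you under the 75 GB regional total.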
Storing Data Inside the AWS Lambda Instance
In the previous section, “Hacky Workarounds”, we talked about pulling in a file from AWS S3 that is too large to include in our AWS Lambda deployment package.
In this section we are talking about the AWS Lambda limit on storing data on the underlying instance itself. When people first begin working with AWS Lambda, they often get stuck when their code won’t write to the filesystem.
This is a common point of confusion because the very same code that worked previously, either locally on a laptop or on a virtual machine, now throws strange errors on AWS Lambda.
To resolve this issue, developers need to write any files to the /tmp directory, the dedicated writable storage on AWS Lambda, which provides 512 MB of ephemeral space by default.
An important note: as you may have surmised, /tmp is only temporary storage. Once the underlying instance supporting your AWS Lambda function is recycled, your /tmp data goes with it.
This is commonly referred to as stateless computing as your state or data is kept off the server itself. Typically, we will keep this data in other fully managed AWS resources such as Amazon DynamoDB, AWS S3, and so on.
Then during the next invocation, our AWS Lambda function can pull in our needed state or data from these external resources and we should be all set.
How do we avoid this limit?
Most of the time, developers familiar with AWS Lambda will not hit this limit, because relying on local storage is generally considered an anti-pattern.
As we talked about earlier, just because you can doesn’t mean you should.
If you do need to manipulate files, you can use Node.js streams to read, process, and write data without loading the whole file into the Lambda filesystem.
This approach works especially well when transforming a file and storing the result in AWS S3.
Lambda Payload Limit
There is a hard limit of 6 MB on the AWS Lambda payload size for synchronous invocations. This means we cannot send more than 6 MB of data to a Lambda function in a single request.
Developers typically run into this limit when their application uses AWS Lambda as a middleman between the client and AWS S3 asset storage.
For example, if our application needs to upload an image or video, we will easily exceed this limit when we send the file to our AWS Lambda function for upload.
How do we avoid this limit?
We can get around this limit by using an AWS S3 pre-signed URL for uploads. Our AWS Lambda function instead generates a pre-signed URL and returns it to the client; the client then uses that temporary URL to upload directly to our AWS S3 bucket.
Pre-signed URLs are a great way past this limit and make our applications more efficient overall.
In this article we talked about a few of the common AWS Lambda limits which we are likely to encounter when building serverless applications.
Remember to regularly review and trim the packages your AWS Lambda functions require, as this will help reduce cold starts, decrease deployment time, and lower your AWS Lambda cost.
Don’t fight against how AWS Lambda was built; although it’s fun to build some of these “Hacky Workarounds”, they are very brittle.
Finally, the limits we discussed may not be an issue for you at first, but as time goes on they can become problematic, slowly creeping up on you until you’re swimming in technical debt.