
POSTED Nov, 2019 · IN Microservices

Rethinking Serverless Architectures With EventBridge

Sarjeel Yusuf


Product Manager @Atlassian


As the march of technology is never-ending, the only constant we can expect is change. This is especially true considering the strides serverless has made in the industry, particularly since the release of AWS Lambda in 2014. Upon its release, AWS Lambda quickly took a front-and-center position among the FaaS services making up the core of serverless applications. Hence it was rightly heralded as one of the most important releases within the domain. This, in turn, led to an array of best practices dictating how applications were built using FaaS services to achieve serverless capabilities.

However, as the course of technology meanders along its path, new innovations constantly redefine the way we build applications. One such innovation announced this year is AWS EventBridge, and its release has caused a stir in the serverless domain. Many blogs and posts within the community that followed the announcement characterized it as the most important release since AWS Lambda itself.

Therefore, it is safe to say that EventBridge will quickly be incorporated into how serverless applications are built. In fact, adoption of EventBridge is further accelerated by its roster of integrated SaaS partners, and by the way it improves communication within a microservices architecture. However, this also means that we must reconsider the conventional way of building serverless applications, and keep the limitations of EventBridge in view.

Where We Are and How We Got Here

Before we can think about tearing down conventions, we must first become familiar with these conventions. If we are to leave for better pastures, let us first acknowledge what pastures we currently stand on. 

With the rise of the tech industry, software was usually shipped as a single large package residing on on-prem machines. This way of building software, from the era before cloud computing, is famously termed the monolithic pattern. The architecture could involve an initial separation of modules and codebases, but in the end these modules were tightly coupled, which meant deploying either the entire application or nothing at all.

This meant that new developments would inevitably involve the entire application, and thus any problem resulting from changes in a single module would force a rollback of the whole application. As a result, deploying new functionality required arduous and comprehensive planning to ensure overall success.

Despite the monolith's benefits, contenders to its design soon sprang up. The move away from the monolith was manifest in the rise of microservice architectures, facilitated by the advent of cloud services.

The road from monolith to microservices was not a direct one; it went through Service-Oriented Architecture (SOA). One of the most popular practices that came with SOA was the 3-tier pattern, which involved a clear separation between the front end, business logic, and data layer.

Microservices, built on top of the 3-tier pattern, still preserved most of the practices of SOA. The new architectural patterns began to treat the entire application as separate entities interacting with each other while preserving little or no coupling. We arrived here because the previous options required significant engineering effort to manage servers, scaling, and networking.

Considering the properties of serverless, we can already see why the microservices architecture became the default method for building serverless applications. Thinking of the application as a set of separate, decoupled components means that each service can be built on a separate FaaS function or group of FaaS functions. Benefitting from the scalability of serverless FaaS functions, each component can auto-scale to meet the exact requirements of that part of the application, rather than having the entire application scale.

Moreover, every single component can be maintained and deployed separately with little or no effect on the rest of the application, allowing for better agility and productivity.

Following the microservices architecture for serverless meant that large, complex applications could be broken down into smaller and simpler functions that separate teams could develop and own without having to worry much about the overall effect on the system. This holds, of course, only if everyone works in tandem on the communication between these individual functions.


Greener Pastures Not so Green

After a long trek of innovation, the world finally concurred that microservices architecture was the way to go when building serverless applications. However, this did come with its own set of problems.

For example, if we think of a 3-tier application built for animal conservation in the South Luangwa National Park of Zambia, we could visualize something as below:


If we were to convert the above application into a serverless application, we could build the various pieces of functionality that make up the application as separate FaaS functions. This can be visualized below:


One of the most evident changes is the need for communication between the various components, and the fact that these components are usually triggered by receiving events. A component receiving an event may in turn produce an event, which is then consumed by another component. Hence we have Event-Driven Architecture.
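To make this flow concrete, here is a minimal in-memory sketch in plain Python of one component producing an event that another component consumes. The bus, component names, and event types are entirely illustrative, not part of any AWS service:

```python
class EventBus:
    """A toy event bus: handlers subscribe to event types, publish fans out."""

    def __init__(self):
        self.subscribers = {}

    def subscribe(self, event_type, handler):
        self.subscribers.setdefault(event_type, []).append(handler)

    def publish(self, event_type, payload):
        # Deliver the event to every subscribed handler and collect results.
        return [h(payload) for h in self.subscribers.get(event_type, [])]


bus = EventBus()

def sighting_component(payload):
    # Consumes "AnimalSighted" and, in turn, produces a follow-up event,
    # illustrating the event-in, event-out chain described above.
    return bus.publish("SightingLogged", {"logged": payload["species"]})

bus.subscribe("AnimalSighted", sighting_component)
bus.subscribe("SightingLogged", lambda p: f"stored {p['logged']}")

# Publishing one event triggers the chain of components.
result = bus.publish("AnimalSighted", {"species": "leopard"})
```

In a real serverless application, the bus would be a managed service and each handler a FaaS function, but the producer/consumer chain works the same way.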

Irrespective of the countless benefits that have been achieved, there are also drawbacks to where we now stand with serverless architecture. One of the major issues is the communication between these components: now we have to think about webhooks and APIs. This increases engineering effort and can lead to various issues, such as the following:

  • Additional infrastructure in the form of API gateways.
  • API endpoints must themselves be scalable, given that scalability was a key motivation for going serverless in the first place.
  • Uptime and other management-related responsibilities, as additional communication channels increase the risk of failure when sending and receiving requests.
  • Marshalling/demarshalling overheads.
  • Delivery grows complicated as the number of services to deliver to scales.
  • We may be restricted by the available endpoints; new kinds of events require new endpoints, with new JSON object structures and request/response protocols.
  • The need for distributed monitoring, and hence the overhead of configuring monitoring tools accordingly.

Where Are We Going

Considering where the industry is at the moment with microservices architectures, and the problems currently incurred, EventBridge acts as a solution that furthers the benefits of serverless technologies.

Considering that EventBridge is an event bus that is fully scalable, fully managed, and able to communicate with third-party SaaS tools, its applications in how we build on serverless already become apparent. In short, it is a serverless event bus.
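Publishing to the bus is a single API call. Below is a hedged sketch of building an entry for EventBridge's PutEvents API with boto3; the source name, event type, and payload fields are hypothetical, chosen to match the conservation example above:

```python
import json

def make_entry(source, detail_type, detail, bus_name="default"):
    """Build a single entry in the shape expected by EventBridge PutEvents:
    Source and DetailType are matching keys for rules, Detail is a JSON string."""
    return {
        "Source": source,
        "DetailType": detail_type,
        "Detail": json.dumps(detail),
        "EventBusName": bus_name,
    }

entry = make_entry(
    source="custom.wildlife-tracker",   # hypothetical custom event source
    detail_type="AnimalSighted",        # hypothetical event type
    detail={"species": "elephant", "park": "South Luangwa"},
)

# With AWS credentials configured, the entry could then be published:
# import boto3
# boto3.client("events").put_events(Entries=[entry])
```

Note that `Detail` must be a JSON-encoded string, not a raw dict; that detail trips up many first attempts at using the API.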

So if we now consider the problems with the microservices architecture, EventBridge provides a fully managed and scalable communication channel between our components. The scalability of the new service addresses the scalability-related issues we may currently experience. This is especially important because one of the major reasons for moving to a serverless platform is the auto-scaling it provides. We may have achieved auto-scaling with our FaaS services, but the scalability of the communication channels remained unclear.

Now, however, we can place EventBridge between our components to achieve scalable communication channels, and also leverage its rule-based routing capabilities to act somewhat like an API gateway in certain regards, for example routing events coming from a third-party SaaS integrated partner to different AWS Lambda functions.
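Rule-based routing works through event patterns: a rule carries a JSON pattern, and events whose fields match the pattern's listed values are forwarded to the rule's targets. The sketch below shows a hypothetical pattern (the source and field names are assumptions, not AWS-defined) alongside a deliberately simplified matcher; real EventBridge matching supports more operators than plain value lists:

```python
# Hypothetical rule pattern: route only AnimalSighted events from our
# custom source, and only for one park, to a specific Lambda target.
pattern = {
    "source": ["custom.wildlife-tracker"],
    "detail-type": ["AnimalSighted"],
    "detail": {"park": ["South Luangwa"]},
}

def matches(pattern, event):
    """Simplified EventBridge-style matching: every field in the pattern
    must be present in the event, with a value from the pattern's list.
    Nested dicts (like "detail") are matched recursively."""
    for key, expected in pattern.items():
        value = event.get(key)
        if isinstance(expected, dict):
            if not isinstance(value, dict) or not matches(expected, value):
                return False
        elif value not in expected:
            return False
    return True
```

Matching events would be routed to the rule's target (say, a Lambda function); non-matching events from other sources simply pass this rule by, which is what lets one bus serve many independent consumers.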


There are alternatives to EventBridge when thinking about communication channels. Nevertheless, these services do not have all the features that AWS EventBridge offers. For example, SNS demonstrates high reliability and scalability but lacks event filtering on context and content, and it does not guarantee ordering. Kinesis streams, on the other hand, do guarantee order, but cannot scale automatically. Furthermore, CloudWatch Events, the service most often compared to EventBridge, lacks the ability to communicate with the third-party SaaS tools used in serverless applications.


EventBridge also allows for event resiliency and promotes asynchronous behavior. This lets us tackle problems related to downstream failures and helps ensure the system behaves as expected. James Beswick talks about how EventBridge can be leveraged to ease the pains of development and operations when building serverless applications. There are truly endless possibilities in how EventBridge can be leveraged.

Therefore, in our long and never-ending march towards technological progress, we have reached a lay-by with EventBridge. We should take this opportunity to reflect upon the changes that AWS EventBridge brings to the way we build serverless applications. There are limitations to using AWS EventBridge, but these limitations are ones that can be worked around or improved over time by AWS.

One Last Word

You can integrate Amazon’s EventBridge service with your Thundra APM account to receive events from Thundra alerts. See this documentation for the steps you will need to follow in order to send Thundra events to EventBridge.

Thundra offers end-to-end observability for the full software development life cycle. From development to production, Thundra’s products help you understand your application behavior and troubleshoot issues before they become problems or affect your end-users. If you still haven’t taken a step into Thundra’s world, you can start your journey here.

If you have any questions, you can directly contact our engineering team through our community Slack, support@thundra.io, or by sending us a message through our contact-us page.