The Serverless Path to DevOps

Feb 27, 2020

As the world of software expands, new development practices constantly emerge to ensure the speed and reliability of development. Increasing competition, rising customer expectations, and a growing reliance on software have set the goal of developing at thrilling speed while maintaining reassuring reliability against breakages. This is where the motive of DevOps arises: it is a cultural practice aimed at providing exactly that, increased velocity with maintained reliability and uptime. To go fast without crashing.

However, adopting DevOps is easier said than done. The idea sounds spectacular in the literature and coming from the mouths of industry luminaries who adorn tech conferences. In reality, though, implementing DevOps can entail learning curves to overcome, changes to trusted practices, and infrastructure to implement for which companies and teams simply do not have the engineering resources at a given time.

Nevertheless, the surrounding technical ecosystem is improving, and that means there are new solutions to tackle the DevOps problem. Serverless is one such technical and conceptual leap, one that has opened up possibilities ranging from improved IoT devices to cost-effective machine learning applications.

Similarly, serverless provides much-needed respite in DevOps adoption. That is the purpose of this piece: to highlight some of the ways serverless technologies can be used to achieve ideal DevOps practices by provisioning the facilitating architecture.

The True Meaning of Serverless

To understand how serverless can aid in the adoption of DevOps, it is crucial to break the misconceptions around the technology and understand the pillars that make something truly serverless. The root of these misconceptions lies in the success of AWS Lambda, whose popularity has made it synonymous with the concept of serverless.

Don’t get me wrong, I am not belittling AWS Lambda. In fact, the strides made by the serverless community have been greatly supported by the growing popularity of AWS Lambda. However, some new entrants to the domain have equated serverless with FaaS alone, and this could not be further from the truth.

Contrary to the misconceived belief that serverless is a form of compute service, any service qualifies as serverless if it demonstrates the characteristics listed below:

  • Fully managed
  • Scalable
  • Pay-as-you-go

Considering these properties alone, and taking the AWS ecosystem as an example, we see that there are actually many AWS services that qualify for the serverless tag. In the domain of compute services, AWS Lambda is no longer the only service that makes the cut; AWS Fargate does too. Similarly, integration services such as AWS Step Functions and Amazon API Gateway are included, along with database services such as Amazon Aurora and Amazon DynamoDB.

By tackling these misconceptions, we have inadvertently expanded the classification of serverless services. This better-perceived set of services shows that many of the already popular tools used in DevOps stacks across the industry are actually serverless.

The question then becomes, why is serverless so beneficial for DevOps?

The Serverless Appeal

What makes serverless tools so attractive, to the point where they are adopted without users even considering them serverless tools, is the set of inherent benefits that come with the characteristics listed above.

DevOps is adopted to accelerate development while keeping the product stable and free of downtime. The hoped-for result is a greater competitive edge through better customer experience and faster maturity of the product in terms of features and capabilities, which in turn endows the software product with sustainability in the industry.

These are the motives of DevOps, and they are no doubt shared by the entire industry. No team or company stays away from DevOps because it does not want to be more competitive and ensure sustainability. The reason there may be reluctance to adopt DevOps, despite the known benefits, is that it may be too expensive for the adopters. This expense may be incurred in the form of time, technical resources, or simply the effort spent overcoming learning curves to acquire the skills needed to move toward DevOps.

Considering these barriers, the reasons for serverless tools in the DevOps stack become apparent. Any DevOps solution needs to be implemented at low cost for greater returns, tipping the cost-benefit scale in favor of DevOps adoption.

With serverless technologies, the greatest advantage is the pay-as-you-go model: you pay only for the resources you actually use. In the case of AWS Lambda, you pay only based on the number and duration of invocations, hence potentially lower costs.

There are cases where FaaS functions can become more expensive than containers, depending on the traffic experienced. The higher and more consistent the traffic, the higher the costs of serverless tools, and these costs can rise above container costs. In the case of FaaS for DevOps, however, the traffic coincides with the speed of development, which would intuitively be lower than the traffic required for FaaS functions to become more expensive than containers.

From the State of DevOps 2019 report backed by Google, higher-tier teams and companies can achieve thousands of deployments per day. Per the price comparison charts at the time of writing, matching the cost of an EC2 m4 instance would take around 160K invocations of roughly half a minute each at 128MB of memory; at 3008MB, around 300 invocations reach the same cost.
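
As a rough sketch, this kind of comparison follows from Lambda's billing formula: GB-seconds consumed plus a small per-request fee. The prices below are illustrative list prices from around the time of writing and should be checked against current pricing before drawing conclusions.

```python
# Approximate AWS Lambda list prices (us-east-1, circa 2020); illustrative only.
PRICE_PER_GB_SECOND = 0.0000166667
PRICE_PER_REQUEST = 0.0000002


def lambda_monthly_cost(invocations, duration_s, memory_mb):
    """Estimate the monthly Lambda bill for a given workload."""
    gb_seconds = invocations * duration_s * (memory_mb / 1024)
    return gb_seconds * PRICE_PER_GB_SECOND + invocations * PRICE_PER_REQUEST


# e.g. 160,000 invocations of ~30s at 128MB vs. 300 invocations at 3008MB
cost_small = lambda_monthly_cost(160_000, 30, 128)
cost_large = lambda_monthly_cost(300, 30, 3008)
```

Plugging in your own deployment frequency and function duration makes it straightforward to compare against a flat monthly instance price.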

Considering the number of invocations that would be required if AWS Lambda functions are used in continuous delivery infrastructure, we can see that the costs would potentially be lower than those of containers.

A further benefit of serverless technologies is that they are auto-scalable and fully managed. We no longer need to spend countless hours maintaining our DevOps architecture and can instead focus on the business logic the DevOps infrastructure was actually built for.

Use Cases

With the benefits of serverless for DevOps understood, let us look at some of the ways serverless technologies can be used for DevOps. The list of three use cases below is by no means exhaustive; the ways serverless technologies can be used are practically limitless. These are simply some of the popular ways to incorporate serverless tools into your DevOps stack.

Availability and Performance Monitoring

One popular use case is to orchestrate FaaS functions to simulate user traffic to your services in production. FaaS services such as AWS Lambda or Azure Functions are a good fit because they are comparatively easy to spin up, auto-scalable, and cheap for this particular use case.

The process entails setting up AWS Lambda functions or Azure Functions to make API calls to your services, similar to the API calls users would make via the front-end interface. These periodic checks account for the continuous availability of your services. Any failures incurred in your production environment can then be captured by monitoring tools such as Thundra or Amazon CloudWatch, informing you of failures or performance degradations.

The periodic requests can be set up using the cron capabilities of FaaS vendors. In AWS Lambda, for example, the function can send periodic requests to your APIs via scheduled trigger events from Amazon CloudWatch Events.
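
As a minimal sketch, a scheduled Lambda that probes a couple of endpoints could look like the following; the URLs are hypothetical placeholders, and the function simply logs its results to CloudWatch Logs via stdout.

```python
import json
import urllib.request

# Endpoints to probe periodically; these URLs are placeholders.
ENDPOINTS = [
    "https://api.example.com/health",
    "https://api.example.com/v1/orders",
]


def check_endpoint(url, timeout=5):
    """Probe one endpoint and return a result dict."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return {"url": url, "status": resp.status,
                    "healthy": 200 <= resp.status < 300}
    except Exception as exc:
        return {"url": url, "status": None, "healthy": False, "error": str(exc)}


def handler(event, context):
    """Entry point invoked by the CloudWatch Events schedule."""
    results = [check_endpoint(u) for u in ENDPOINTS]
    failures = [r for r in results if not r["healthy"]]
    # Printing to stdout lands in CloudWatch Logs, where metric filters
    # or monitoring tools can pick up the failures.
    print(json.dumps({"failures": len(failures), "results": results}))
    return {"failures": len(failures), "results": results}
```

Wiring this to a CloudWatch Events schedule (say, `rate(1 minute)`) gives a continuous, pay-per-probe availability check.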

Furthermore, the alerts from all your monitoring tools can be consolidated by incident management SaaS tools such as Opsgenie. Such tools automatically alert on-call responders according to set schedules, and Opsgenie specifically has a functionality called OEC actions, which can be configured to automatically remediate a failed service. More about the feature can be found in Opsgenie’s documentation.

Moreover, if the third-party monitoring and incident management tools you use are integration partners of Amazon EventBridge, they can send events into your AWS account. Amazon EventBridge is essentially a serverless event bus, which in this scenario routes incoming events from your third-party monitoring tools to your AWS services for logging and resolution purposes.
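
You can also publish your own alerts onto an event bus. As an illustration, the sketch below builds an entry in the shape expected by EventBridge's `PutEvents` API; the source and detail-type names are hypothetical.

```python
import json


def alert_entry(service, severity, message, bus_name="default"):
    """Build an EventBridge PutEvents entry for a monitoring alert.

    The "custom.monitoring" source and "ServiceHealthAlert" detail type
    are illustrative names, not fixed AWS values.
    """
    return {
        "Source": "custom.monitoring",
        "DetailType": "ServiceHealthAlert",
        "Detail": json.dumps({
            "service": service,
            "severity": severity,
            "message": message,
        }),
        "EventBusName": bus_name,
    }


# Publishing would then be roughly:
#   boto3.client("events").put_events(Entries=[alert_entry("orders-api",
#                                                          "critical",
#                                                          "5xx spike")])
```

Rules on the bus can then match on `Source` and `DetailType` to route each alert to the right target, such as a remediation Lambda or a log group.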

Overall, FaaS functions can be used for automatic availability and performance checks, while serverless event buses communicate the resulting alerts throughout your DevOps infrastructure. All of this is achieved comparatively easily, considering that most of the management is delegated to the vendor with serverless technologies. Additionally, costs are comparatively low thanks to the pay-as-you-go model, and the auto-scalability of FaaS functions lets you tweak the loads you would like to test against your services.

SlackBots Galore

SlackBots are to DevOps adopters what shepherd dogs are to sheepherders. SlackBots, or chatbots more generally, are improving the DevOps process in terms of CI/CD, which has introduced the concept of ChatOps.

Conceived at GitHub, the term ChatOps refers to conversation-driven development, where typed commands in a chat tool kick off CI/CD processes through custom scripts and plugins. The operation of these scripts requires backend support, and this is where serverless tools come into play.

With FaaS functions, DevOps engineers simply write the script that performs the intended operations and upload it to the function, while ensuring the chat tool can invoke it. That means no arduous container orchestration and networking setup, which would amplify the hardships of development rather than ease the process as DevOps intends. Furthermore, costs are not incurred on an hourly basis but only when the FaaS function is invoked via the chatbot.
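
A minimal sketch of such a bot, assuming a Lambda behind API Gateway that receives a Slack slash command (Slack posts slash commands as form-encoded bodies), could look like this; the command names and replies are made up for illustration.

```python
import json
from urllib.parse import parse_qs

# Hypothetical command table; in a real bot each entry would trigger a
# pipeline run (e.g. CodePipeline or Step Functions) instead of a canned reply.
COMMANDS = {
    "deploy staging": "Triggering staging deployment...",
    "deploy prod": "Triggering production deployment...",
    "status": "All pipelines green.",
}


def handler(event, context):
    """Lambda behind API Gateway, invoked by a Slack slash command."""
    body = parse_qs(event.get("body", ""))
    text = body.get("text", [""])[0].strip().lower()
    reply = COMMANDS.get(text, f"Unknown command: '{text}'")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"response_type": "in_channel", "text": reply}),
    }
```

A production bot would also verify Slack's request signature before acting on the command, but the shape of the handler stays this simple.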

Serverless Pipelines for Continuous Deployment

Similar to ChatOps, serverless can be used to enhance the CI/CD process, but unlike ChatOps we can even go one step further and automate the entire process from merging pull requests to deploying in production. Introducing GitOps!

The term GitOps was coined by Weaveworks and is a methodology for Kubernetes cluster management and application delivery. The basic idea is to leverage Kubernetes’ convergence properties to perform continuous delivery triggered by a git push.

GitOps is a way to do Kubernetes cluster management and application delivery.  It works by using Git as a single source of truth for declarative infrastructure and applications. With Git at the center of your delivery pipelines, developers can make pull requests to accelerate and simplify application deployments and operations tasks to Kubernetes.

In applying GitOps, having a single source of truth for both infrastructure and application code further increases the velocity of development teams. The workflow that makes this possible is as follows:

  1. CI tools such as Bitbucket Pipelines push Docker images to hosting tools such as Quay.
  2. Cloud functions copy the configs and Helm charts from the master storage bucket to the master git repo.
  3. GitOps operators such as Weaveworks Flux then update the cluster according to the configs and Helm charts pulled by the Lambda function.

It is in copying these Helm charts to the master git repo that FaaS functions can be used. For reasons similar to those mentioned for ChatOps, FaaS functions are easy to set up and cost-effective, meaning DevOps engineers can focus on other parts of the GitOps infrastructure while keeping costs low.
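
Step 2 of the workflow could be sketched as a Lambda that clones the repo, downloads the bucket's contents into the checkout, and pushes a commit. The bucket name and repo URL below are placeholders, and the sketch assumes boto3 plus a git binary (with credentials) are available in the runtime.

```python
import os
import subprocess
import tempfile

# Hypothetical names; replace with your own bucket and repo.
BUCKET = "master-config-bucket"
REPO_URL = "git@github.com:example/cluster-config.git"


def sync_key_to_repo(workdir, key):
    """Map an S3 key to its destination path inside the repo checkout."""
    dest = os.path.join(workdir, key)
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    return dest


def handler(event, context):
    """Copy configs and Helm charts from the bucket into the git repo."""
    import boto3  # assumed available in the Lambda runtime
    s3 = boto3.client("s3")
    workdir = tempfile.mkdtemp()
    subprocess.run(["git", "clone", "--depth", "1", REPO_URL, workdir],
                   check=True)
    # Mirror every object in the bucket into the checkout.
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET):
        for obj in page.get("Contents", []):
            s3.download_file(BUCKET, obj["Key"],
                             sync_key_to_repo(workdir, obj["Key"]))
    for args in (["add", "-A"],
                 ["commit", "-m", "Sync charts from bucket"],
                 ["push"]):
        subprocess.run(["git", "-C", workdir] + args, check=True)
```

Once the commit lands, the GitOps operator watching the repo takes over and converges the cluster, so the function never touches Kubernetes directly.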

Concluding Our Journey

The road to DevOps is hard! But through our journey, we see that serverless tools are our trusty traveling companions. 

Serverless offers much respite from the pains of adopting DevOps through its inherent characteristics: pay-as-you-go pricing, auto-scalability, and fully managed services. These properties make implementing DevOps infrastructures much easier and more cost-effective, delivering the optimum benefits of DevOps with the least setup cost.

Illustrated above are just three of the many ways serverless technologies can be used in your DevOps stack. All the way from development and testing to CI/CD and incident management, you are sure to find a serverless service operating at optimal cost yet high effectiveness, ensuring you never have to worry about your development velocity or code reliability again. If happiness is the journey of life, as various great personalities have said in various ways, then let serverless be the journey to DevOps, making serverless synonymous with happiness and life synonymous with DevOps.