
POSTED Mar 2023 · IN Observability

Service Mesh in Kubernetes: Use Cases and Monitoring

Written by Oguzhan Ozdemir

Solutions Engineer @Thundra

Cloud-native applications are designed as small, distributed microservices. The microservice applications are deployed as containers to ensure scalability, reliability, and portability. Kubernetes has become the de facto container management platform for the easy creation and management of large clusters.

The most important aspect of microservices is their reliance on other microservices within the cluster. Each microservice focuses on executing a single critical task and connects to other services to accomplish it. This distributed, interconnected architecture makes network traffic a vital part of the cluster.

Service Mesh

Service mesh tools manage the network traffic between services. They create a scalable, secure, and reliable connection between distributed services. In Kubernetes, service mesh installations work with Kubernetes-native resources to define and manage application networking. A significant advantage of service mesh tools is that they separate business logic from network connectivity, observability, and security:

  • Network connection: Service mesh enables the discovery of connections between applications in the cluster. In addition, it facilitates deployment strategies such as rolling updates and blue/green or canary deployments.
  • Observability: Most service mesh tools come with a network monitoring integration to discover and visualize traffic, latency, and tracing.
  • Security: Service mesh can enforce network policies to isolate services and limit unwanted access to critical services in the cluster (see the example sketch after this list).
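
As a concrete illustration of the security use case above, the snippet below sketches how a mesh-level policy can restrict which workloads are allowed to call a service. It uses Istio's AuthorizationPolicy resource (Istio is covered later in this post); the namespace, workload, and service account names are placeholders, not details from the article.

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: payments-allow-orders-only    # hypothetical policy name
  namespace: payments                 # hypothetical namespace
spec:
  selector:
    matchLabels:
      app: payments                   # policy applies to the payments workload
  action: ALLOW
  rules:
    - from:
        - source:
            # Only the orders service account may call payments (requires mesh mTLS
            # for peer identity); requests not matching any rule are denied.
            principals: ["cluster.local/ns/shop/sa/orders"]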

Use Cases

Service mesh tools are increasingly popular for connecting microservices and managing the traffic between them. Below, we discuss common use cases for these tools.

API Gateway

API gateway is a design pattern for managing multiple APIs behind a single front door. A single front door enables you to authenticate, authorize, route, rate-limit, and load balance incoming requests. In Kubernetes clusters, you can configure service mesh tools to work as an API gateway for internal and external communication. Kubernetes networking itself is also evolving in this direction with the Gateway API, which standardizes many of these routing concerns.
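
To make this concrete, the sketch below shows one common way of exposing services through a service mesh acting as an API gateway, using Istio's Gateway and VirtualService resources. The hostname, namespace, service name, and port are illustrative assumptions rather than details from the article.

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: public-gateway                # hypothetical gateway name
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway             # binds to Istio's default ingress gateway deployment
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "api.example.com"           # hypothetical external hostname
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: api-routes
  namespace: shop                     # hypothetical namespace
spec:
  hosts:
    - "api.example.com"
  gateways:
    - istio-system/public-gateway
  http:
    - match:
        - uri:
            prefix: /orders           # route requests under /orders
      route:
        - destination:
            host: orders              # hypothetical backend service
            port:
              number: 8080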

Observability

The most challenging part of distributed applications is observability. It is a difficult task, as dozens, sometimes even hundreds, of services connect to each other and run on different cloud infrastructure hosts. Service mesh tools can be used to monitor and measure the traffic and network load to create visibility across your stack.

Deployment Strategies

Cloud-native applications are created, deployed, and updated more frequently than applications built with traditional methodologies. Therefore, deployment strategies such as blue/green or canary have become popular for streamlining deployments and updates. While Kubernetes offers simple deployment strategies out of the box, they are limited in terms of flexibility. With a service mesh, however, you can design more sophisticated strategies, such as shifting a precise percentage of traffic to a new version, directly in Kubernetes.
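
As an illustration of the kind of fine-grained rollout a mesh enables, the sketch below splits traffic 90/10 between two versions of a service using Istio's DestinationRule subsets and a weighted VirtualService route. The service name, namespace, and version labels are assumptions for the example.

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: checkout-versions             # hypothetical name
  namespace: shop                     # hypothetical namespace
spec:
  host: checkout                      # hypothetical service
  subsets:
    - name: v1
      labels:
        version: v1                   # pods labeled version=v1 (stable)
    - name: v2
      labels:
        version: v2                   # pods labeled version=v2 (canary)
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: checkout-canary
  namespace: shop
spec:
  hosts:
    - checkout
  http:
    - route:
        - destination:
            host: checkout
            subset: v1
          weight: 90                  # 90% of traffic stays on the stable version
        - destination:
            host: checkout
            subset: v2
          weight: 10                  # 10% goes to the canary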

The Top 3 Service Mesh Solutions

Service mesh products are the latest trend in the cloud market, with a number of new startups in the field having cropped up recently. Below, we’ll discuss three of the most popular service mesh tools available.

Istio

Istio, the Kubernetes-native service mesh platform, is the most popular service mesh solution. Cloud providers such as Google Cloud, IBM, and Microsoft Azure offer Istio as the default service mesh solution in Kubernetes clusters. Istio covers all the most common service mesh use cases, including request routing, circuit breaking, latency tracking, and error reporting.
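Since circuit breaking is one of the Istio use cases mentioned above, here is a hedged sketch of how it is typically configured through a DestinationRule traffic policy; the service name and the limits are illustrative values only.

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: inventory-circuit-breaker     # hypothetical name
  namespace: shop                     # hypothetical namespace
spec:
  host: inventory                     # hypothetical service
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100           # cap concurrent TCP connections
      http:
        http1MaxPendingRequests: 50   # cap queued HTTP requests
    outlierDetection:
      consecutive5xxErrors: 5         # eject an endpoint after 5 consecutive 5xx responses
      interval: 30s
      baseEjectionTime: 60s
      maxEjectionPercent: 50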

Linkerd

Linkerd is a flexible service mesh platform hosted by the Cloud Native Computing Foundation (CNCF). It is the second most popular service mesh tool on the market and is known for its integration with ingress providers like Nginx and Traefik as well as with monitoring tools such as Prometheus and Grafana.
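
For context on how Linkerd attaches to workloads, the snippet below shows the common pattern of annotating a namespace so that Linkerd automatically injects its sidecar proxy into pods created there; the namespace name is a placeholder.

apiVersion: v1
kind: Namespace
metadata:
  name: shop                          # hypothetical namespace
  annotations:
    linkerd.io/inject: enabled        # Linkerd's proxy injector adds the sidecar to new pods here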

Consul Connect (Consul Service Mesh)

Consul Connect is becoming an increasingly popular service mesh solution, thanks to its hybrid architecture and reduced complexity. It offers numerous benefits, especially if your stack already uses other HashiCorp tools such as Vault, which share similar APIs and setup workflows.
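
Consul attaches its sidecar in a similar, annotation-driven way; the sketch below enables Connect injection on a Deployment's pod template. The workload name, namespace, and image are hypothetical.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders                        # hypothetical workload
  namespace: shop                     # hypothetical namespace
spec:
  replicas: 2
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
      annotations:
        consul.hashicorp.com/connect-inject: "true"   # Consul injects its Connect sidecar proxy
    spec:
      containers:
        - name: orders
          image: example.com/orders:1.0.0             # hypothetical image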

Although all service mesh solutions focus on similar use cases and aim to cater to the latest market trends, there are differences between the tools, as summarized in Table 1 below.

 

                          Istio                              Linkerd                   Consul Connect
Ingress controllers       Istio ingress                      Any                       Envoy
Traffic management        Circuit breaking, rate limiting    None                      Circuit breaking, rate limiting
Deployment strategies     Blue/green deployments             Blue/green deployments    None
Operational complexity    High                               Low                       Medium

Table 1: Differences between the top service mesh solutions

Monitoring a Service Mesh in Kubernetes

There is a lot of talk around observability for Kubernetes applications, which is a broad topic. There is no single standard way to monitor Kubernetes-native applications, as their architectures and business requirements vary widely. In addition, because Kubernetes distributes containers to various nodes in the cluster, achieving global observability is a complex task.

There are a number of monitoring best practices for distributed applications, commonly summarized as the golden metrics (the Google SRE book calls them the four golden signals). According to that book, they are:

  • Latency: Shows how fast the services are responding.
  • Traffic: Shows the demand for the service in terms of incoming requests.
  • Errors: Indicates the number of failed requests.
  • Saturation: Shows the current load of the application in terms of resource usage.

Observing the golden metrics in your service mesh tools helps to aggregate your data into a single place, giving you visibility of the overall status of your infrastructure, cluster, and applications.
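
To show what tracking the golden metrics through a service mesh might look like in practice, here is a hedged sketch of Prometheus recording rules built on Istio's standard metrics (istio_requests_total and istio_request_duration_milliseconds); the rule names and the 5-minute windows are arbitrary choices for the example.

groups:
  - name: golden-signals              # hypothetical rule group
    rules:
      # Traffic: request rate per destination workload
      - record: workload:istio_requests:rate5m
        expr: sum(rate(istio_requests_total[5m])) by (destination_workload)
      # Errors: rate of requests that returned a 5xx response
      - record: workload:istio_request_errors:rate5m
        expr: sum(rate(istio_requests_total{response_code=~"5.."}[5m])) by (destination_workload)
      # Latency: 95th percentile request duration in milliseconds
      - record: workload:istio_request_duration_ms:p95
        expr: >
          histogram_quantile(0.95,
            sum(rate(istio_request_duration_milliseconds_bucket[5m])) by (le, destination_workload))
      # Saturation: CPU usage of the workload's containers (from cAdvisor metrics)
      - record: workload:container_cpu:rate5m
        expr: sum(rate(container_cpu_usage_seconds_total{container!=""}[5m])) by (namespace, pod)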

Conclusion

Kubernetes facilitates the deployment of scalable applications to the cloud, while service mesh ensures these applications can connect to each other. But monitoring and checking the status of your applications, service mesh, Kubernetes, and infrastructure can be challenging.

Observability of your modern cloud-native stack requires a robust, scalable, and flexible cloud-native monitoring system. Thundra APM offers true end-to-end management of your distributed applications on cloud infrastructure. This empowers developers, SRE teams, and managers by allowing for easy integration into your Kubernetes clusters and seamless monitoring of containers running on virtual machines.

Experience modern observability from development to production. Sign up to Thundra APM for serverless and containers now.