Traditional testing approaches that worked for monolithic applications have proven ineffective in an agile world of rapid, continuous change. In an environment of distributed services, there are even more reasons these methods fall short.
Contemporary applications are often composed of a dozen different services that might not be available to test in the required form at the same time in the same environment. These components could be cloud resources, services from an external provider, or a service provided by an internal team. You may also have buggy dependencies, experience outages, or suffer other intermittent problems that would cause even the most well-written code to fail during tests.
Furthermore, independent services have to be tested individually before being integrated into the larger application, increasing the complexity and difficulty of testing things the "old-fashioned way."
These factors—and many more—make traditional testing approaches less effective in the microservices world. In this article, we’ll equip you with best practices for testing microservices and how to level up your debugging by adding observability to the mix.
Methods for Testing Microservices
Different testing strategies need to be employed at different layers of a microservice architecture. Below you’ll find some of the most common practices organizations can use for microservices testing, and when to use them.
Unit Testing

Although unit testing is often overlooked when testing microservices, it is critical to any testing strategy. It will help you figure out whether your software and its components are working as intended.
Unit testing involves testing the smallest function or unit of an application (preferably in isolation) to validate whether each unit works as expected given a set of known inputs. Unit tests can also help you determine the reliability and stability of functions and classes of a microservices application.
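As a sketch, a unit test exercises a single function in isolation against known inputs. The `calculate_discount` function and its pricing rule below are hypothetical, used only to illustrate the shape of a unit test:

```python
# Hypothetical business rule: members get a 10% discount.
def calculate_discount(total: float, is_member: bool) -> float:
    return round(total * 0.9, 2) if is_member else total

# Each test checks one behavior of the unit with known inputs.
def test_member_gets_discount():
    assert calculate_discount(100.0, is_member=True) == 90.0

def test_non_member_pays_full_price():
    assert calculate_discount(100.0, is_member=False) == 100.0
```

Test runners such as pytest discover and run functions like these automatically; failures pinpoint exactly which unit broke.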
Component Testing

Component testing allows you to independently validate and gauge how each component of a microservice application performs, without integrating other services.
Typically, component tests are easier and quicker to execute than tests that evaluate all of the microservices combined. Because it’s challenging and slow to test against live microservices, you isolate dependencies by replacing them with test doubles or mock servers.
Component tests allow for easy error detection, especially when the tests are executed in a controlled testing environment. They also open a broader array of opportunities for testing individual microservices in isolation.
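As a minimal sketch of this isolation, the following uses Python's standard `unittest.mock` to stand in for a downstream dependency. The `OrderService` class and its inventory client are hypothetical:

```python
from unittest.mock import Mock

# Hypothetical component under test: an order service that depends
# on an inventory service client.
class OrderService:
    def __init__(self, inventory_client):
        self.inventory = inventory_client

    def place_order(self, sku: str, qty: int) -> str:
        if self.inventory.in_stock(sku, qty):
            return "confirmed"
        return "backordered"

# Replace the real inventory service with a test double.
inventory = Mock()
inventory.in_stock.return_value = True

service = OrderService(inventory)
assert service.place_order("SKU-1", 2) == "confirmed"
# Verify the component called its dependency as expected.
inventory.in_stock.assert_called_once_with("SKU-1", 2)
```

The mock lets you control the dependency's behavior (in stock, out of stock, errors) without running the real inventory service.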
Integration Testing

Integration tests verify the interactions and communication paths between the components of a microservice application. This catches interface-related defects and ensures that the various system components interact seamlessly. Integration tests can be organized in different ways, such as top-down, bottom-up, or sandwich (hybrid) approaches.
Because microservices predominantly rely on communication over networks (asynchronously or synchronously) rather than in-process calls, integration testing is needed to examine and fix issues that arise from communication.
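To make that concrete, here is a minimal sketch that stands up a stub HTTP service on a real socket and tests a client over the actual network path. The `/users/1` endpoint, its payload, and the `fetch_user` helper are all hypothetical:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Stub for a hypothetical downstream "user" service, served over a
# real socket so the test exercises the HTTP communication path.
class UserHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"id": 1, "name": "Ada"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

# Hypothetical client code under test.
def fetch_user(base_url: str) -> dict:
    with urlopen(f"{base_url}/users/1") as resp:
        return json.loads(resp.read())

server = HTTPServer(("127.0.0.1", 0), UserHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()
try:
    user = fetch_user(f"http://127.0.0.1:{server.server_port}")
    assert user["name"] == "Ada"
finally:
    server.shutdown()
```

Unlike a mocked in-process call, this test would catch serialization, status-code, and header mistakes that only surface on the wire.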
Contract Testing

Contract testing helps ensure that two separate systems (such as a service provider and consumer, or two microservices) are compatible and can communicate. A contract test captures the interactions between each service, stores them in a contract, and verifies that both parties adhere to it.
For a contract test to be successful, the calls and responses must consistently return the same results, even if the service is upgraded or altered.
Contract tests are repeatable, easy to maintain, and easy to scale. They make microservices customer-driven, and allow engineering teams to easily identify which changes will impact customers.
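The core idea can be sketched in plain Python (dedicated tools such as Pact automate this in practice). The order endpoint and expected field types below are hypothetical:

```python
# A consumer-driven contract: the consumer records the request it makes
# and the response shape it expects from the provider.
CONTRACT = {
    "request": {"method": "GET", "path": "/orders/42"},
    "response": {"id": int, "status": str},
}

# Hypothetical provider implementation under verification.
def provider_get_order(order_id: int) -> dict:
    return {"id": order_id, "status": "shipped"}

def verify_contract(contract: dict, response: dict) -> bool:
    """Check that the provider's response contains every field the
    consumer expects, with the expected types."""
    expected = contract["response"]
    return (set(response) >= set(expected)
            and all(isinstance(response[k], t) for k, t in expected.items()))

assert verify_contract(CONTRACT, provider_get_order(42))
```

If the provider renames or retypes a field the consumer depends on, this verification fails before the change ever reaches an integrated environment.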
End-to-End Testing

The primary purpose of end-to-end (E2E) tests is to verify that a distributed application works as expected from beginning to end, that there are no high-level disagreements between the microservices, and that the system configuration is valid.
These tests are very powerful, but they’re also hard to perform or maintain. It's healthy to reduce the number of E2E tests you execute in each application and shift your reliance to lower-level testing strategies (such as unit and integration tests) for identifying breaking changes.
Overall, a well-planned microservices testing strategy should include a large number of unit tests, a smaller number of integration and component tests, and only a few end-to-end tests.
The Challenges in Testing Microservices
Microservice architectures introduce new fault domains and concerns that engineering teams need to address. This is especially true when testing individual services in isolation and ensuring that the entire architecture integrates appropriately. In this section, we’ll look at some challenges associated with testing microservices.
Lack of Monitoring Tools for Test Environments
Each component in a microservice architecture has transitive and direct dependencies that increase with every feature addition. The complexity of a microservice architecture also increases as the number of components gets larger.
The same is true for tests. More features mean more tests. Managing all of these interdependent components is difficult and sometimes impractical. In the production environment, there are APM and observability tools that help troubleshoot issues, while test environments have little or no visibility, and questions about failed tests are rarely answered. Without visibility, most developers are accustomed to simply re-running the pipeline and hoping it passes.
To write cost- and time-efficient test cases, you need a thorough understanding of a microservice application and its components, plus comprehensive visibility into each element in isolation.
Decentralized Data Management
Unlike a monolithic application that interacts with a single database, each component in a microservices app implements a single piece of business functionality, has its own database, and contains only the domain data relevant to that service.
This decentralized data ownership makes it difficult for engineering teams to manage test data, especially when the microservices use unrelated databases. The challenge of managing data in distributed architectures is the source of many intermittent errors when testing microservices.
In a local environment, data inconsistencies can be debugged. In production environments, APMs and observability tools are saviors. But test environments are different. They’re transient, being spun up and then destroyed after the tests are run. Unlike the production environment, engineers rarely have the tools required to troubleshoot data inconsistency problems quickly.
Transactions Challenges in Microservices
Transactions are easy in a monolithic architecture because they’re local to one service. In a microservice architecture, a transaction is distributed across multiple services that may be called in sequence. If any step of the transaction fails, rolling back to the previous state is much harder, because a single business transaction can consist of multiple local transactions handled by multiple microservices in isolation.
In an e-commerce application, for example, a single order transaction may create an order for the user, reserve stock, and bill the customer, in that order. If billing fails, all previous steps should roll back to the state before the order was placed. But as noted above, rolling back state across microservices is not easy. Maintaining transaction atomicity adds its own layer of complexity, which is hard to simulate in the CI environments where automated tests run.
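This rollback logic is often implemented as compensating transactions (the saga pattern): each step is paired with an action that undoes it, and a failure triggers the compensations in reverse order. The order/stock/billing steps below are hypothetical stand-ins that only append to a log:

```python
def run_saga(steps):
    """Run (action, compensation) pairs in sequence; on failure, undo
    the completed steps by invoking their compensations in reverse."""
    completed = []
    try:
        for action, compensate in steps:
            action()
            completed.append(compensate)
    except Exception:
        for compensate in reversed(completed):
            compensate()
        return "rolled_back"
    return "committed"

log = []

def fail_billing():
    raise RuntimeError("billing failed")  # simulate the failing third step

steps = [
    (lambda: log.append("order created"), lambda: log.append("order cancelled")),
    (lambda: log.append("stock reserved"), lambda: log.append("stock released")),
    (fail_billing, lambda: None),
]

assert run_saga(steps) == "rolled_back"
assert log == ["order created", "stock reserved", "stock released", "order cancelled"]
```

In a real system each action and compensation is a network call to a different service, which is exactly why simulating these failure paths in CI is hard.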
Complex Testing Infrastructure
Each microservice may be developed using a different programming language and framework, resulting in individual microservices having separate runtimes and creating a complex testing infrastructure. As the complexity of a microservice increases, some previously fast tests become inherently slow, leading to longer build times.
Because development teams are focused on delivery goals, these slow tests get little attention as long as the build eventually turns green. But the drop in productivity will be noticed by management, and managers are then tasked with reducing build time. Without good visibility into the testing environment, optimizing build time can take days to figure out.
Going Beyond Automated Tests
We've discussed different strategies for testing microservices, such as unit testing, integration testing, contract testing, component testing, and more. While these techniques help improve code quality, they won't tell you why your tests failed.
In microservice architectures, looking at test logs isn’t enough to understand why a test failed. You need proper observability to understand the root cause of the issues.
Traditionally, developers troubleshoot failed tests by re-running them in their local environments. This is the primary rationale for the long-standing practice of making local and CI environments identical: it makes troubleshooting failed tests in a local environment easier.
Troubleshooting failed tests locally, however, remains a pain for multiple reasons:
- Some tests fail intermittently in CI. If a pipeline runs 20 times and fails only once, the failure is time-consuming to reproduce and debug locally.
- Some tests pass when run individually but fail when run as part of a test suite. Running the whole suite locally to debug one failed test can be time-consuming, especially if the suite takes a long time to run.
- No matter how close a local environment is to a CI environment, it can never be the same. Most CI environments have more processing power and are generally faster than the local development environment.
If you’ve experienced these issues, you’ll appreciate the observability, traces, and metrics Thundra Foresight provides. Foresight gives developers the ability to debug test failures directly in a CI environment in the cloud, just like developers do in the production environment.
One of the advantages of microservices is that they help teams release features faster. But sooner or later—as you add more integration tests, component tests, contract tests, E2E tests, and so on—your CI build time will become the most significant barrier to your team’s agility.
Almost every growing organization has had to deal with slow CI at some point. The Shopify team had to do it, and probably you will too. But optimizing CI build times and testing microservices both require observability into your tests. Paying attention to your test environment will save you many lost hours of productivity each year, and at least as much frustration.
Despite the challenges microservices add to testing, the flexibility and scalability they provide make them worth the trouble. But testing a microservice infrastructure requires a different approach and strategy than testing a traditional monolithic application.
You can meet these challenges head-on by creating a testing strategy that includes writing, managing, and gaining visibility into your tests. This is an especially important step if you’re moving from the monolithic world. Pre-production, production, and post-production microservices testing techniques are all critical to developing high-quality software.
For complete visibility into your test runs, get started with Thundra Foresight and take the pain out of debugging failed tests.