Automated application testing brings up constant questions because so many of the best practices are, in reality, more like opinions. There are many ways to test an application, and too many questions to count: Do the tests need 100% coverage? What is the minimum coverage that would work? Is it best to start with integration tests, and then create unit tests for individual bugs?
Developers are well aware of the importance of running multiple tests on their code, and in theory, a failed test is a good thing. It shows you when something is wrong, which prevents the error from impacting users down the line.
But what should you do when a test fails? A single failing test blocks the pipeline until something is fixed, so users have to wait for other updates and new features in the meantime. Here, we’ll discuss how to fix tests on CI servers, when to take failing tests seriously, and how to simplify the process of fixing them.
Small Obstacles Make Failing Tests Go Ignored
It’s the little things that cause people to give up on testing. A failed result doesn’t tell you much about what went wrong and just leads to extra coding on top of everything else that needs to be written. Those inconvenient, unimportant failed tests can be swept under the rug. It’s only when they’ve had time to become a messy pile that the problem starts to become painful.
More often, the backlog becomes a bit of a test graveyard that nobody wants to go near. There are more important new features to deliver, anyway. That’s how failing tests pile up: not because of big problems late in the process, but because small failures were pushed aside from the very start. These problems snowball unless you have processes in place to make identifying and fixing them easy and early.
Failing Tests Need to Be Addressed
Think about two scenarios. In the first, new releases get blocked and delayed because your CI/CD pipeline deploys only when all tests are successful. In the second, the pipeline is altered to increase flexibility and deploy releases even when tests fail.
No matter what, your delivery slows down or your users start to notice the bugs and faults that inevitably make it into the final releases. Not ideal.
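The first scenario, a strict quality gate, can be sketched as a small decision function. This is a simplification, not the API of any particular CI tool; `should_deploy` and the shape of `test_results` are illustrative names:

```python
def should_deploy(test_results: dict[str, bool]) -> bool:
    """Strict gate: release only when every test in the suite passed.

    `test_results` maps test names to pass/fail outcomes -- a
    simplification of what a real CI server would report.
    """
    # Loosening this condition (the second scenario, e.g. ignoring
    # known-flaky tests) trades release safety for delivery speed.
    return bool(test_results) and all(test_results.values())
```

Either way, the trade-off described above applies: block releases behind the gate, or let failures reach users.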
Tests are meant to support the software development process, and automated testing is key to that. Managed poorly, though, a test suite becomes a collection of useless, burdensome tests that waste resources. And when failing tests are not taken seriously, the results grind down your development team.
Improving Your Failing Test Processes
The keys to a successful failing test process are:
- Minimizing upfront work
- Debugging where the error exists
- Using the right tools for the job
Depending on how much automation is in place, fixing a failed test can be a burdensome series of steps. Sometimes developers are forced to check out older versions of their code to find the bug, replicate the CI environment, or even manually install parts of the system. That is often the picture that comes to mind for a developer whose commit fails in CI.
Reduce the Upfront Work Required
Of course, developers and managers have an interest in reducing upfront work before debugging. When a commit fails, the developer should be notified quickly and given every detail possible that could help them fix the issue.
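As a sketch of what “every detail possible” might look like, a CI hook could assemble a failure summary like the one below. The field names (`test`, `error`, `log_url`) are hypothetical, not taken from any specific CI product:

```python
def failure_report(commit: str, failures: list[dict]) -> str:
    """Format failing tests into a message a CI notifier could send.

    Each entry in `failures` is assumed to carry the test name, the
    error text, and a link to the CI log (illustrative field names).
    """
    lines = [f"Commit {commit[:8]}: {len(failures)} failing test(s)"]
    for failure in failures:
        lines.append(
            f"- {failure['test']}: {failure['error']} ({failure['log_url']})"
        )
    return "\n".join(lines)
```

The point is that the notification itself carries enough context (which test, which error, where the full log lives) that the developer can start debugging without hunting for information first.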
Infrastructure as Code is useful here: it lets developers spin up an environment that exactly matches CI, saving the time they would otherwise spend recreating one by hand before they can start debugging.
Debug Tests in CI Instead of Locally
Ideally, developers should be able to debug issues right inside the CI environment, saving time and eliminating upfront research and work. With the right instrumentation, you can figure out precisely what happened on the server:
- What services were involved in the test scenario, and what code was executed?
- How long did it take for the test to run?
- Did it run into resource limits?
These traces and metrics can be just the right insights a developer needs.
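A minimal sketch of that kind of instrumentation, assuming each test is a plain callable, might look like this. Real CI tracing captures far more context (service calls, executed code paths), and note that Python’s `resource` module is Unix-only:

```python
import resource
import time


def run_instrumented(test_fn, name: str) -> dict:
    """Run one test while recording outcome, duration, and peak memory."""
    start = time.perf_counter()
    passed = True
    try:
        test_fn()
    except AssertionError:
        passed = False
    duration = time.perf_counter() - start
    # ru_maxrss is kilobytes on Linux and bytes on macOS; treat it as a
    # rough signal for "did this test approach a resource limit?"
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return {
        "test": name,
        "passed": passed,
        "duration_s": duration,
        "peak_rss": peak,
    }
```

Emitting records like these from the CI server gives a developer the duration and resource answers above without re-running anything locally.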
A Smoother Delivery Process
Test monitoring tools usually tell developers that there has been an error but offer no insight into how to resolve it. Instead, developers receive incomplete logs that make the task of fixing tests seem even worse.
When they try to re-run failed tests, developers often replicate the environment on their local machines, where they have more control. This kind of replication is not always possible given the complexity of modern cloud environments, and errors reproduced outside the CI/CD pipeline can behave differently, adding to the confusion.
Control over tests and debugging belongs in the cloud and in the CI/CD pipeline to ensure software quality and delivery velocity are at their best.
Use the Tools Developers Need
Observability is key to helping your team understand why tests fail so bugs can be easily and quickly fixed. Foresight is perfect for CI pipeline observability, providing insights into traces, logs, and metrics before deployment.
Our observability experience, gathered from years of cloud application monitoring, has led to useful features for the CI/CD pipeline. Foresight’s record and replay feature (aka Time Travel Debugging), for example, makes it simple to revisit exactly what happened—running on the same hardware that executed your test and eliminating all of the work a developer would normally need to do before debugging.
Foresight keeps the delivery process smooth and helps you fix failed tests before they become a problem. Create your free account and explore Foresight yourself!