The march of technology has time and again proved irrepressible. Within a single lifetime, the world has gone from committing the phone numbers of parents and close friends to memory to carrying handheld devices that connect us to the world with a few taps on a touch screen.
It is no secret that much of this technological progress has been exponential: society is pushing out more innovations now than it ever did in the past.
It goes without saying that a major fuel for this growth is the computing and software industry, which forms the basis of countless business opportunities. The industry therefore continually revisits its development practices to keep its processes optimal and adjusted to advancements, some of the more recent being cloud computing, blockchain technology, and cybersecurity.
One such development practice is debugging, a practice that has ensured the survival of developers throughout the ages. The ability to validate implementation efforts is a pillar of any innovation we embark on. It is therefore no surprise that debugging has also evolved through the ages, keeping with the goals of convenience and effectiveness in detecting errors.
This evolution of debugging is an intriguing part of the developer’s story. After all, the way we ideate, develop, deploy, and iterate over software is constantly being optimized, with the latest concept in development being DevOps. Therefore, this article aims to cover the stages that debugging practices have gone through, what currently exists, and what the future looks like as we enter a new era of technological advancements.
The Dark Ages
The story of our intrepid developers begins in the early days when computers were still sprouting in the world, taking the form of large mechanical machines incomprehensible to the common man. Depending on the definition of a computer, this story could stretch as far back as the early ages of humanity. However, for the purposes of this article, we will consider ENIAC the first digital computer within the scope of our definition.
As can be expected, debugging ENIAC back then was a completely different experience from what we consider debugging now. However, our saga shall skim through this era of computing and jump straight to modern computing, where debugging first started to take the form of what it is today: logs and breakpoints.
A Jump to the Birth of Modern Software
A lot had to evolve to go from mechanical beasts like the ENIAC to the elegant lines of code we are all accustomed to now. Naturally, debugging changed throughout this evolutionary process as well. Hence we will give ourselves the convenience of jumping through much of this history to the next impactful moment in debugging history: logs and breakpoints.
console.log Is an Old Friend
Logs allowed developers to print statements at defined points of interest in their code’s execution. This provided the ability to follow code execution, however crudely, through complex systems. Developers could track preconfigured warnings and errors, or simply trace the flow of execution, finally gaining the first insights into the state of their software systems. By surfacing these logs in local files or on remote logging servers, developers reached a crucial point in advancing their debugging practices and easing the task of building software systems.
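The idea can be sketched in a few lines of JavaScript. This is a minimal, illustrative logger, not any particular library’s API; the field names and the order data are made up for the example:

```javascript
// Minimal sketch of structured, leveled logging.
// Each entry carries a timestamp, a level, a message, and optional context,
// so the resulting log stream can be filtered and searched later.
function log(level, message, context = {}) {
  const entry = {
    timestamp: new Date().toISOString(),
    level,
    message,
    ...context,
  };
  console.log(JSON.stringify(entry));
  return entry; // returned here only for convenience; real loggers just write
}

log('info', 'order received', { orderId: 'A-1001' });
log('warn', 'inventory low', { sku: 'SKU-42', remaining: 3 });
```

Writing each entry as one JSON line is a common convention because remote log collectors can then parse and index the fields without custom parsing rules.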
The perennial benefits of logging have rightfully secured the concept’s longevity in the fast-paced domain of software. The ability to provide insights without disrupting the execution of the code is a major benefit. Moreover, logs have transformed into what is now known as tracing, to work better with distributed systems and cloud-native microservices. Through tracing, developers can leverage log-based insights to debug the complex distributed systems and software captured in the frenzy of the cloud movement. Logs have definitely proved themselves the developer’s friend in this new era of software.
However, accompanying these benefits is a set of disadvantages that become unbearable, especially at scale. The first issue is that log statements can generate an immense amount of noise, rendering them difficult to read. After all, these log statements need to be printed somewhere, and they can overflow the file to which they are written. As a result, debugging can become extremely difficult, even for those who configured the logs.
Another pitfall is the cost of logging. As mentioned, these logs have to be stored somewhere, and the cost of storing them can easily outweigh the benefits of logging, especially if we are not careful about what we print. Davide de Paolis, technical lead at Goodgame Studios, recently analyzed the cost of AWS CloudWatch Logs in a bid to highlight best practices for avoiding runaway costs. He found that developers who print every log statement they find useful may incur an excruciatingly painful AWS bill. Logging is therefore critical, but the developer is limited by resources and must perform the necessary balancing act.
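The usual balancing mechanism is a log-level threshold: statements below the configured level are dropped before they are ever written, so they cost nothing to store. A minimal sketch of the gate (the level numbers and messages are illustrative, not from any specific library):

```javascript
// Numeric ranks let a single comparison decide whether a line is emitted.
const LEVELS = { debug: 10, info: 20, warn: 30, error: 40 };

// In production we might only keep warn and above; debug/info are dropped.
const threshold = LEVELS.warn;

function logAt(level, message) {
  if (LEVELS[level] < threshold) {
    return false; // dropped: never printed, never stored, never billed
  }
  console.log(`[${level}] ${message}`);
  return true;
}

logAt('debug', 'cache miss for key user:42'); // suppressed in production
logAt('error', 'payment gateway timeout');    // emitted
```

Because the threshold is a single value, it can be raised or lowered per environment (verbose in development, terse in production) without touching any call sites.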
When Developers Hit the Breaks to Take In the View
As the evolution of software development skipped along, we saw the birth of debugging tools. With these debuggers came the famous breakpoints, allowing developers to take snapshot views of their systems. Developers could now execute their codebase up to a specific point and pause the execution. This paused state let them investigate the state of their systems, presented neatly by the IDE and its integrated debugger.
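In JavaScript, this workflow is exposed through the `debugger` statement: when a debugger is attached (for example via browser devtools or `node inspect`), execution pauses at that line with all local state available for inspection; with no debugger attached, the statement is a no-op. The surrounding function and its data here are made up for illustration:

```javascript
function computeTotal(items) {
  let total = 0;
  for (const item of items) {
    // With a debugger attached, execution halts here on every iteration,
    // letting the developer inspect `item` and the running `total`.
    debugger;
    total += item.price;
  }
  return total;
}

computeTotal([{ price: 2 }, { price: 3 }]);
```

The pause-and-inspect loop is exactly the "snapshot view" described above: the program is frozen mid-flight so its state can be read at leisure.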
Debugging tools were revolutionary for software development, especially once integrated with the IDE. Now, almost no modern software is developed without them. However, we have reached a point in our timeline where even this remarkable achievement is becoming outdated for the present-day developer.
This is because software systems now leverage the advancements in cloud computing and distributed systems, exposing gaps that breakpoints in traditional debugging tools simply cannot fill. One issue is that using breakpoints requires running the codebase in debug mode. We are therefore not replicating the actual state of our systems, which involves multi-threading, distributed services, and dependencies on remote services in a cloud-native, multi-service architecture.
There are attempts to get around these issues, but at best they barely scratch the surface of the bigger problem. One such remedy is Attach to Process, which attaches the debugger to a running process and lets developers set breakpoints in code that was not started from a debugger. The process itself, however, is cumbersome and difficult to configure and maintain.
Therefore, considering the new era of technology and software, we see a movement to redefine debugging practices to better suit the new world.
New World New Debugging Strategy
As the developer now embarks on their journey into the new world, it is evident that traditional logging and breakpoints will be insufficient to protect them from the perils that lie ahead. Hence, we are already seeing the formation of new concepts such as Non-Breaking Breakpoints, which aim to combine the best of logs and breakpoints.
Non-Breaking Breakpoints, encapsulated in a new age of debuggers, allow developers to debug their code without enduring any interruptions to their systems; in other words, they allow debugging in live environments. Combined with trace logs, these debuggers also overcome the difficulties that arose from the rise of cloud computing and distributed systems.
These new tools are a stark contrast to what was available in the early days. Step-by-step debugging in a remote environment was simply a dream back then. The birth of technologies such as Lambda Functions and Kubernetes was scary for those who thought about debugging. However, the developer no longer has to fear these technologies, as a new era of debugging is being ushered in to tackle the issues of the new world: Remote Debugging.
Remote Debugging, leveraging Non-Breaking Breakpoints, allows developers to set non-intrusive breakpoints at any line of their codebase running in any environment. The Remote Debugger captures crucial insights, such as messages and snapshots containing variable states, at the point of the Non-Breaking Breakpoint. Better yet, developers obtain all these insights without disrupting the flow of their systems.
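To make the contrast with a classic breakpoint concrete, here is a very rough sketch of the underlying idea: record a snapshot of selected variables at a line of interest and keep running, instead of pausing. The `capture` helper and the `snapshots` sink are invented stand-ins for what a real remote debugger's agent and collector would do:

```javascript
// Stand-in for a remote debugger's collector; a real tool would ship
// snapshots off-process instead of keeping them in memory.
const snapshots = [];

// A "non-breaking breakpoint": record state, never pause, never throw.
function capture(label, variables) {
  snapshots.push({ label, at: Date.now(), variables: { ...variables } });
}

function applyDiscount(price, rate) {
  const discounted = price * (1 - rate);
  // Execution continues immediately past this line; the snapshot is
  // collected on the side for later inspection.
  capture('applyDiscount:discounted', { price, rate, discounted });
  return discounted;
}

applyDiscount(100, 0.2); // runs to completion; one snapshot recorded
```

The key property is that the instrumented function behaves identically with or without the capture, which is what makes this style of breakpoint safe to use against live, serving traffic.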
The developer’s journey into the new world has just begun, and concepts such as remote debugging and Non-Breaking Breakpoints are still evolving to exactly meet the demands of the new age. Luckily these developers are not alone on their journey as companies such as Thundra are providing Remote Debugging solutions to help developers in their progress. Thundra’s solution is even more aptly named Thundra Sidekick, providing the much-needed sidekick that the developer needs in this journey.
As software practices and the way we release our systems to the world evolve, supporting practices are evolving too. Developers have already witnessed breakthrough after breakthrough in debugging tools throughout this perpetual evolution, with each stage accompanied by a new debugging tool or concept. With cloud computing and distributed systems on the rise, we also see a rise in remote debugging, and third-party tools are taking notice. As we move steadfastly through this new era, we can only imagine what the next era of tooling will look like. All we know is that the developer’s never-ending journey will be dotted with many achievements.