4 Reasons Why Automated Testing Fails

Most folks in the IT industry would agree that automation makes things easier. They would also agree that testing is needed to deliver quality products. Given that most of us agree on these points, the idea of automated testing should be a slam dunk.

The reality is that automated testing isn’t always a slam dunk. I frequently encounter leaders who question the value of automated testing, even after their organizations have invested considerable time and money into building frameworks and expanding automation coverage.

The reason for this skepticism is that automated testing often fails. Drawing on both successful and unsuccessful test automation projects, here are four of the most common causes of those failures I’ve observed, and how they can be avoided.

Misunderstanding (and Miscommunicating) the Value of Automated Testing

One of the biggest “fails” occurs when an organization expects that automated testing alone will make a product better. Automation will shed light on problem areas, whether poor code, application design, or other challenges, but it’s up to the product team to either patch or refactor any problems that surface.

Phrases like “We have too many bugs. We need automation!” or “We need to automate our testing so we can release faster!” cause automation engineers to wince. Both phrases betray a misunderstanding of the role of automation in development. Automation helps detect bugs; it doesn’t fix them.


This confusion about the role of automation often causes business leaders to question the value of automated testing altogether. After an automation platform is built (an undertaking that requires a substantial investment), businesses want to know how their money is being spent. Months after the investment, if the number of bugs hasn’t decreased and delivery is still slow, it’s natural that leaders would question the value of automated testing.

For automation to produce value, it must be viewed as a complementary tool that streamlines the work of a quality assurance (QA) analyst. If you think of automated testing as a savior that will fix delivery and quality issues, you will almost certainly be disappointed.

Automation engineers and delivery teams have the responsibility to effectively communicate the value and capabilities of test automation to business leaders. Without clear goals and expectations, failure is likely.

Not Treating Automation as a Necessary Part of Product Development

“Automation: it’s just a bunch of scripts.”

I’ve heard this statement way too many times. One of the reasons this misconception exists is that, for many organizations, automation is just a series of scripts.

When it comes to building software, organizations usually talk about building a quality product. “It should be done right.”

I agree. But “right” can mean many things.

Usually, we begin by trying to understand what value a software product can provide, followed by what it takes to properly design it. When developing the product, we think about code maintainability and the ease of deployment. Throughout this process, proper logging and the ability to capture and analyze run-time metrics are as important as the actual application’s features.

Somehow, when it comes to test automation, most software development best practices are put aside. Automation in testing turns into the conversion of manual test cases to code. I’ve seen organizations build thousands of scripts—all with little structure or value. These scripts are impossible to maintain, and no one knows what’s being covered or executed, often because “the guys who built those are no longer here.”

Automated testing platforms, QA, and test code can’t be treated as second-class citizens. These practices aren’t a luxury—they’re essential to the production of quality software and must be built, maintained, and refactored to evolve with an application. To regard QA as anything less ensures that it won’t succeed. Test code isn’t “like coding.” It is coding.
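
To make that concrete, here’s a minimal sketch of what treating test code as real code can look like. The tools and names (pytest, Selenium, a LoginPage page object, the element IDs and URL) are my own illustrative assumptions, not anything prescribed above; the point is that locators and behavior live in one reviewable, reusable place instead of being copy-pasted across thousands of scripts.

```python
# Illustrative sketch only: shared setup and page behavior live in reusable,
# reviewable units instead of being duplicated across hundreds of scripts.
# pytest, Selenium, and all locators/URLs below are hypothetical choices.

import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By


class LoginPage:
    """Page object: one place to maintain locators and login behavior."""

    def __init__(self, driver, base_url):
        self.driver = driver
        self.base_url = base_url

    def login(self, username, password):
        self.driver.get(f"{self.base_url}/login")
        self.driver.find_element(By.ID, "username").send_keys(username)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "submit").click()


@pytest.fixture
def driver():
    # Assumes a local Chrome/chromedriver setup; swap in your own driver.
    d = webdriver.Chrome()
    yield d
    d.quit()


def test_valid_login_shows_dashboard(driver):
    LoginPage(driver, "https://app.example.com").login("demo", "secret")
    assert "Dashboard" in driver.title
```

Structured this way, a locator change is a one-line fix reviewed like any other code change, rather than a hunt through thousands of scripts.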

Reinventing the Wheel

How often have you been asked, “What framework are you planning on using?” Having been on the receiving end of dozens of automation consulting pitches, I’ve seen far too many proposals explaining how “Framework A” is better than “Framework B,” and why I should invest in it.

In the early days of automated testing (before open-source projects were widespread), some testing framework companies held a competitive advantage. Those companies turned their frameworks into successful products and ended up selling their tools (or the whole company) to the highest bidder.

Today’s world is a bit different. Yes, new technologies, languages and testing techniques are introduced almost every day, but when it comes to automated testing, we should think about making things simple and reusable. Pick a widely used framework with a large user community and run with it. While you may not be seen as “the most innovative person on the block,” your team will thank you for not introducing yet another custom piece of software to their already complex software ecosystem.

To clarify, I’m not saying that there’s no room or place for custom solutions. There are many situations where existing tools and frameworks may not work. Ideally, the decision to use a “custom framework” should be driven by the true needs of the organization, rather than be a gimmick that gets a new vendor or technology in the door.

A Lack of Transparency

Automated testing usually comes in at the tail end of the development process, and quite often without being tracked or talked about. Change that. There is no value in building something in a vacuum and not talking about it. Status and progress reporting should be as important as the automated scripts themselves. What good is a test if no one knows the results?

With reporting, it can be difficult to know where to start. Here are some ways, even without a robust reporting system, you can make testing more transparent:

  • Working with multiple environments? Start with one automated test and show how this test runs across these environments. Make one test run from beginning to end, then add another test. Repeat and scale this process.
  • Working with a DevOps team? Work with them to integrate automated tests into their continuous delivery (then, continuous testing) pipeline to aid with environment and deployment validation.
  • Working with distributed development teams? Create a simple dashboard showing your daily test run status. Or just start with a simple daily email (a minimal sketch follows this list).
  • Need to “report up” test automation progress? Create a one-pager with a few bullet points describing business value rather than the number of tests executed.
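
To make the daily-email idea concrete, here’s a minimal sketch. It assumes the nightly run produces a JUnit-style results file (for example, `pytest --junitxml=results.xml`) and that an SMTP relay is reachable; the hostnames and addresses are placeholders, not anything prescribed above.

```python
# daily_test_summary.py
# Minimal sketch of the "simple daily email" idea. Assumptions (not from the
# article): the test run writes a JUnit-style results.xml, and an SMTP relay
# is reachable at smtp.example.com. Adjust names to your environment.

import smtplib
import xml.etree.ElementTree as ET
from email.message import EmailMessage

RESULTS_FILE = "results.xml"            # produced by the nightly test run
SMTP_HOST = "smtp.example.com"          # hypothetical mail relay
RECIPIENTS = ["delivery-team@example.com"]


def summarize(path: str) -> str:
    """Read a JUnit XML report and return a one-line status summary."""
    root = ET.parse(path).getroot()
    # JUnit reports may wrap suites in a <testsuites> element.
    suites = root.findall("testsuite") if root.tag == "testsuites" else [root]
    tests = sum(int(s.get("tests", 0)) for s in suites)
    failures = sum(int(s.get("failures", 0)) for s in suites)
    errors = sum(int(s.get("errors", 0)) for s in suites)
    skipped = sum(int(s.get("skipped", 0)) for s in suites)
    passed = tests - failures - errors - skipped
    return (
        f"Daily automated test run: {passed}/{tests} passed, "
        f"{failures} failed, {errors} errors, {skipped} skipped."
    )


def send_summary(body: str) -> None:
    """Send the summary as a plain-text email."""
    msg = EmailMessage()
    msg["Subject"] = "Daily automated test status"
    msg["From"] = "qa-bot@example.com"
    msg["To"] = ", ".join(RECIPIENTS)
    msg.set_content(body)
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)


if __name__ == "__main__":
    send_summary(summarize(RESULTS_FILE))
```

Once a summary like this is flowing every day, a dashboard or full pipeline integration becomes a natural next step rather than a big-bang project.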

The Verdict: Why Test Automation Fails

To sum up, successful test automation projects are not just about code. They’re about level-setting on what business value you’re trying to achieve. That focus on value requires proper planning, keeping things simple, and being transparent. With all of these in place, automated testing can be, and should be, a slam dunk.

by Alex Lukashevich

Alex’s role is to empower people and companies through the use of technology to help them define and deliver value. He specializes in design and development and holds several patent applications in the U.S. and abroad.
