Exploring AI in Software Testing

The Role of AI in Software Testing

Today, we're taking a close look at how AI can support and extend our software quality and testing efforts. I’ll start by noting that I won’t be mentioning specific tools, aside from a few of the more well-known large language models (LLMs). There’s a good reason for that. As you’ve probably noticed, the world of AI testing tools is moving fast. Established companies are tacking on AI features to existing products, entirely new AI-driven platforms are springing up, and niche tools are appearing that focus on single steps within the testing process. By the time you finish reading this article, anything I mention about a specific tool might already be outdated.


Also, I think it’s risky to approach AI in testing tools-first. That mindset feeds the idea that an AI-powered testing tool can magically solve all your problems. That’s a trap. To get the most out of AI in testing, we need to understand how it can best help us, not just rely on whatever the marketing buzz tells us.

The pitfalls of relying solely on AI tools

I remember how people were buzzing about record-and-playback test automation back in the day, and about “codeless” test tools more recently. We’ve all heard similar promises before. The issue is that the expectations are set by the companies selling the tools rather than by the testers who actually use them.

It’s important that you stay in control of how you use AI in your testing. While AI can certainly bring a lot to the table, it’s essential that you set the terms for how you’ll use it. Don’t just jump on the bandwagon because it’s the hot new thing. I’ve seen too many teams adopt the latest trend because of buzz, only to end up worse off because they never figured out how it applied to their actual environment. Let’s not make that mistake with AI.

Re-evaluating the basics of software testing

Before we talk about where AI fits into testing, we need to back up and remember what good software testing is in the first place. Testing isn’t just about checking if something functions; it’s about thinking critically, experimenting, and exposing the weaknesses in a system. And testing has to align with your business needs, not operate in isolation or exist just to “check a box.” 


Good testing gives us the information we need to make decisions about the software. If all we’re doing is proving the system works based on what’s been written in the requirements, we’re missing out on the real value that AI could bring. Even worse, we might find ourselves putting too much trust in AI-generated results when we should be maintaining a level of skepticism.

Let’s consider a common approach to software testing in many organizations:

  • You start with a set of requirements
  • You create tests to cover them
  • And then you demonstrate that the system works based on how someone interprets those requirements.

That’s a narrow, ineffective view. It’s not about finding problems; it’s about proving something works, which defeats the actual purpose of testing. If that’s the mindset, you’ll miss opportunities to benefit from AI.

To make AI work for us, we need to broaden our thinking. Testing involves much more than running a few scripted steps. It helps to map out all the activities you perform as part of software testing and consider critically where AI can assist with each one.

Where to begin with AI in testing

Let’s take a closer look at how AI can add value to testing. In test planning, there’s been a lot of buzz about feeding requirements into tools or LLMs and directly generating automated tests. While that sounds promising, we need to tread carefully. Are the requirements clear and complete? Are we really thinking about risk, or are we just churning out a bunch of generic tests that won’t do much good?

If you’re thinking about how to incorporate AI into your testing process, start by pinpointing where it could be most helpful. Common pain points like test creation, automation, and results analysis are good places to begin. AI can help analyze requirements, generate test ideas and data, and even automate certain tests. It can also help sift through test results and identify potential issues, allowing you to focus your attention where it’s most needed.

1. Test creation:
LLMs and AI-based tools can assist with the process of creating tests, but as discussed above, going directly from requirements to tests that create value for the organization is not realistic. However, if we break that process into steps and evaluate how AI can augment our efforts at each step, we will likely identify significant opportunities to increase our efficiency.
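To make that concrete, here’s a minimal sketch of one such step: asking an LLM for test ideas (not finished tests) from a single requirement, for a human to review. It assumes the official OpenAI Python client with an API key in your environment; the model name and prompt wording are purely illustrative.

```python
# A minimal sketch, assuming the official OpenAI Python client (openai>=1.0)
# and an OPENAI_API_KEY in the environment. Model name and prompt wording
# are illustrative only.
from openai import OpenAI

client = OpenAI()

def generate_test_ideas(requirement: str) -> str:
    """Ask an LLM for test ideas covering one requirement.

    The output is raw material for a tester to review, prune, and extend;
    it is not a finished test suite."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; substitute whatever model you use
        messages=[
            {
                "role": "system",
                "content": (
                    "You are an experienced software tester. Propose concise "
                    "test ideas, including negative and boundary cases. "
                    "Answer with a numbered list."
                ),
            },
            {"role": "user", "content": f"Requirement: {requirement}"},
        ],
    )
    return response.choices[0].message.content

print(generate_test_ideas(
    "Users can reset their password via an emailed link that expires after 24 hours."
))
```

The decomposition is the point: the model proposes, the tester disposes. The ideas still have to be weighed against risk and the realities of your system before any of them become tests.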

2. Test execution:
AI can help with parts of test execution, like prioritizing regression tests based on recent code changes and past results. And there’s been progress in automation, with tools that can dynamically locate UI elements using natural language or self-healing techniques. These features can certainly ease some of the pain points in test automation, but we have to be cautious about leaning too heavily on AI to take over roles that require human insight.
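The prioritization part doesn’t even require an LLM. Here’s a minimal sketch of the underlying idea, with hypothetical data structures standing in for what your coverage tooling and version control would actually supply; the scoring weights are illustrative.

```python
# A minimal sketch of change-based test prioritization. The TestRecord fields
# and scoring weights are hypothetical; real tools derive them from coverage
# data, version-control history, and past run results.
from dataclasses import dataclass

@dataclass
class TestRecord:
    name: str
    covered_files: set[str]   # files this test is known to exercise
    recent_failures: int = 0  # failures over the last N runs

def prioritize(tests: list[TestRecord], changed_files: set[str]) -> list[TestRecord]:
    """Run tests that touch recently changed code, or that failed recently, first."""
    def score(t: TestRecord) -> float:
        overlap = len(t.covered_files & changed_files)
        return overlap * 2.0 + t.recent_failures  # weights are illustrative
    return sorted(tests, key=score, reverse=True)

tests = [
    TestRecord("test_checkout", {"cart.py", "payment.py"}, recent_failures=1),
    TestRecord("test_login", {"auth.py"}),
    TestRecord("test_search", {"search.py", "index.py"}),
]
for t in prioritize(tests, changed_files={"payment.py", "auth.py"}):
    print(t.name)  # test_checkout, then test_login, then test_search
```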

3. Error detection & analysis:
When it comes to error detection and analysis, AI is an incredibly useful tool. It can process large amounts of data, such as performance logs and other application telemetry, to find patterns and flag potential issues that might otherwise be overlooked. This ability to quickly analyze data can help surface problems sooner, reducing the cost of mitigating any identified issues.
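As a toy illustration of the principle (real AI-based analysis is far more sophisticated), here’s a sketch that flags latency outliers in telemetry using a simple statistical test; the sample data is made up.

```python
# A toy sketch of automated telemetry analysis: flag response times that
# deviate sharply from the rest of the sample using a z-score. The data and
# threshold are made up; the point is that the machine sifts the bulk of the
# telemetry and surfaces outliers for a human to judge.
from statistics import mean, stdev

def flag_anomalies(samples_ms: list[float], threshold: float = 2.0) -> list[int]:
    """Return indices of samples more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(samples_ms), stdev(samples_ms)
    if sigma == 0:
        return []
    return [i for i, x in enumerate(samples_ms) if abs(x - mu) / sigma > threshold]

latencies = [120, 115, 130, 125, 118, 122, 950, 117, 121]  # made-up response times (ms)
print(flag_anomalies(latencies))  # -> [6], the 950 ms spike
```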


Avoiding the magic box mentality

AI is not a miracle cure. Used well, it can definitely give your testing efforts a boost, but to make it truly useful, you’ve got to have a strong foundation in software testing. If you don’t set clear goals and understand what you’re trying to achieve, you’ll end up letting others define your objectives.

We also need to avoid treating AI like some kind of magic box. That mindset will stop us from making thoughtful decisions about where AI can really help. Whether you’re using an AI-driven commercial tool or experimenting with your own prompt engineering using LLMs, you need to have a basic understanding of how AI works. This way, you can question its outputs and use your own expertise to decide what’s useful.

LLMs and Prompt Engineering

I also want to mention prompt engineering, a skill that’s becoming increasingly important when using LLMs like GPT in testing. The quality of the LLM’s output depends heavily on the quality of the prompt. Crafting clear instructions, specifying formats, and breaking tasks into manageable steps can significantly improve the results you get from AI.

Check out OpenAI’s Prompt Engineering Best Practices for more information.
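To illustrate those practices, here’s what a structured prompt might look like; the wording is my own, not a canonical template.

```python
# A minimal sketch of a structured prompt: a role, explicit steps, and a
# specified output format. The wording is illustrative only.
PROMPT_TEMPLATE = """\
You are an experienced software tester.

Task: review the requirement below and propose test ideas.
Steps:
1. Note any ambiguities or missing details in the requirement.
2. Propose positive, negative, and boundary test ideas.
3. For each idea, state the risk it addresses.

Output format: a Markdown table with columns Idea | Type | Risk.

Requirement:
{requirement}
"""

prompt = PROMPT_TEMPLATE.format(
    requirement="Search returns results within 2 seconds for queries up to 100 characters."
)
print(prompt)
```

Compared with a bare “write tests for this requirement,” a structured prompt like this tends to produce output that’s easier to review and reuse.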


We can also enhance our prompts through a technique called Retrieval-Augmented Generation (RAG). RAG frameworks augment our prompts by pulling relevant information out of bodies of data related to the system we’re testing, including requirements, tests, product documentation, and code. There are many tools and libraries that let us implement a RAG framework fairly easily.
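Here’s a deliberately small sketch of the idea: embeddings plus brute-force cosine similarity over an in-memory list. It assumes the OpenAI Python client; a real implementation would use proper chunking and a vector store, and the documents here are made up.

```python
# A toy sketch of RAG: embed project documents once, retrieve the chunks most
# similar to a question, and prepend them to the prompt. Assumes the OpenAI
# Python client; a real implementation would use a vector database instead of
# this in-memory list. The documents are made up.
import math
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> list[list[float]]:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [item.embedding for item in resp.data]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

documents = [  # stand-ins for requirements, existing tests, product docs, code
    "REQ-42: Password reset links expire after 24 hours.",
    "REQ-17: Accounts lock after five failed login attempts.",
]
doc_vectors = embed(documents)

def augmented_prompt(question: str, top_k: int = 1) -> str:
    """Build a prompt that includes the documents most relevant to the question."""
    q_vec = embed([question])[0]
    ranked = sorted(zip(documents, doc_vectors),
                    key=lambda pair: cosine(q_vec, pair[1]), reverse=True)
    context = "\n".join(doc for doc, _ in ranked[:top_k])
    return f"Context:\n{context}\n\nQuestion: {question}"

print(augmented_prompt("What tests do we need around password reset expiry?"))
```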

Moving Ahead with AI in testing

So, to wrap things up, here are the main points to keep in mind.

1. AI has the potential to enhance how we approach testing, but it’s not going to replace what we bring to the table. Use it to take care of the more tedious parts, like generating test ideas and test data or analyzing large sets of results, but don’t disengage. Keep questioning the results and make sure you’re staying in control of how AI is being used.

2. It’s important to have a basic understanding of the concepts behind AI. The more you understand, the better equipped you’ll be to ask the right questions and challenge what the AI output is telling you. Just because an AI tool spits out a confident answer doesn’t mean that answer is right. Use your experience to assess it carefully.

3. And finally, don’t forget about data privacy and security. Make sure you’re protecting sensitive information as you incorporate AI into your testing processes. That’s one area where you can’t afford to cut corners.
