
The best worst QA (and Organizational) metrics

Quality Assurance (QA) organizations play a crucial role in ensuring the delivery of high-quality software products. To measure their effectiveness and track progress, QA teams often rely on metrics. However, in their pursuit of quantifiable data, QA organizations can fall into the trap of overusing certain metrics while neglecting the big picture.

One common reason for overusing metrics is the perception that they provide a quick and easy way to measure quality. Metrics that are easily measurable and provide readily available data may be favored, even if they fail to capture the complexity and nuances of the entire quality landscape. Additionally, organizations may have a cultural bias towards metrics that emphasize quantity over quality, such as counting the number of test cases executed or defects found.

Here are the three commonly used QA metrics I’ll cover in this blog post:

  • Automation Coverage Percentage
  • DRE (Defect Removal Efficiency)
  • Test Case Pass Rate

So what insights are these metrics failing to give us?

 

Automation coverage percentage

Automation Coverage refers to the percentage of test cases or functional requirements that have been automated in a software testing process.
While Automation Coverage Percentage is a valuable metric in assessing the extent of test coverage achieved through automation, it should not be relied upon as the sole indicator of overall quality.
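
The calculation itself is trivially simple, which is part of its appeal. Here is a minimal sketch of how it is typically computed (the suite counts are hypothetical, for illustration only):

```python
# Minimal sketch of the Automation Coverage calculation.
# The suite counts below are hypothetical.
automated_cases = 450
total_cases = 600

coverage_pct = automated_cases / total_cases * 100
print(f"Automation coverage: {coverage_pct:.1f}%")  # 75.0%

# What the number hides: whether the 150 unautomated cases include
# the riskiest scenarios, and whether the 450 automated ones assert
# anything meaningful.
```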

Its limitations:

  • Limited Scope: Automation Coverage focuses solely on the percentage of test cases automated, which fails to capture the effectiveness of those tests or the areas that remain unautomated. It may give a false sense of security if a high percentage of test cases are automated but critical scenarios are left untested.
  • Quality vs. Quantity: Automation Coverage does not differentiate between the importance and complexity of the tests automated. It doesn’t consider the depth of test coverage or the quality of the automated tests themselves. Simply automating a large number of tests does not guarantee high-quality testing or reliable results.
  • Maintenance Effort: Higher Automation Coverage often comes with increased maintenance effort. Maintaining a large suite of automated tests requires regular updates, maintenance, and debugging. Focusing solely on coverage percentage may result in excessive time spent on maintaining low-value tests rather than improving the overall quality.

Defect removal efficiency (DRE)

Defect Removal Efficiency (DRE) measures the effectiveness of testing activities in identifying and removing defects, typically expressed as the percentage of all known defects that were caught before release. While DRE can provide some insight into the quality of the testing process, it is a weak proxy for overall software quality. Here are a few reasons why (a quick calculation sketch follows the list):

  • Incomplete Coverage: DRE focuses on the defects found and removed during testing, but it does not account for undetected defects. It is possible for a system to have a high DRE but still contain significant defects that were not identified during testing.
  • Bias towards Detection: DRE measures the effectiveness of defect detection rather than prevention. It places more emphasis on finding and fixing defects after they have been introduced, rather than preventing them from occurring in the first place. While detecting defects is an essential part of ensuring quality, prevention and early intervention are equally important aspects that DRE does not consider.
  • Lack of Customer Perspective: DRE does not directly consider the satisfaction or requirements of end users or customers. It measures the internal effectiveness of the testing process but does not assess whether the software meets the needs and expectations of its intended users. Ultimately, software quality is determined by how well it satisfies users’ requirements.
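
Here is the calculation sketch promised above, with hypothetical defect counts. Note that the denominator can only ever contain defects that were eventually found:

```python
# Minimal sketch of the standard DRE formula; counts are hypothetical.
# DRE = defects removed before release / all known defects * 100
found_in_testing = 95
found_after_release = 5

dre = found_in_testing / (found_in_testing + found_after_release) * 100
print(f"DRE: {dre:.1f}%")  # 95.0%

# Caveat: latent defects nobody has hit yet never enter the denominator,
# and the formula is blind to the severity of the 5 escapes.
```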

Test case pass rate

Test Case Pass Rate measures the percentage of executed test cases that pass. While this metric may seem straightforward, it suffers from several limitations that undermine its usefulness as a comprehensive measure of software quality.

  • Lack of Test Case Coverage: Test Case Pass Rate focuses solely on the pass or fail status of individual test cases without considering the overall coverage of test scenarios. This metric fails to capture the breadth and depth of testing, potentially leaving critical areas of the software untested. A high pass rate does not guarantee thorough coverage or identify areas that may require additional attention.
  • Inadequate Differentiation: The Test Case Pass Rate metric treats all test cases equally, regardless of their importance or impact on the system. It fails to differentiate between critical and non-critical test cases, giving equal weight to trivial and mission-critical functionalities. This can lead to a skewed perception of quality, where a high pass rate may mask significant issues in crucial areas of the software (the sketch after this list illustrates the effect).
  • Neglect of Exploratory and Ad hoc Testing: Test Case Pass Rate focuses on pre-defined, scripted test cases. However, exploratory testing and ad hoc testing, which involve spontaneous and unscripted scenarios, play a vital role in uncovering unexpected issues and user experience flaws. These testing approaches are not adequately captured by the Test Case Pass Rate metric, resulting in a limited understanding of the overall quality of the software.
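
Here is a minimal sketch of that differentiation problem. The test results and severity weights are hypothetical; the point is that a raw pass rate and a criticality-weighted one can tell very different stories:

```python
# Minimal sketch: raw pass rate vs. a criticality-weighted variant.
# Test results and severity weights below are hypothetical.
results = [
    ("login",          "critical", False),  # mission-critical, failing
    ("checkout",       "critical", True),
    ("profile_avatar", "trivial",  True),
    ("footer_links",   "trivial",  True),
    ("help_tooltip",   "trivial",  True),
]
weights = {"critical": 5, "trivial": 1}

raw_rate = sum(ok for _, _, ok in results) / len(results) * 100

total_weight = sum(weights[sev] for _, sev, _ in results)
passed_weight = sum(weights[sev] for _, sev, ok in results if ok)
weighted_rate = passed_weight / total_weight * 100

print(f"Raw pass rate:      {raw_rate:.0f}%")       # 80%
print(f"Weighted pass rate: {weighted_rate:.0f}%")  # 62%
```

The same suite reports 80% or 62% depending on whether the failing login test is weighted by its importance — a gap that a single headline pass rate will never surface.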

I’m not saying the above metrics are bad; I’m saying that overreliance on them will not give organizations a complete picture of where and what they should focus on in order to improve.

 


Bonus: additional QA metrics to consider 

Here are additional QA metrics that organizations should be capturing and evaluating regularly:

  • Code Coverage: Code coverage measures the percentage of code that is exercised by automated tests. It helps identify areas of the codebase that lack test coverage, potentially indicating areas where defects may be lurking. Higher code coverage generally indicates a more thorough testing effort. Widely used tools include JaCoCo (Java), coverage.py (Python), and Istanbul/nyc (JavaScript).
  • Mean Time Between Failures (MTBF): MTBF measures the average time between software failures. It is commonly used in systems where failures can have significant consequences, such as mission-critical or safety-critical systems. A higher MTBF indicates greater reliability and stability.
  • Time to Restore (TTR): TTR provides insights into the organization’s ability to quickly identify, diagnose, and rectify issues, minimizing the impact on users and the business. A shorter TTR indicates a more efficient incident management process, as it demonstrates the team’s agility in responding to and resolving incidents promptly (the sketch after this list shows how MTBF and TTR can be derived from incident records).
  • Customer Satisfaction Score (CSAT): CSAT measures the satisfaction of customers with a product or service by collecting feedback through surveys or ratings. CSAT provides valuable insights into how well a product meets customer expectations and helps identify areas for improvement.
  • Time to Market (TTM): TTM measures the time taken from the product development initiation to the product’s availability in the market. It evaluates the efficiency of the development and release process, including design, development, testing, and deployment. TTM is critical for staying competitive, as a shorter time to market enables organizations to respond quickly to market demands and customer needs.
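
As promised above, here is a minimal sketch of deriving MTBF and mean TTR from incident records. The timestamps are hypothetical:

```python
# Minimal sketch: deriving MTBF and mean TTR from incident records.
# Timestamps below are hypothetical.
from datetime import datetime, timedelta

incidents = [  # (failure detected, service restored)
    (datetime(2024, 1, 3, 9, 0),   datetime(2024, 1, 3, 10, 30)),
    (datetime(2024, 1, 17, 14, 0), datetime(2024, 1, 17, 14, 45)),
    (datetime(2024, 2, 2, 22, 0),  datetime(2024, 2, 3, 1, 0)),
]

# MTBF: average time between the starts of consecutive failures.
gaps = [
    later_start - earlier_start
    for (earlier_start, _), (later_start, _) in zip(incidents, incidents[1:])
]
mtbf = sum(gaps, timedelta()) / len(gaps)

# TTR: average time from detection to restoration.
ttrs = [restored - detected for detected, restored in incidents]
mean_ttr = sum(ttrs, timedelta()) / len(ttrs)

print(f"MTBF: {mtbf}")          # ~15 days between failures
print(f"Mean TTR: {mean_ttr}")  # 1:45:00
```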

To sum up, organizations must move beyond relying solely on preferred or commonly used metrics to evaluate their performance. Instead, they should adopt a comprehensive approach, considering a wide range of data points and metrics to gain a holistic understanding of areas for improvement. This data-driven approach enables organizations to make informed decisions, prioritize initiatives, and drive continuous improvement in the areas that truly matter for process and product quality, ultimately leading to greater customer satisfaction and business success.
