Performance Test Readiness: From "Ready, Set, No" to "Ready, Set, Go"

Introduction

You can improve the chances of performance testing success by doing the upfront work in a consistent and comprehensive manner. As the saying goes, “Fail to plan, plan to fail.” It’s better to take the time to plan properly and, in the process, identify gaps that would impede effective performance testing.

You don’t want to assemble a large technical team to participate in test execution only to see them fail due to poor fundamentals in the preparation process. You can set yourself up for success by following a documented process combined with a performance readiness checklist well before executing a test.

Performance testing is a complex discipline that spans multiple functions within the software development lifecycle (SDLC). Performance engineers require a set of skills that are built over many years of learning and practical experience with various techniques and tools.

Good performance is essential to both enterprise applications and e-commerce websites. It’s well known that systems that suffer from poor performance can negatively impact:

  • Brand reputation
  • Customer loyalty
  • Revenue
  • User experience
  • Confidence of success when deploying new application code, configurations, or infrastructure upgrades
  • Employee productivity

Let’s delve further into this topic, starting with “people.” Without addressing this element of performance test readiness, a failure to deliver positive outcomes is almost guaranteed. So how can we position ourselves to succeed?

People

People are your company’s most important asset. It’s critical to have the right people with the necessary skills in the Performance Engineering discipline working with development teams across the SDLC and addressing various aspects of performance.

Beyond individual contributors, leadership support is vital to the successful implementation of a performance testing function. Defining realistic goals, desired business outcomes, and implementation timelines with management will help drive that support.

Far too often, team members with some coding or tool experience are tasked with owning the performance testing activity. This rarely achieves the desired outcomes and places those team members in a precarious position.

Any business that builds and supports large-scale applications should understand that there are different facets to Performance Engineering. At various points during programs and projects, different sets of skills are required. There is no one-size-fits-all approach here, and a perfect combination of skills that fits every organization is not a realistic goal.

We recommend that you assess your organizational needs, evaluate the application environment(s), and determine the roles required based on the different support and development activities your teams need to perform. Most often, we find that individuals possessing the necessary skill sets do not initially reside in-house.

Process

When we talk about processes specific to performance testing, we’re focusing on methodology, standardization, and reusability. To be clear, each performance engagement is unique, so we don’t want to attempt to fit the proverbial “square peg in a round hole” for the sake of being “standardized.” There needs to be a level of flexibility, without sacrificing a consistent and proven methodology.

It’s important that the processes you use are easy to follow and not overly complicated. Document templates should be developed for customary deliverables such as project discovery, level-of-effort estimation, overall performance approach, test plans, execution playbooks, and analysis reports, to name a few.

Since programs and projects vary in complexity, your document templates should accommodate this nuance. Using the “square peg in a round hole” analogy, a test plan authored to evaluate the performance of a large ERP implementation will look very different from a test plan for a single-page application (SPA). Creating lightweight versions of your process documents is a good strategy to avoid introducing unwarranted overhead.

Standardization and reusability go somewhat hand in hand. The creation of standard documentation, allowing for a plug-and-play type of usage, conveys many benefits including reusability. This approach allows all performance team members to apply proven techniques throughout the SDLC and produce a consistent set of high-quality deliverables.

Additionally, and equally important, applying these practices will naturally decrease the chances of a performance testing effort going off the rails. The goal is always to generate high value from testing, and subsequently better business outcomes.

One final point: it’s good practice to implement a system of checks and balances that supports less experienced team members. To develop your junior performance engineers, we suggest:

  • Test plan reviews
  • A shared knowledge base with readily accessible project information
  • Pair programming for performance scripting
  • Maintaining a reusable code repo to further accelerate script development
  • Shadowing senior performance engineers during test analysis, report generation, and result presentations
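To make the reusable code repo idea concrete, below is a minimal sketch of the kind of shared helper that individual scripts could build on. It assumes a Python-based load tool such as Locust, which this article does not prescribe, and the class, method, and endpoint names (BaseApiUser, timed_request, /api/products) are illustrative assumptions rather than anything taken from a real project.

```python
# Minimal sketch of a reusable helper that could live in a shared performance-script repo.
# Assumes the Locust load-testing library (pip install locust); the tool choice and the
# helper/endpoint names are illustrative assumptions, not taken from this article.

from locust import HttpUser, task, between


class BaseApiUser(HttpUser):
    """Shared conventions: common wait time and a uniform request wrapper."""
    abstract = True                 # Locust will not run this base class directly
    wait_time = between(1, 5)       # modeled think time between user actions (seconds)

    def timed_request(self, method: str, path: str, name: str, **kwargs):
        """Issue a request and apply one consistent pass/fail rule for all scripts."""
        with self.client.request(method, path, name=name, catch_response=True, **kwargs) as resp:
            if resp.status_code >= 400:
                resp.failure(f"{name} returned HTTP {resp.status_code}")
            else:
                resp.success()
            return resp


class CatalogUser(BaseApiUser):
    """Example script built on the shared base class."""

    @task
    def browse_catalog(self):
        self.timed_request("GET", "/api/products", name="browse_catalog")
```

Reviewing and maintaining a small shared layer like this during test plan reviews helps keep scripts consistent as junior engineers add new scenarios.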

Technology

We’ve covered the people and process shifts needed to best position a performance testing project for success. We would assert that these are by far the most important parts of the equation, but regardless, we find that tool (i.e., technology) discussions are usually among the first things that come up during project inception.

There’s an oft-quoted line that resonates with many: “A fool with a tool is still a fool.” Tools are simply a means to an end. While tools are required to get the job done within a performance testing engagement, they are not the “end” itself. To achieve the right tool mix, a thorough assessment is necessary. It is essential to understand the following:

  • System load requirements
  • Application/infrastructure technology stack and location
  • End user system access methods
  • Test environments
  • Interfacing system dependencies
  • Test data management
  • Release cadence
  • Current tool ownership
  • New software procurement considerations

Some of the above information will likely be gathered during the discovery phase of the project. However, it’s important to have a firm grasp of these matters, which will help inform tool recommendations and decisions.
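To illustrate the first item on that list, system load requirements are usually translated into a workload model before any tool decision is made. The sketch below is a minimal, hypothetical Python example that applies Little’s Law (concurrent users = arrival rate × time in system) to estimate how many virtual users a target throughput implies; the transaction names and numbers are illustrative assumptions, not figures from this article.

```python
# Minimal workload-model sketch (illustrative numbers, not from this article).
# Little's Law: concurrent_users = arrival_rate * (response_time + think_time)

from dataclasses import dataclass


@dataclass
class Transaction:
    name: str
    peak_per_hour: float      # target peak throughput for this transaction
    avg_response_s: float     # expected average response time (seconds)
    think_time_s: float       # modeled user think time between steps (seconds)

    @property
    def concurrent_users(self) -> float:
        arrival_rate_per_s = self.peak_per_hour / 3600.0
        return arrival_rate_per_s * (self.avg_response_s + self.think_time_s)


# Hypothetical transaction mix used only to show the calculation.
workload = [
    Transaction("search_catalog", peak_per_hour=18000, avg_response_s=1.2, think_time_s=8.0),
    Transaction("checkout", peak_per_hour=3600, avg_response_s=2.5, think_time_s=15.0),
]

for t in workload:
    print(f"{t.name}: ~{t.concurrent_users:.0f} concurrent users at peak")

print(f"Total estimated virtual users: ~{sum(t.concurrent_users for t in workload):.0f}")
```

An estimate like this feeds directly into tool recommendations and decisions, since load-generator capacity and licensing models often scale with the number of virtual users.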


 
