
What to look for in a Quality Engineering toolchain

Written by Ivan Filippov | Apr 4, 2024 1:01:39 PM

The transformation from “testing” to “Quality Assurance (QA)” and then to “Quality Engineering (QE)” has been quite impressive, and it likely has a few more surprises in store. In our previous article, “Top 5 software testing trends of 2024”, we highlighted the overall “quality at speed” theme and several key trends such as DevOps and automation.

But what exactly do those trends mean for you as a buyer and user of QE tools? In this article, we will focus on which features you should prioritize in 2024 and beyond.

 

Toolchain Analysis

To set the context, we will review the enterprise QE workflow across the following stages:

  • Stage 1 - Requirements Engineering;
  • Stage 2 - Test Design and Management;
  • Stage 3 - Test Execution;
  • Stage 4 - Continuous Improvement.

Stage 0

Before we dive in, a few top-level strategy notes:

 

  1. Holistic Enterprise Quality Management System (eQMS) mindset

While the stages above have unique aspects and are performed by different tools, make no mistake - we can no longer view them separately. In a world driven by Agile and DevOps, a holistic approach to quality is critical. If the parts are great but do not fit together, they are often not worth it: the tools have to speak “the same language” (format-wise, metric-wise, etc.), and “connectivity” and “traceability” features are vital at each stage. Therefore, we need to evaluate an enterprise quality management system (eQMS) as a whole.

The goal is to streamline and automate the processes, resulting in an efficient and coordinated workflow that accelerates the delivery of high-quality software. Such a platform acts as a single source of quality truth, from release management to deployment, with integrated tooling, quality checks, and metrics that meet business requirements. The tools responsible for stages 1 and/or 2 typically act as the “heart”, orchestrating activities across the pipeline and enhancing collaboration between development and operations teams.

Within this QE mindset, and although this article focuses on tools, we cannot forget the human component. “Full-stack” quality engineers are expected to master a mix of soft and hard skills to collaborate effectively with technical and business stakeholders and to ensure quality is built in from the outset. They should also be competent with a range of manual, exploratory, and automated techniques.

 

  2. Configuration and deployment flexibility

Cloud-based, as-a-service tools have been growing in popularity, but we are not quite in a “cloud-only” world: most companies and projects still require a hybrid approach to some extent. So, across the stages, you should evaluate not only a tool’s cloud capabilities but also its options for seamless transitions and sufficient migration support between deployment types.

 

  3. AI and automation

Given the demand for speed and efficiency, an automation-first approach is a natural goal for the entire QE toolchain. Smart technologies (including, but not limited to, AI) have applications across the stages and are definitely worth paying attention to and trying out. If you are interested in more AI-specific insights, check out our latest blog posts on the topic.

 

Stage 1 - Requirements Engineering

To make sure we are on the same page: we refer to all the activities related to requirements as Requirements Engineering (RE). This stage is the backbone of project success; done right, it allows you to avoid numerous risks, wasted time, and quality and traceability issues upfront.

Key themes to evaluate:

  • collaboration; 
  • lack of ambiguity. 

In modern development practices, engineers need a complete understanding of the service/product and its use cases. Only then can a QE team tell if something has been implemented properly and if it satisfies the experience expectations of users. The reverse is also true - business stakeholders should not create requirements without assessing technology trade-offs.

That’s why collaboration is critical to align product quality risks and the associated QE activities with business value. The tools in this stage should excel at bringing different parties together and at creating artifacts in varied formats that serve as common ground for brainstorming.

The specific features to prioritize are multi-user editing (with version history), enhanced post-meeting documentation support (e.g. ability to easily convert brainstorming notes into requirement entities), and format diversity (text, graphics, recordings/videos, mock-ups/prototypes/simulations).

Second, there should be features to ensure clarity, consistency, and lack of redundancy. Common implementation examples include templates, quality gates (look out for our upcoming article on this topic), and a formal review process with permissions and approvals.

 

Stage 2 - Test Design and Management

One of the key priorities for this stage is comprehensive end-to-end validation at speed, without over-testing or significant defect slippage. 

Key themes to evaluate:

  • versatility of test strategies;
  • speed vs thoroughness;
  • low-code and maintainability.

Test strategy in this case refers specifically to test creation and planning.

On the creation side, versatility should first include support for different test types, covering both the approach (e.g. scripted step/expected result, BDD, recording, code-based, or exploratory) and the role (unit/functional/SIT/E2E/security/performance). Bonus points for being able to easily distinguish between all these types and to categorize them accordingly (e.g. as “Test” issues of four types, with labels for the role).
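For code-based suites, a lightweight way to keep types and roles distinguishable is tagging tests with markers. Here is a minimal pytest sketch; the marker names and the `authenticate` stub are purely illustrative, not an Xray feature:

```python
import pytest

def authenticate(user: str, password: str) -> bool:
    """Stand-in for the system under test (illustrative only)."""
    return user == "demo_user" and password == "demo_pass"

@pytest.mark.functional   # role-level marker; custom markers are registered in pytest.ini
def test_login_with_valid_credentials():
    assert authenticate("demo_user", "demo_pass")

@pytest.mark.security     # run only this category with: pytest -m security
def test_login_rejects_injection_attempt():
    assert not authenticate("' OR 1=1 --", "anything")
```

Filtering a run by marker (for example, `pytest -m security`) then gives you a quick, role-based slice of the suite without a separate repository structure.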

 

Second, the creation process should be accelerated with algorithm-driven features that can help balance speed and thoroughness (see the sketch after this list):

  • flexible generation settings (e.g. combinatorial, orthogonal array, linear expansion, random, etc.);
  • programmatic rule handling with complex dependencies;
  • data coverage analysis at the test case level.
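As a minimal illustration of the first two bullets, the sketch below generates a full combinatorial matrix, applies a dependency rule, and then samples a reproducible subset. The parameters and the rule are hypothetical; dedicated tools offer far richer settings (orthogonal arrays, linear expansion, etc.):

```python
import itertools
import random

parameters = {
    "browser": ["chrome", "firefox", "safari"],
    "locale": ["en-US", "de-DE", "ja-JP"],
    "account_type": ["free", "premium"],
}

# Thoroughness: exhaustive combinatorial coverage (3 x 3 x 2 = 18 cases).
all_cases = [dict(zip(parameters, values))
             for values in itertools.product(*parameters.values())]

# Programmatic rule handling: drop combinations a dependency rule forbids
# (hypothetical rule: the "safari" + "free" combination is not supported).
valid_cases = [case for case in all_cases
               if not (case["browser"] == "safari" and case["account_type"] == "free")]

# Speed: a reproducible random subset when the full matrix is too expensive to run.
random.seed(42)
quick_pass = random.sample(valid_cases, k=6)

print(f"{len(all_cases)} total, {len(valid_cases)} valid, {len(quick_pass)} sampled for a quick pass")
```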

Security testing support deserves a special mention. With the world becoming increasingly digital, incorporating robust cyber security measures into QE practices is no longer optional. Reputational damage and financial losses from breaches can drastically affect businesses. Therefore, we can expect tools for penetration testing, vulnerability scanning, and threat modeling to become indispensable in the QE toolchain. 

On the planning side, the flexibility of repository management, with different ways to group, filter, and assign test cases, is important to evaluate. Another aspect is clearly identifying what is still useful: features like archiving and test case versioning help avoid storing excessive amounts of past artifacts.

Lastly, low-code testing platforms enhance collaboration between development and QE teams and accelerate test creation. These platforms streamline the testing process by enabling testers to create automated test cases with minimal coding expertise. Furthermore, low-code test artifacts are typically easier to maintain, which allows organizations to adapt more readily to rapid software changes and to keep saving time beyond the first design iteration.

 

Stage 3 - Test Execution

We will focus on the desired features for tools specializing in automated and exploratory testing since manual execution is typically done from the test management tool. 

Key themes to evaluate:

  • versatility of supported platforms;
  • consistency of results.

To support all the versatility mentioned in this and the previous stages, keep in mind the features for streamlined test data and environment management. Regulatory requirements, like GDPR and CCPA, continue to necessitate stringent data protection practices. That increases the popularity of synthetic data generation, which can be a very effective feature as long as the generated data is meaningful, diverse, secure, and as unbiased as possible.
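As a minimal sketch of this idea, the example below generates privacy-safe customer records. It assumes the third-party Faker library (pip install faker); the field names and locale mix are illustrative:

```python
from faker import Faker

Faker.seed(1234)                           # a fixed seed keeps data sets reproducible
fake = Faker(["en_US", "de_DE", "ja_JP"])  # a locale mix adds diversity and reduces bias

def synthetic_customers(count: int) -> list[dict]:
    """Generate customer records that contain no real personal data."""
    return [
        {
            "name": fake.name(),
            "email": fake.email(),
            "address": fake.address(),
        }
        for _ in range(count)
    ]

if __name__ == "__main__":
    for record in synthetic_customers(3):
        print(record)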

For some of your environment needs, you can also keep an eye on Testing Infrastructure as a Service (TIaaS) offerings to automate the setup and management of testing infrastructure through code. This approach can enhance scalability, flexibility, and cost-effectiveness. 
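To make “infrastructure through code” concrete, here is a minimal sketch that spins up and tears down a disposable browser node for a test run. It assumes the Docker SDK for Python (pip install docker), a local Docker daemon, and uses the public selenium/standalone-chrome image purely as an example:

```python
import docker

client = docker.from_env()

# Spin up an isolated browser node for this test run...
node = client.containers.run(
    "selenium/standalone-chrome",
    detach=True,
    ports={"4444/tcp": 4444},
    name="qe-browser-node",
)

try:
    print(f"Test environment ready: {node.name} ({node.short_id})")
    # ...point your automated tests at http://localhost:4444 here...
finally:
    # ...and tear it down, so the infrastructure only costs what the run needs.
    node.stop()
    node.remove()
```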

Back to the features for stage 3: most products and services nowadays have omni-channel releases. Until a true “all-in-one” tool comes along, you will need to evaluate each tool’s breadth of supported platforms (e.g. mobile, desktop) and underlying technologies (e.g. Java, .NET, CEF) depending on the needs of your projects. Large enterprises will likely have to use two or more automation frameworks.


Regardless of the execution environment, you need to look for features that improve consistency from two angles (see the sketch after the list):

  1. Flakiness - not only self-healing, but also features that let you easily reproduce results (e.g. logs, evidence gathering);
  2. Deliverables - reports that are consistent structure- and metric-wise (e.g. a summary for each test, elapsed time in the same units, etc.), with a customizable level of detail to satisfy different stakeholders.
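One way to picture both angles is a uniform result record: every run reports the same fields, elapsed time in one unit, and evidence paths for reproduction. The sketch below is illustrative only; the field names, statuses, and log path are assumptions, not a prescribed Xray format:

```python
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class TestResult:
    name: str
    status: str                      # "passed" / "failed" / "skipped"
    elapsed_seconds: float           # always seconds, never mixed units
    summary: str = ""
    evidence: list[str] = field(default_factory=list)  # log/screenshot paths for reproduction

def run_with_evidence(name: str, test_fn) -> TestResult:
    """Run a callable test, timing it and recording evidence on failure."""
    start = time.perf_counter()
    try:
        test_fn()
        status, summary, evidence = "passed", "completed without errors", []
    except AssertionError as exc:
        status, summary = "failed", str(exc)
        evidence = [f"logs/{name}.log"]   # hypothetical path to gathered logs
    return TestResult(name, status, round(time.perf_counter() - start, 3), summary, evidence)

if __name__ == "__main__":
    result = run_with_evidence("login_smoke", lambda: None)
    print(json.dumps(asdict(result), indent=2))  # consistent, machine-readable deliverable
```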

If you are interested in more specifics about Xray and the Xray Exploratory App for this stage, please see our dedicated article.

 

Stage 4 - Continuous Improvement

The initial run through the optimized stages 1 through 3 is rather complicated, no doubt, but doing it for every release is the truly tricky part. The QE evolution forces us to move beyond just “reporting & analytics”. Identifying areas for adjustments and implementing changes in a timely manner is critical for maintaining high product quality. 

Tech-driven continuous improvement, with tools such as data analytics and performance monitoring software, enables businesses to identify bottlenecks, inefficiencies, and quality issues, allowing for swift corrective actions. It also ensures consistent adherence to quality standards and protocols across the organization.

Key themes to evaluate:

  • observability;
  • business intelligence.

In the world of DevOps, monitoring alone is rarely sufficient - observability becomes an integral quality mechanism. It brings a wider scope and visibility by incorporating extra situational and historical data. Observability looks at the distributed system as a whole and enables investigation into the root cause of issues that arise from multi-component interactions.
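As a minimal sketch of what that instrumentation can look like in practice, the example below adds trace spans to a multi-step flow. It assumes the opentelemetry-api package (pip install opentelemetry-api); without an SDK and exporter configured it runs as a no-op, but the instrumentation points stay the same. The service and span names are illustrative:

```python
from opentelemetry import trace

tracer = trace.get_tracer("checkout-service")  # illustrative component name

def place_order(order_id: str) -> None:
    # One span per step lets root-cause analysis follow a single request across components.
    with tracer.start_as_current_span("place_order") as span:
        span.set_attribute("order.id", order_id)
        with tracer.start_as_current_span("reserve_inventory"):
            pass  # call the inventory component here
        with tracer.start_as_current_span("charge_payment"):
            pass  # call the payment component here

place_order("A-1001")
```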

Also, to maximize the value of failure analysis from QE efforts, you would want robust error-handling and messaging capabilities from all the tools in the previous stages.

Lastly, you will need powerful BI tool features to synthesize, process, and visualize the data in the way that best serves current business needs. And, going back to the “connectivity” we mentioned in Stage 0, arguably the most important part of this stage is having the features to feed the results back into stage 1 and to incorporate them into new release iterations.

 

“The puzzle of the next-gen quality engineering approach”

The World Quality Report 2023-24 highlights that quality is now a boardroom priority and emphasizes the evolution from conventional testing to agile quality management practices. To underscore some of the key takeaways from the report:

  • Boost business performance by embracing change and fostering a culture of continuous improvement;
  • Have a set of key KPIs or metrics that can be easily understood across the organization;
  • Accelerate the inclusion of cloud and infrastructure testing as part of the software development lifecycle, to improve resiliency, security, redundancy, and data recovery;
  • Give equal priority to the non-functional aspects (performance, security, scalability, usability) as they significantly influence the end-user experience.

As quality assurance matures into quality engineering, enterprises find themselves compelled to adapt and innovate, striving to keep pace with the ever-evolving demands. Embracing these trends is not just about achieving more robust and reliable software; it is about strategically positioning businesses for success in the increasingly competitive digital world.

Xray fits the QE transformation journey thanks to:

 

If you are still looking at testing from a QA perspective, maybe it's time to embrace some of the points we mentioned. We encourage you to start conversations within the team to identify areas for embedding and improving quality, then choose the right tools to accomplish that.