Success. We all seek it.
However, what defines success?
It all depends on the goals we have established.
We often hear that testing is about ensuring there are no bugs. That's wrong in many ways. First, testing doesn't assure anything; testing provides information about quality so the team can decide what to do next, hopefully using it to improve the product's current status. Second, "no bugs" is a platonic ideal; it assumes we all share the same understanding of quality (we don't, by the way), and it also implies that we could test the product in infinite ways (which, of course, we can't).
What, then, is a better goal for testing, one that ultimately defines whether it succeeds? And are there particulars to exploratory testing?
What defines success in exploratory testing?
We can say that, in general, exploratory testing is successful if we can:
- Find relevant information about quality;
- For stakeholders that matter;
- While adopting an effective exploratory testing approach.
Let’s drill down into each one of these.
1. Relevant information about quality
Finding issues without any criteria is easy. Finding problems that matter is harder.
Quality is multidimensional, so we can look at different quality criteria. We cannot look at all of them because we have to manage the available effort/time.
Therefore, we are successful when we can:
- Target quality attributes that matter most to our stakeholders;
- Experiment with and investigate our product timely enough and deeply enough so that we have a good enough understanding of the existence of significant problems (or major problems that may lie ahead).
Looking at the last point, we can see that it has a bunch of subjective aspects: "timely enough"; "deeply enough"; "good enough". It means that, in the end, we will have a subjective quality assessment.
The tricky part is that we tend to equate success in testing with finding bugs. Finding bugs is one of the outputs, not the only one. We can succeed in our testing goals and still not find any problems. That's fine, because our goal is to find information about quality (i.e., perform a quality assessment).
Therefore, although the result of our quality assessment is relevant to the team and other stakeholders, the success of exploratory testing does not depend only on what we find.
Returning to what was mentioned earlier, we are successful if we target relevant quality attributes and can use the best of our testing skills to expose important risks to these attributes.
What is value? What can impact it?
These core questions will help us tailor our testing.
This is not specific to exploratory testing; it applies to testing in general.
2. Stakeholders that matter
In testing, no matter the approach, we look for quality information that is relevant to stakeholders that matter. We are not on the right track if the information we find is useless to them.
Let's see some typical examples of stakeholders and a very, very limited set of what they might care about:
- Customers & end users
- Easily complete successful user journeys;
- Comprehensive feedback whenever problems arise;
- Fast feedback in the UI and "acceptable" feedback in operations;
- Accessible features;
- Ability to track historical changes and operations.
- Business
- Ability to scale with additional demand;
- Ability to turn off features, for example, if they're having issues;
- Ability to try out features on a user segment.
- Product team
- Ability to track features most used and the ones not used;
- Ability to track requests (through monitoring and observability);
- Ability to track errors.
- Marketing
- Ability to track features most used and the ones not used;
- Ability to understand where people spend most of their time.
Are we exploring these, or other concerns identified by our stakeholders, during our testing?
Our testing is only successful when we have the stakeholders' requirements in mind.
3. While adopting an exploratory testing approach
Whenever we think about exploratory testing, we can implement the whole “process” in different ways.
We can look at success from a process perspective, at each “component”. As exploratory testing can be adopted in many different ways, we must look at our own context:
Charters
- Did we plan the "best" test charters? Or could we do better? Why?;
- Were our charters tailored toward our overall testing goals?;
- Does everyone understand the concept of charters?;
- Were we able to commit to our initial testing charter and its mission?
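As a concrete illustration, a charter can be as simple as a structured mission statement. The sketch below uses the well-known "Explore &lt;target&gt; with &lt;resources&gt; to discover &lt;information&gt;" template popularized by Elisabeth Hendrickson; the specific targets and risks in the example are hypothetical:

```python
# A minimal sketch of test charters using the common
# "Explore <target> with <resources> to discover <information>" template.
# The targets and risks below are made-up examples, not real charters.

def charter(target: str, resources: str, information: str) -> str:
    """Render a charter as a single, focused mission statement."""
    return f"Explore {target} with {resources} to discover {information}"

charters = [
    charter("the checkout flow", "invalid coupon codes", "error-handling gaps"),
    charter("the search API", "very long query strings", "robustness issues"),
]

for c in charters:
    print(c)
```

A charter phrased this way makes it easier to answer the questions above: it states what to explore, with what, and for which information, so we can later judge whether we stayed on mission.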
Time-based sessions (i.e., limited-time sessions)
If your team adopts the concept of sessions (or Session-Based Test Management) with a limited, focused time for testing:
- How successful were you in adopting these sessions?;
- Were you able to commit to them and stay focused?
Sessions
- Could you perform the sessions you planned?;
- Did you spend more time exploring/testing the product or doing side work?;
- Did you involve others from the development team in the exploratory testing sessions?;
- Did you go deep enough in exploring the product, given your initial goals? Note that sometimes a bird's-eye view may be enough, while other areas demand more attention due to their implicit risk(s).
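The "testing vs. side work" question above is often answered in Session-Based Test Management by classifying session time as Test, Bug, or Setup. A minimal sketch of that idea, with made-up minute counts:

```python
# A rough sketch of the session-metrics idea from Session-Based Test
# Management: classify session time as Test (on-charter exploration),
# Bug (investigating/reporting), or Setup (side work), then see how
# much of the session actually went into testing.
# The minute counts below are hypothetical example data.

def session_breakdown(test_min: int, bug_min: int, setup_min: int) -> dict:
    """Return the percentage of session time spent on each activity."""
    total = test_min + bug_min + setup_min
    return {
        "test %": round(100 * test_min / total),
        "bug %": round(100 * bug_min / total),
        "setup %": round(100 * setup_min / total),
    }

# Example: a 90-minute session with 60 min testing, 20 min bug
# investigation, and 10 min environment setup.
print(session_breakdown(60, 20, 10))
```

If the setup percentage dominates session after session, that is a signal about the process (environments, data, tooling), not about the testers.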
Process
- Could we combine exploratory testing with traditional scripted testing, like automated tests, to augment existing testing?;
- Did we involve development and other teams in exploratory testing, given that each brings a unique perspective?;
- Could we perform exploratory testing early, even before the software gets implemented?;
- Did we have the opportunity to try out new tools or testing techniques that could leverage our testing capabilities?;
- Based on existing data, can we also perform exploratory testing in production through observability, exploring what is happening now and what might happen in the future?
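The first point above, combining exploratory testing with scripted automation, often works by pinning an exploratory finding down as an automated regression check. The `normalize_coupon` function and the whitespace bug it guards against are hypothetical, used only to show the shape of the idea:

```python
# Hypothetical example: an exploratory session revealed that coupon
# codes pasted with trailing whitespace were rejected. After the fix,
# the finding is preserved as an automated regression check so the
# scripted suite keeps guarding what exploration discovered.

def normalize_coupon(raw: str) -> str:
    """Normalize a user-entered coupon code (the fix under test)."""
    return raw.strip().upper()

def test_coupon_survives_pasted_whitespace():
    # Regression check derived from an exploratory testing note.
    assert normalize_coupon("  save10 \n") == "SAVE10"

test_coupon_survives_pasted_whitespace()
print("regression check passed")
```

This way exploration feeds automation: the session produces the idea, and the scripted suite keeps the knowledge alive.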
Outputs
During exploratory testing, testers can produce a set of outputs beyond a simple pass/fail. Typical outputs include:
- Detailed bug reports for problems identified; these should either have logs, screenshots, videos, or other information that can facilitate their reproduction;
- Ideas for test automation;
- Notes;
- Questions to clarify with stakeholders;
- Ideas for potential improvements;
- A quality assessment, indicating our confidence level in what we tested (can it be shipped?);
- A coverage level, indicating whether we only tested the surface or reached edge cases;
- The overall conclusion of the testing session.
The existence of these outputs can give us an idea of success. However, not all of them need to exist at the same time. Additionally, as a mere example, the quality of a bug report is itself subjective.
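These outputs can be bundled into a lightweight, structured session report. The sketch below is one possible shape; the field names and example values are hypothetical, not a standard format:

```python
# A minimal sketch of a structured session report that bundles the
# typical exploratory testing outputs listed above. The field names
# and example values are illustrative assumptions, not a standard.
from dataclasses import dataclass, field

@dataclass
class SessionReport:
    charter: str
    bugs: list = field(default_factory=list)       # bug report references
    automation_ideas: list = field(default_factory=list)
    questions: list = field(default_factory=list)  # to clarify with stakeholders
    notes: str = ""
    coverage: str = "surface"                      # "surface" | "deep" | "edge cases"
    assessment: str = "unknown"                    # confidence: can it be shipped?

report = SessionReport(
    charter="Explore the checkout flow to discover error-handling gaps",
    bugs=["BUG-123: coupon rejected when pasted with whitespace"],
    questions=["Should expired coupons show a dedicated message?"],
    coverage="deep",
    assessment="ship with known minor issues",
)
print(report.assessment)
```

A consistent structure like this makes sessions easier to debrief and compare, without turning exploratory notes into heavyweight documentation.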
Beware of false success metrics
Beware of false or incomplete metrics preventing you from performing good testing.
A few examples include:
- Having found no issues
- This can either mean that our product is good enough or that we haven't performed testing in a way that could expose issues.
- No, few, or many significant issues found
- No matter how we define success here, and similar to the previous point, this can be a consequence of the current quality status of the product or the testing we have performed on it.
Note: We don't control the quality of the product; we can only get a good enough idea of its current quality.
Success is a path
Measuring success in testing is hard. We should look at what comes from production and use it as feedback to improve our overall testing approach. If major issues escape, that's a clear sign that we must improve.
As testing supports the product team and other stakeholders, we must establish an ongoing conversation with them: to clarify what we're testing and whether it makes sense, to explain what we could not test, and to communicate our findings in the most effective way.
Additionally, we must keep evaluating how exploratory testing fits the way the team works: were we able to incorporate it effectively and efficiently into existing processes?
Achieving success requires continuously improving and incorporating what we learn back into our testing process.