Many questions arise when adopting exploratory testing (ET), primarily due to misunderstandings about what it means.
Exploratory testing is mostly unstructured, especially compared to the traditional scripted approach, which is highly detailed and restrictive; that makes it harder to connect it to the word “plan.” Some even call it ad hoc.
Does all this mean that we can’t prepare ourselves for exploratory testing and establish some level of plan for it? Not really. Let’s find out how.
Let's say we have a new system to test, a new feature, or an evolution of an existing one. What can we do to prepare ourselves to test better? Do we need to read all the current documentation extensively? Do we need to look at all previous tickets?
Recapping one possible definition of testing: testing is about uncovering relevant information about different aspects of quality, as seen by the various stakeholders (internal and external) who matter in our context.
Let's start by understanding which stakeholders have a significant say in quality, and try to answer some questions.
In general, we can prepare ourselves for testing in several ways.
To perform testing that gives us insights into quality, we need skills, experience, and experimentation (learning by continuously playing with the product).
As exploratory testing requires exploring the product in many different ways and learning while we do so, it calls on many different skills. All of them are essential for successful exploratory testing or, if you want, for testing in general.
There are several ways of exploring our product; we can use different tours as a high-level guide for our testing journey. We can use them as a plan for what we aim to test, but there are alternatives.
Let's take a step back and reflect.
What is our mission? Do we aim to get a bird's-eye view of the product and use testing mostly to learn? Do we seek to push for critical issues? How much time do we have? How mature is our product? And our testing? Do we have any level of test automation in place?
We'll discuss four simplified ideas for focusing your testing without using the tour concept; your overall plan will likely mix these ideas with others. In the end, our plans are materialized as charters.
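A charter is a short mission statement that frames an exploratory session. One widely used template, from Elisabeth Hendrickson's Explore It!, reads: "Explore <target> with <resources> to discover <information>." A hypothetical instance: "Explore the checkout flow with invalid and expired payment cards to discover how failures are reported and recovered from."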
If we're iterating on our product, adding new features, and have good test automation in place, we can focus our exploratory testing on the new features.
The new features may or may not have unit and integration tests, or even tests that cover the acceptance criteria.
We'll have a set of user stories with different priorities. We can use those priorities to define the order in which we test, as in the sketch below. A good exercise is to assess what would make the sprint successful and what would make it fail.
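As a minimal sketch of that ordering (the stories and priority labels below are illustrative, not from a real backlog):

```python
# Minimal sketch: order exploratory sessions by user-story priority.
# Story IDs, titles, and priorities are illustrative.
stories = [
    ("US-102", "Edit profile photo", "low"),
    ("US-101", "Pay with a saved card", "high"),
    ("US-103", "Export monthly reports", "medium"),
]

rank = {"high": 0, "medium": 1, "low": 2}
for story_id, title, priority in sorted(stories, key=lambda s: rank[s[2]]):
    print(f"{priority:>6}  {story_id}  {title}")
```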
For each feature, we need to understand how much testing has already been done. Do we need to focus on what we already know, or can we look beyond it?
It's like looking at risk but more "low-level" (i.e., focused on the feature).
But then we also need to look at the risk this feature brings to the "outside world," the existing features, and the "non-functional requirements". Are we impacting performance now, for example? Or are we opening a door to affect performance in the future?
Try to make a collection of risks, discuss them with the team, and prioritize them.
Risks must be clear and detailed. Understand what's critical, acceptable, and unacceptable; don't assume; discuss with the team. Remember that testing involves finding a fair compromise.
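A minimal sketch of such a prioritized collection, assuming we score each risk by likelihood times impact (the risks and numbers are illustrative; agree on scales with your team):

```python
# Minimal sketch of a risk register: score = likelihood x impact (1-5 each).
# The risks and numbers are illustrative assumptions.
risks = [
    {"risk": "Checkout fails under concurrent load", "likelihood": 3, "impact": 5},
    {"risk": "Search filter ignores locale settings", "likelihood": 4, "impact": 2},
    {"risk": "Report export corrupts large datasets", "likelihood": 2, "impact": 4},
]

for r in risks:
    r["score"] = r["likelihood"] * r["impact"]

# The highest-scoring risks become the first exploratory charters.
for r in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f'{r["score"]:>2}  {r["risk"]}')
```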
Sometimes, we want to focus on a specific risk or quality aspect. Let's use performance issues as an example of a risk.
We could discuss together what performance means in our context, which scenarios matter most, and what response times are acceptable and unacceptable. The answers to these questions can support us in building our testing plan.
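During such a session, even a rough probe turns discussion into data. A minimal sketch using only the Python standard library (the URL is a placeholder for the flow under test):

```python
# Minimal sketch: time a single request during an exploratory session.
# The URL is a placeholder; point it at the flow under discussion.
import time
import urllib.request

URL = "https://example.com/"

start = time.perf_counter()
with urllib.request.urlopen(URL, timeout=10) as response:
    response.read()
elapsed = time.perf_counter() - start

# Compare against whatever the team agreed is acceptable vs. unacceptable.
print(f"{elapsed:.2f}s for {URL}")
```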
Those aiming to perform testing that provides valuable insights to relevant stakeholders and helps teams make decisions can prepare by gaining a deeper understanding of the product, the team, the stakeholders, and the business context.
Awesome testers continuously improve their skills and core testing knowledge, learn new testing techniques, and find new ways to expose potential problems. They can also learn more about the internal architecture of the product, how it interacts with the outside world, and how it is built, as well as about new tools that can augment their testing efforts.
There is always some level of planning in testing. The fact that we’ll be performing testing in a more exploratory way doesn’t change much about our plan.
However, exploratory testing usually complements scripted testing (e.g., manual or automated scripted test cases); therefore, our testing plan will be shaped by it. Suppose we have a batch of manual or automated test scripts covering sanity and regression testing. In that case, we can focus our exploratory testing elsewhere, thinking about other things that can go wrong, as in the sketch below.
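A minimal sketch of that reasoning, with illustrative area names: the areas without scripted coverage become natural candidates for exploratory charters.

```python
# Minimal sketch: aim exploratory sessions where scripted coverage is thin.
# Product areas and coverage below are illustrative.
product_areas = {"login", "signup", "checkout", "search",
                 "admin console", "report export", "notifications"}
scripted_coverage = {"login", "signup", "checkout", "search"}

# Areas lacking sanity/regression scripts are natural exploratory targets.
exploratory_candidates = sorted(product_areas - scripted_coverage)
print(exploratory_candidates)
```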