Exploratory testing, the backbone of agile development, allows us both to confirm that what we’ve built is good enough and to critique what we’re building. But good exploration takes time, possibly more time than a single tester on the team has. So what can we do? We can ask our colleagues to support us and run some exploratory testing with us.
Asking developers to start exploratory testing can be hard; many testers haven’t had the opportunity to practice this type of testing, let alone developers. This article discusses steps and tips for getting your team, with a focus on developers, involved in exploratory testing.
People are more likely to support something if they know what it is and how it’ll help them. In a previous article, Why I’m Talking to Developers About Exploratory Testing, I talk about how I share the concept of exploratory testing with a team. In addition to this, we can detail the ways in which exploration can specifically help developers.
Part of selling the idea of being involved in exploratory testing to a team is giving them an idea of what to expect. We need to share the processes we’re proposing, what’s expected of them and what training we’ll support them with.
A note on selling and training your team on exploratory testing: this will not be a one-and-done thing. People will need reminders and refreshers over time (and whenever new people join the team).
When the test champion in the team is doing the exploratory testing, we will still want developer involvement.
If we explore a user story (or feature) for risks before it has been developed, we want to share those risks with developers so that they can preemptively build in fixes for them. We can do this by explaining up front that we plan to do this risk analysis; it can also give more insight into what’s needed alongside the acceptance criteria of a user story.
Once we have a risk assessment (documented in a shareable and understandable format) we can add it to the user story ticket, mention in the next stand-up that we’ve made it available, and offer to walk through it with the team or the developer who picks up the story.
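As an illustration, a single risk entry on the ticket might look something like the below. The fields and the scenario here are my own assumption for this example; use whatever format your team finds readable.

Risk: a discount code could be applied twice if the checkout page is refreshed mid-payment.
Impact: customers could be undercharged.
Proposed test: explore the checkout flow with mid-payment refreshes to discover whether discounts are re-applied.

A handful of entries like this is usually enough to start the conversation in stand-up.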
Exploration can be infinite; by sharing our risk analysis early we can align within the team on how much testing is needed for something to be good enough. Sharing our risk analysis is similar to proposing a scope of testing. We need to remind our team that this is a proposal and isn’t set in stone; it’s the start of a conversation that they’re a part of.
Developers should then review the risks we’ve raised: to say which they think are important to look at now, which can be ignored, and even to suggest risks that might not have been thought of.
I’ve previously written about the awesome power of the debrief, but put simply, a debrief is a conversation between developer and tester about the exploration that’s happened. The discussion covers what was tested and what was seen (good and bad), and looks to determine whether any fixes are needed for the issues found and whether we need more testing to be confident that the thing under test is good enough. To debrief, we need to share and walk through the notes we made during our exploration (highlighting why it’s important to take notes as we explore) and ask the developer’s opinion on what has been done.
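As an example, the questions I lean on in a debrief (a suggestion, not a script) run along these lines: what did we cover from the charter? What did we see that surprised us, good or bad? Which issues need fixing now, and which can wait? What’s still untested, and do we need another session before we’re confident?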
Sharing risks up front to align on what could be a problem for a feature and debriefing after testing should be the minimum expected engagement for developers with exploratory testing.
If we want to get developers more involved in exploratory testing, to train them in how to do it or to get their insights on an area, a good place to start is pairing. I wouldn’t start by asking people to jump in and explore on their own; without proper context and training they may form bad habits.
Pairing in exploratory testing usually means we have a driver and a scribe. The driver’s role is to use the system under test, to come up with most of the test ideas for the charter, and to describe what they’re doing and seeing. The scribe’s role is to document what has been done and seen, to offer suggestions for tests, and to keep the session on track (to time and scope).
When initially training a developer to support exploratory testing, it’s a good idea to start them as the scribe; this way they can see what you’re doing from a testing perspective and also add insight and ideas. Remember to comment not only on what you’re doing but also on why you’re doing it, so the developer pairing with you can start to understand the testing process. Also actively encourage participation by asking them outright for suggestions.
In some situations, such as debugging a system, we might ask the developer to help us with technical exploratory testing. This might include adding breakpoints in code as we chase data, so that we can see what’s been sent where. They might also be able to add console logging to services to help us see what’s happening and track inconsistencies and incorrect behaviors. In the past I’ve worked with data scientists and data SDETs to create Python queries over data sets and models to explore scenarios and data behavior.
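To give a flavour of what this can look like, here’s a minimal sketch of the kind of query a developer or data SDET might write with you mid-session. Everything in it is an assumption for illustration: a hypothetical captured_responses.csv export of service responses with request_payload, status_code, endpoint and response_ms columns.

import pandas as pd

# Hypothetical export of responses captured during an exploratory session
df = pd.read_csv("captured_responses.csv")

# Inconsistent behavior: the same payload returning more than one status code
inconsistent = (
    df.groupby("request_payload")["status_code"]
    .nunique()
    .loc[lambda counts: counts > 1]
)
print(inconsistent)

# Response-time outliers that might be worth a deeper charter of their own
slow = df[df["response_ms"] > df["response_ms"].quantile(0.99)]
print(slow[["endpoint", "response_ms"]])

The point isn’t the specific query; it’s that pairing with a developer lets you quickly turn a hunch from the session into a question the data can answer.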
Sometimes, when the team needs support to get things finished, it’s an option to swarm on exploration and testing. In these situations the team may ask developers to get involved in testing directly, by running exploratory testing charters themselves. If we haven’t had time to train them through pairing, then we need to support the developers and help them get involved.
From the risks that have been identified, develop test charters to share with the team. These should be specific enough to help the developers understand what to cover, and simple enough that they’re easy to read. I use Elizabeth Hendrickson’s “explore… with… to discover…” format to make a clear and easy-to-read sentence.
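For example, a charter for a hypothetical checkout feature (my own invented scenario, purely for illustration) might read: “Explore the checkout flow with invalid and expired discount codes to discover how errors are handled and communicated to the user.”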
Fun idea: Instead of calling these tests or charters, call them missions and ask the team to run a mission for you.
Create the tests in your test management tool and then share the tests (with links to them in the tool) on Slack. Be prepared to coordinate who’s picking up what as the swarm progresses.
It’s a good idea, before the swarm, to give some training to the developers to tell them what’s expected of them. Some areas to cover are: what a charter is and how to run one, how to take notes as you explore, how to keep to the session’s time and scope, where to record any issues found, and how we’ll debrief afterwards.
Depending on the maturity of your processes and team, you may have these things documented to share. If not, run a short workshop to go over these topics and be prepared to support the developers with any questions they have.
Frequently we see the narrative that developers shouldn’t mark their own homework, or that they cannot test things. It may be that they have less testing experience, but we can help them to test and explore. If we don’t champion getting developers involved in testing, we create a bottleneck through a single point of failure and cause problems in getting things done in the sprint. Help developers become better testers by pairing with them and sharing your insights and knowledge on what’s worth exploring.
In an agile delivery team, quality is everybody’s responsibility. Getting something from “to do” to “done” falls on every member of the team, and that means your job can involve testing. We have to champion testing within teams and remind people that we may need developer support to pick up testing (and that testers are not responsible for doing all the testing in a team).
It’s worth reminding teams that just because risks have been raised, it doesn’t mean we have to test for all of them before something is good enough and can move to done. The idea of exploratory testing is that it gives us more insight to make a decision on whether something is finished; it isn’t there to block work from being done. Tell your team that if no useful information would come out of an exploration, we don’t have to do it.
If you’re interested in this piece, you can follow my blog, where I discuss practical agile testing tips and tricks. Alternatively, if you’d like to learn more about exploratory testing, you can watch my talk on modern exploratory testing.