Teams often find themselves managing a large number of test cases, many of which are nearly identical except for variations in input data. This redundancy increases maintenance effort and the risk of missing critical test coverage.
Test parameterization solves this challenge by allowing testers to re-run the same test logic with multiple data iterations: teams define parameters within a single test script and execute it dynamically against a dataset of different values.
In this blog post, you’ll find key takeaways from Xray’s recent webinar on test parameterization. You’ll learn what it is, how it relates to data-driven testing (DDT), and how to implement it effectively using Xray and Xray Enterprise. If you want to dive deeper, you also have access to the full webinar recording.
DDT separates the test logic from the test data, enabling the same test script to be executed with different sets of data inputs. This approach allows testers to verify an application’s behavior across multiple scenarios without having to manually create individual test cases for each data variation.
The parameterization stage involves introducing variables, or parameters, into the test scripts that will later be replaced with actual values from a dataset.
For example, the dataset for a login test could contain a username column and a password column with two rows of values, such as one valid and one invalid credential pair.
In this case, the username and password parameters in the test script will be replaced by the values in the data table, and the test will be executed twice—once with each set of inputs.
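To make this concrete, here is a minimal sketch of the same idea in pytest, independent of Xray; the credentials and the trivial stand-in login function are made up for illustration. The test logic is written once, and the framework runs it once per row of data.

```python
import pytest

# Trivial stand-in for the system under test, so the example runs on its own.
VALID_CREDENTIALS = {"standard_user": "correct-password"}

def attempt_login(username: str, password: str) -> bool:
    return VALID_CREDENTIALS.get(username) == password

# One test function; pytest executes it once per row of the dataset below.
@pytest.mark.parametrize(
    "username,password,should_succeed",
    [
        ("standard_user", "correct-password", True),   # valid credential pair
        ("standard_user", "wrong-password", False),     # invalid credential pair
    ],
)
def test_login(username, password, should_succeed):
    assert attempt_login(username, password) == should_succeed
```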
Here are some of the common challenges associated with parameterized testing:
It can be challenging to track individual test iterations when the tests are executed with different data. If all iterations appear as a single test case in the reporting tools, it can be hard to isolate and troubleshoot specific failures.
The more complex the test, the harder it may be to maintain clarity across different data iterations. Without clear documentation or structure, developers and testers may struggle to understand how the parameterized test relates to each specific variation.
Complex data types or interdependent parameters can make parameterization tricky. These data types may not fit neatly into a tabular format, and it can be hard to represent them in a way that maintains test reliability and readability.
Managing dependencies between parameters requires ensuring that changes to one parameter don’t break the logic of another. This becomes problematic in large-scale testing environments where multiple dependencies are involved.
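As a minimal sketch of what interdependent parameters can look like in practice (the checkout parameters and the business rule below are made up for illustration), naively combining every value of every parameter produces rows that violate the rule and must be filtered out, and any change to either value list means re-checking that rule:

```python
from itertools import product

# Hypothetical checkout parameters: express shipping is only offered domestically,
# so the two parameters are interdependent.
shipping_methods = ["standard", "express", "pickup"]
destinations = ["domestic", "international"]

def is_valid(shipping: str, destination: str) -> bool:
    # Business rule linking the two parameters.
    return not (shipping == "express" and destination == "international")

all_combinations = list(product(shipping_methods, destinations))
valid_combinations = [combo for combo in all_combinations if is_valid(*combo)]

print(len(all_combinations), "raw combinations")       # 6
print(len(valid_combinations), "valid combinations")   # 5
```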
Parameterization in Xray starts with configuring the "Parameter Value Lists" in the settings. You can choose between Project-level and Global-level settings. If you’re working with a single dedicated testing project, project-level settings are usually enough. However, if you manage multiple testing projects with overlapping scopes, global settings can help maintain consistency and save time.
Datasets store test iterations and feed parameters into test cases. Xray provides three ways to create datasets: manual specification, combinatorial generation, and importing existing datasets. You can also mix parameter types within the same dataset or switch between them as needed.
No matter how you create your dataset, you can always adjust values, add or remove rows, and refine iterations.
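For example, outside of Xray’s own editor, a dataset can also be drafted as a plain CSV file that mixes combinatorially generated rows with a manually added edge case, then reviewed before import. The parameter names below are made up, and the exact format Xray expects for imports should be checked against its documentation:

```python
import csv
from itertools import product

# Combinatorially generated rows for two hypothetical parameters...
browsers = ["chrome", "firefox"]
locales = ["en-US", "de-DE"]
rows = [{"browser": b, "locale": loc} for b, loc in product(browsers, locales)]

# ...plus one manually specified edge case appended afterwards.
rows.append({"browser": "chrome", "locale": "ja-JP"})

# Write the dataset to a CSV file that can be reviewed or imported.
with open("login_dataset.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["browser", "locale"])
    writer.writeheader()
    writer.writerows(rows)
```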
This is where Test Case Designer (TCD) helps. TCD is a model-based test design tool that optimizes test case creation by improving interaction coverage while reducing redundancy. It helps testers identify defects faster and ensures more efficient test execution.
Typically, teams analyze requirements in Jira, design models in TCD, and then move scripted tests to Xray or an automation framework. Xray remains the source of truth for test results. Engaging TCD early in the sprint, using visualizations like mind maps, improves collaboration and prevents ambiguities before testing begins.
However, TCD is versatile in terms of connectivity, so different, though less common, workflows are possible. For instance, you can use it just for dataset generation, letting the TCD algorithm optimize the iterations for you, then export the output as CSV or another format and assign it to your parameterized script in Xray or in an automation framework.
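As a rough sketch of the automation-framework side of that workflow, the snippet below feeds an exported CSV into a parameterized pytest test. The file name, its location next to the test module, and the placeholder assertion are assumptions for illustration, not the actual TCD export format:

```python
import csv
from pathlib import Path

import pytest

# Assumed location and name of the exported dataset; adjust to your export.
DATASET = Path(__file__).parent / "tcd_export.csv"

def load_rows(path: Path) -> list[dict]:
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

# One test iteration per row of the exported dataset.
@pytest.mark.parametrize("row", load_rows(DATASET))
def test_generated_iteration(row):
    # Placeholder body: a real test would drive the application with these values.
    assert all(value != "" for value in row.values())
```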
Some of the key advantages of Test Case Designer are:
Xray Enterprise’s Test Case Designer offers a visual interface that simplifies test case creation. Users can easily define parameters and values, rules, and scripts without needing to write complex code. This improves collaboration across teams, making test case creation more accessible and less time-consuming.
You can let the combinatorial algorithm of Test Case Designer do the heavy lifting. Select one of several “coverage goal” settings, including risk-based ones, so that you can quickly adjust the number of scenarios based on the desired balance between quality/thoroughness and execution speed (a sketch of the pairwise idea behind such coverage goals follows this list of advantages).
This feature helps you make sure the tests remain valid based on your scope and business logic, no matter how many times you need to regenerate the dataset. You specify the rules via intuitive when-then statements, which can include dependencies across any number of parameters. Advanced rule handling speeds up the creation and maintenance of test cases, ensures consistency, and reduces the chance of errors from manual value replacements.
The Mind map, Coverage Matrix, and Coverage Graph visualizations enable you to communicate ideas more effectively and make informed decisions about how much testing is enough.
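To give a feel for what a combinatorial coverage goal means (as mentioned in the point on the algorithm above), the sketch below contrasts exhaustive generation with pairwise generation using the open-source allpairspy package. This only illustrates the general pairwise idea, not TCD’s own algorithm or settings:

```python
from itertools import product

from allpairspy import AllPairs  # open-source pairwise generator, for illustration only

parameters = [
    ["chrome", "firefox", "safari"],   # browser
    ["windows", "macos", "linux"],     # operating system
    ["en-US", "de-DE", "fr-FR"],       # locale
]

exhaustive = list(product(*parameters))   # every combination: 27 rows
pairwise = list(AllPairs(parameters))     # every pair of values covered in far fewer rows

print(len(exhaustive), "exhaustive combinations")
print(len(pairwise), "pairwise combinations")
```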
When comparing Xray and TCD, both offer key data-driven capabilities like parameter handling and script integration. However, Xray provides more flexibility in manual scripting, while TCD excels in algorithmic dataset creation - with programmatic rule handling, flexible combinatorial options, and interaction coverage visualization. Capacity limits also differ, with TCD supporting significantly higher numbers of iterations and parameters.
Curious to try Test Case Designer?