Details
- Type: Suggestion
- Status: Shipped
- Resolution: Done
Description
Test parameterization is a powerful practice that allows the same test to be executed multiple times with different parameters. Parameters are input values (variables) that can change with each execution.
Without the ability to define parameterized tests, all values must be hard-coded into the specification, which makes the test static and difficult to modify. Static tests lead to a lot of redundancy, since the same test must be defined and cloned for every combination of values needed to cover the different scenarios or variations of the same test.
Parameterized Tests
Users must be able to define parameters at the Test level in a dataset. A Parameter has the following attributes:
- Name
- Type: [Open, List]
Parameters can be referenced with the ${NAME} notation in Test Steps (native or custom text fields).
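As an illustration of the notation, the substitution behaves much like Python's string.Template (a minimal sketch, not Xray's implementation; the step text and parameter names are invented):

```python
from string import Template

# A hypothetical Test Step referencing two parameters by name.
step = Template("Log in as ${Role} with password ${Password}")

# Values come from the dataset row (iteration) being executed.
print(step.safe_substitute({"Role": "Admin", "Password": "123456789"}))
# -> Log in as Admin with password 123456789

# An unresolved parameter is left as-is, matching the behaviour
# described under "Dataset resolution" below.
print(step.safe_substitute({"Role": "Admin"}))
# -> Log in as Admin with password ${Password}
```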
To manage parameters within a Test case, users must be able to manage the dataset associated with the Test.
List Parameters
List Parameters contain a predefined set of options, similar to a single-select field. For each parameter of type List, users can only set values from the predefined options.
List Parameter Example:
- Name: Role
- Type: List
- Options: [Admin, User, Guest, Project Admin]
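Modeled as a data structure, a Parameter could look like the following sketch (field names are assumptions, not Xray's schema):

```python
from dataclasses import dataclass
from typing import List, Literal, Optional

@dataclass
class Parameter:
    name: str
    type: Literal["Open", "List"]        # the two supported types
    options: Optional[List[str]] = None  # predefined options; List type only

role = Parameter(name="Role", type="List",
                 options=["Admin", "User", "Guest", "Project Admin"])
```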
List Parameters can be created at the following levels:
- Global settings
- Project settings
- Ad hoc for each parameter
Consequently, List Parameters can be reused across multiple parameterized Tests.
Parameterized Preconditions
Precondition issues can also be parameterized.
Parameters cannot be created on preconditions directly. However, it is possible to reference parameters within the precondition definition field by name.
The parameters will be unfolded on the execution screen, just like Test steps. For this, the dataset associated with the execution must have the same parameters, matched by name.
Data-driven testing
Data-driven testing is a methodology in which a table of input and output variables is used to configure a Test case. In this case, the Test is generic in the sense that it can be executed with multiple configurations.
Xray must support data-driven testing by providing the ability to manage and import datasets for a given parameterized Test case. The dataset's input and output variables will effectively be the parameters referenced within Test steps or preconditions.
Datasets
Definition
A dataset is a table of input and output variables for data-driven testing.
Example of a dataset with a list of emails and passwords to validate a Login Test:
Email | Password | Valid
---|---|---
mfburgo@gmail.com | 123456789 | true
enintend@outlook.com | qwerty | false
overbom@icloud.com | password | false
kronvold@gmail.com | 1234567 | true
marcs@msn.com | 12345678 | true
aardo@mac.com | 12345 | false
gknauss@att.net | iloveyou | false
Xray will provide the ability to define datasets through the UI by allowing users to create columns and rows manually or import datasets from external CSV files.
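For instance, the Login dataset above could be imported from a CSV file shaped like this (a sketch; the exact file layout is an assumption, with the header row presumably matching the parameter names):

```
Email,Password,Valid
mfburgo@gmail.com,123456789,true
enintend@outlook.com,qwerty,false
overbom@icloud.com,password,false
```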
Datasets can be defined at the following levels/entities:
- Test (default)
- Test Plan > Test
- Test Execution > Test (test run)
The dataset closest to the Test Run will be used to generate the iterations, effectively overriding any dataset defined at higher levels:
Test Execution - Test (Test Run) > Test Plan - Test > Test (default)
The Test dataset is the default. If there is a need to override or change this dataset (keeping the original dataset on the Test), users can do so at the planning or execution level.
Combinatorial parameters
Xray must provide a feature to generate all combinations automatically for a dataset. When defining a dataset, users must be able to define combinatorial columns. When the test is executed, all iterations (from all combinations) will be generated automatically by Xray.
In the following dataset, Valid and Role are combinatorial columns (marked with an asterisk):
Email | Password | Valid* | Role*
---|---|---|---
mfburgo@gmail.com | 123456789 | true | Admin
enintend@outlook.com | qwerty | false | User
overbom@icloud.com | password | | Guest
Xray will generate the following iterations:
Email | Password | Valid | Role
---|---|---|---
mfburgo@gmail.com | 123456789 | true | Admin |
mfburgo@gmail.com | 123456789 | true | User |
mfburgo@gmail.com | 123456789 | true | Guest |
mfburgo@gmail.com | 123456789 | false | Admin |
mfburgo@gmail.com | 123456789 | false | User |
mfburgo@gmail.com | 123456789 | false | Guest |
enintend@outlook.com | qwerty | true | Admin |
enintend@outlook.com | qwerty | true | User |
enintend@outlook.com | qwerty | true | Guest |
enintend@outlook.com | qwerty | false | Admin |
enintend@outlook.com | qwerty | false | User |
enintend@outlook.com | qwerty | false | Guest |
overbom@icloud.com | password | true | Admin |
overbom@icloud.com | password | true | User |
overbom@icloud.com | password | true | Guest |
overbom@icloud.com | password | false | Admin |
overbom@icloud.com | password | false | User |
overbom@icloud.com | password | false | Guest |
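A minimal sketch of how this expansion can be computed, assuming each fixed row is crossed with the distinct values of every combinatorial column (variable names are illustrative):

```python
from itertools import product

# Fixed columns stay paired row by row.
rows = [
    {"Email": "mfburgo@gmail.com",    "Password": "123456789"},
    {"Email": "enintend@outlook.com", "Password": "qwerty"},
    {"Email": "overbom@icloud.com",   "Password": "password"},
]

# Combinatorial columns contribute all of their distinct values.
combinatorial = {"Valid": ["true", "false"],
                 "Role":  ["Admin", "User", "Guest"]}

# Cross every fixed row with every combination: 3 x (2 x 3) = 18 iterations.
iterations = [dict(row, **dict(zip(combinatorial, combo)))
              for row in rows
              for combo in product(*combinatorial.values())]
print(len(iterations))  # 18
```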
Given that the number of combinations can be huge, Xray must warn the user and limit the number of combinations that can be generated.
Dataset resolution
At the Test Run level, users will see the Test specification with the resolved configuration. The rule to choose the Test configuration is as follows:
Given a Test Run:
- If there is a TC (Test configuration) defined in the Test Run (Test Execution individual context), then this TC must be used.
- Else, if there is a TC defined in the Test Plan (individual context), then this TC must be used.
- Else, use the default TC defined in the Test, if any.
Test Run → Test Plan > Test → Test
If a parameter cannot be resolved, its value must be displayed as ${parameter name}.
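The fallback chain can be sketched as follows (the attribute names are hypothetical, not Xray's API):

```python
def resolve_dataset(test_run):
    """Return the dataset closest to the Test Run, falling back upward."""
    return (test_run.dataset                   # Test Execution > Test (Test Run)
            or test_run.test_plan_dataset      # Test Plan > Test
            or test_run.test.default_dataset)  # Test (default); may be None
```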
Execution
When executing a parameterized Test, the execution screen must provide the Test Run with the resolved dataset.
In the case of a data-driven Test, users must be able to execute all unfolded iterations (one per dataset row) on the execution screen.
The iterations must be displayed along with their status within a panel named "Iterations". This panel is expanded by default. However, individual iterations are collapsed by default. Expanding an iteration will reveal all the steps with the parameter names replaced by the iteration values.
Users must be able to execute iterations directly. When an iteration is executed, the overall test run status must also be updated.
The new iterations panel must include a progress bar to display the status for all iterations.
For test runs that only contain a single iteration, the iterations panel must not be displayed. Instead, users must be able to view the steps already expanded, and a new panel called "Parameters" must be displayed with the parameters and their values.
The overall execution status for a data-driven Test Run will be calculated from the partial statuses of each iteration (table row) result, using the same rules Xray already implements. This means that if one iteration result is FAILED, the aggregated status for the Test Run will also be FAILED.
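As a sketch, the aggregation can be thought of as a precedence over statuses (the exact precedence and status names follow Xray's existing, configurable rules; the order below is an assumption):

```python
# Most severe first: any FAILED iteration fails the whole Test Run.
PRECEDENCE = ["FAILED", "EXECUTING", "TODO", "PASSED"]

def aggregate(iteration_statuses):
    for status in PRECEDENCE:
        if status in iteration_statuses:
            return status
    return "TODO"

print(aggregate(["PASSED", "FAILED", "PASSED"]))  # FAILED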
How changes to datasets affect existing executions
When a Test Run is created, the dataset is resolved and copied to the Test Run entity. Hence, changes to datasets will not affect existing Test Runs. This allows past executions to keep an accurate record of the exact specification and the dataset that was executed.
Reporting
Although Xray provides the aggregated result at the Test Run level, considering the results from all iterations, Xray must provide progress bars to display the inner progress of Test Runs (just like with Test Steps). Users will see how many iterations are passing, failing, or in any other status configured in Xray.
It will not be possible to calculate the TestRunStatus or Requirement Status based on parameter values. For this, Xray already provides Test Environments, which work much like Test parameters and can be analyzed in the TestRunStatus or Requirement Status.
Xray must also provide progress bars showing the status of iterations within Test Runs on the Test Runs List report and gadget.
Attachments
Issue Links
- is cloned by XRAYCLOUD-2429 Parameterized Tests (Shipped)