Test Design – Readability versus Writeability
17 Aug 2020, 15:35
Test Design Risks for Both Manual and Automated Testing

The growing interest in test automation is offset by concern about risk. One such risk is inadequate test design that cannot serve as the basis for automated test script development and execution. Keep in mind, however, that the risk of inadequate test design is a problem for testing as a whole, not just test automation.
High-quality test design can simplify and speed up the work of the:
- Manual tester (test execution)
- Automation test engineer (script development and execution)
- Test manager (manual and automated test result analysis)
Test Design and Requirements
Let’s go over some test design basics. Testing is always performed against requirements, which themselves must meet certain quality criteria. To double-check them all, it is necessary to start test design activities as early as possible. Requirements typically change over the course of a project, and these changes must be reflected in the test design and test scripts.
Data-driven testing is based on the separation of test steps and test data, allowing for two things:
- Increasing the amount of test data (a typical step in test automation) without updating the test script
- Decreasing test execution time through directed data selection based on equivalence partitioning, boundary values, pairwise testing, etc.
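The separation of steps and data can be sketched as follows (a minimal Python illustration; the `transfer` function and the data values are hypothetical, chosen to show equivalence partitions and a boundary):

```python
# Data-driven testing: the test steps are written once,
# and each row of test data drives one execution of those steps.

# Test data selected by equivalence partitioning / boundary values
# (hypothetical values for illustration).
TEST_DATA = [
    {"amount": 1,     "expected": "accepted"},  # lower boundary
    {"amount": 5000,  "expected": "accepted"},  # typical valid value
    {"amount": 10001, "expected": "rejected"},  # just above the limit
]

def transfer(amount):
    """Hypothetical system under test: transfers of 1..10000 are accepted."""
    return "accepted" if 1 <= amount <= 10000 else "rejected"

def run_test_case():
    """One set of test steps, executed once per data set."""
    results = []
    for data in TEST_DATA:
        actual = transfer(data["amount"])           # step: perform the action
        results.append(actual == data["expected"])  # step: check the result
    return results

print(run_test_case())  # adding a data set requires no change to the steps
```

Adding a fourth row to `TEST_DATA` changes nothing in `run_test_case` — which is exactly the first benefit listed above.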
What is a Test Case?

A test case consists of format and content. The basic format is the same everywhere:
- Step #
- Action
- Expected result
The Devil is in the Details

The main question is: “Is the content adequate to the format?”
Consider the typical test case (Fig. 1):
Fig. 1 The typical test case
- What do the terms “few,” “any,” or “for example” mean?
- How can we double-check the price?
- How can we double-check that it is not a defect when the button is unavailable?
- How many times do we repeat steps 2-7 and how do we find data for these repetitions?
What is the (Updated) Test Case?

We can modify the test case structure to be the following:
- Step # (No changes)
- Action (No changes)
- Input data (All that the user can type/select/press/etc.)
- Expected result (All double-checks when the action is performed)
This gives us a better test case (Fig. 2):
Fig. 2 A better test case
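The updated structure can be captured in code as a simple record per step (a sketch; the field names and the shopping-cart step are assumptions, not taken from the figures):

```python
from dataclasses import dataclass, field

@dataclass
class TestStep:
    number: int    # Step #
    action: str    # what the tester does
    input_data: dict = field(default_factory=dict)  # all the user types/selects/presses
    expected: list = field(default_factory=list)    # all checks when the action is performed

# Hypothetical step from a shopping-cart test case:
step_2 = TestStep(
    number=2,
    action="Add the item to the cart",
    input_data={"item": "SKU-1001", "quantity": 3},
    expected=[
        "The cart shows 3 units of SKU-1001",
        "The cart total equals 3 * unit price",
    ],
)
```

Note that every value the step depends on is explicit — there is no “few”, “any”, or “for example” left to interpret.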
Repetitions and Wording

Some of the aforementioned issues are now resolved, but others remain. The next step is to get rid of the following:
- Constructions like “Repeat steps from … to …” that may cause confusion during manual testing and make the automated test script more complicated and difficult to update (do not forget about requirements changes)
- Words such as “corresponding,” “any,” “appropriate,” etc. – see Figure 3.
Fig. 3 Sample of the use of “corresponding” within the text
Feel the Difference Between Actions and Expected Results

The next activity is to separate mixed actions and expected results (Fig. 4).
Fig. 4 Mixed actions and expected results
The rules are:
- All actions (for example, step 12) are placed in the “Action” column.
- All expected results (for example, steps 13 and 14) are placed in the “Expected results” column.
This reduces the number of test case steps.
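Keeping actions and expected results in separate columns also maps cleanly onto an automated runner: one function performs the action once, and a loop evaluates every check attached to it (a sketch; the step data, `perform`, and `check` are hypothetical):

```python
def execute_step(step, perform, check):
    """Perform the step's action once, then evaluate every expected result."""
    state = perform(step["action"])
    return [check(state, expected) for expected in step["expected"]]

# Hypothetical row: former steps 12-14 collapsed into one action with two checks.
step = {
    "action": "submit_form",
    "expected": ["confirmation shown", "email queued"],
}

def perform(action):  # hypothetical UI driver returning the observed state
    return {"submit_form": {"confirmation shown", "email queued"}}[action]

def check(state, expected):  # one evaluation per expected result
    return expected in state

print(execute_step(step, perform, check))  # [True, True]
```

The action runs exactly once, however many expected results it carries — which is why merging result-only steps into the preceding action reduces the step count without losing any checks.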
“Le Mieux Est L’ennemi du Bien,” or BIEGE (The Best Is the Enemy of the Good Enough)

The next step is to get rid of universal scripts where the UI depends on the test data (Fig. 5):
Fig. 5 UI depends on test data
Figure 6 shows another example of universal scripts:
Fig. 6 One more example of universal scripts
Test case step 5 contains a universal “Execute action” command that is executed as “Move 1 element …” for data set GU-01 and as “Move all elements …” for data set GU-02. This mixes test case steps with test data, which complicates both the manually executed actions and the automated test script code. In such cases it may be better to develop several test cases that are simpler and easier to understand, automate, and maintain.
What Test Case is Recommended in Data-Driven Testing?

Test Case Format:
- Step #
- Action
- Input data (all that the user can type/select/press/etc.)
- Expected result (all checks performed when the action is executed)
Test Case Content:
- No explicit cycles – use several data sets instead
- No ambiguous constructions – use explicit data values or references instead:
  - Roles, values
  - References to data storage items
- Preconditions (database content):
  - SQL query execution
- Ad hoc test cases
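For the precondition item, database content is typically established by executing SQL before the test steps run, so every execution starts from a known state (a minimal sketch using Python's built-in sqlite3; the table and values are hypothetical):

```python
import sqlite3

def set_precondition(conn):
    """Put the database into the state the test case assumes."""
    conn.execute("CREATE TABLE IF NOT EXISTS accounts (id TEXT, balance REAL)")
    conn.execute("DELETE FROM accounts")  # wipe leftovers from earlier runs
    conn.execute("INSERT INTO accounts VALUES ('A-1', 100.0)")
    conn.commit()

conn = sqlite3.connect(":memory:")  # stand-in for the test database
set_precondition(conn)

# The test steps can now rely on this state:
balance = conn.execute(
    "SELECT balance FROM accounts WHERE id = 'A-1'"
).fetchone()[0]
print(balance)  # 100.0
```

Recording the precondition as an executable query, rather than prose like “make sure an account exists,” keeps it unambiguous for both manual testers and automated scripts.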
How to Know if You’re On the Right Path

Criteria:
- There is a reasonable balance between test case step complexity and test data complexity
- Sometimes it’s a good idea to develop two or three similar test cases with a strictly reduced test data volume
Benefits

The benefits are:
- Manual testing – increased test execution reliability
- Automated testing – a better understanding of test automation efficiency and results
Interested in upgrading your skills? Check out our software testing training courses.