7 Principles of Testing - Part 1
Principle 1. Testing shows presence of defects
Testing can show that defects are present, but cannot prove that there are no defects.
It is hard to prove that something does not exist. However many white swans we see, we cannot conclude that all swans are white. Yet as soon as we see one black swan, we can say 'Not all swans are white'.
In the same way, however many tests you execute without finding a bug, you cannot say that there are no other tests that could find a bug. As soon as we find at least one bug, we can say 'This code is not bug-free'.
Nevertheless, it does not mean that testing is useless and cannot improve our level of confidence in the code quality. Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, it is not a proof of correctness.
Principle 2. Exhaustive testing is impossible
This principle is connected with the question: “How much testing should we do?”
What answers are there to this question? We have a choice: test everything, test nothing or test some of the software. An ideal response to that may well be to say, 'Everything must be tested'. But what we should consider is whether we must, or even can, test completely.
How many tests would you need to do to completely test a one-digit numeric field? It depends on what you mean by complete testing. There are 10 possible valid numeric values, but as well as the valid values we need to ensure that all the invalid values are rejected. There are 26 uppercase alpha characters, 26 lowercase ones, and at least 6 punctuation characters, as well as a blank value. So there would be at least 68 tests, and that is before considering special characters, etc.
This problem just gets worse as we look at more realistic examples. In practice, systems have more than one input field, with the fields being of varying sizes. These tests would be alongside others, such as running the tests in different environments. If we take an example where one screen has 15 input fields, each having 5 possible values, then to test all of the valid input value combinations you would need 30 517 578 125 (5^15) tests! It is unlikely that the project timescales would allow for this number of tests.
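The figures above follow directly from simple counting; a quick sketch makes the combinatorial explosion concrete:

```python
# Minimal count for a one-digit numeric field, per the figures above.
valid = 10                 # digits '0'-'9' must be accepted
invalid = 26 + 26 + 6      # uppercase, lowercase, punctuation: must be rejected
print(valid + invalid)     # 68 tests, before blanks and special characters

# 15 fields with 5 valid values each: every combination of valid inputs.
print(5 ** 15)             # 30517578125 combinations
```

At, say, one test per second, those combinations alone would take roughly 967 years, which is why exhaustive testing is impossible in practice.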
There is a fairy tale about a magic pot that cooked on and on until the porridge filled an entire town. In the same way, our 'little pot' of testing can cook and cook for a very long time, almost endlessly, and at some moment you must say, 'Little pot, stop!'
By using the equivalence partitioning technique we can reduce the number of tests: testing a one-digit numeric field with the values 2, 3, and 4 would not give us any more information than testing it with a single representative value, for example 3.
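The idea can be sketched as follows. This is an illustrative partitioning of inputs for the one-digit numeric field, with class names chosen for this example, not taken from any particular tool:

```python
def classify(value: str) -> str:
    """Assign an input value to an equivalence class for a one-digit field."""
    if value.isdigit() and len(value) == 1:
        return "valid digit"        # '0'-'9' all behave the same
    if value.isalpha():
        return "invalid letter"     # upper- and lowercase letters
    if value == "":
        return "invalid blank"
    return "invalid other"          # punctuation, special characters

# One representative per class is enough: '3' stands in for all of '0'-'9'.
representatives = ["3", "x", "", "!"]
for value in representatives:
    print(repr(value), "->", classify(value))
```

Instead of 68-plus tests, four representatives cover the same equivalence classes, since any two values in one class are expected to exercise the same behaviour.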
In real life, pressures on a project include time and budget, as well as pressure to deliver a technical solution that meets the customers' needs. Customers and project managers will want to spend an amount of time on testing that provides a return on investment (ROI) for them (time is money). This includes preventing failures after release, which are always costly. Testing completely, even if that is what customers and project managers ask for, is simply not something they can afford.
Instead of trying to “test everything”, we need a test approach (strategy) which provides the right amount of testing for this project, these customers (and other stakeholders) and this software. Deciding how much testing is enough should take account of the level of risk, including technical and business risks related to the product and project constraints such as time and budget. Risk assessment and management is one of the most important activities in any project. It enables us to vary the testing effort based on the level of risk in different areas.
Additionally, testing should provide sufficient information to stakeholders to make informed decisions about the release of the product or handover to customers.
In the second part of the article we will look at the next two principles of testing.
Consultant on Software Testing