In the first part of this article we talked about the relationship between testers and programmers and explored various strategies for improving the way they communicate and offer feedback, mostly in terms of soft skills. In this second part we’ll look at tips and tricks for improving the technical side of that feedback.
Misunderstandings and even conflicts about bugs often arise between testers and programmers. Sometimes programmers reject a defect or grumble about its description. It is then the tester’s turn to take criticism calmly and constructively. Let’s look at some typical situations.
“It’s not a bug, it’s a feature”
BUGSPHEMY – SOME INSULTING OR DISRESPECTFUL ACTION, WORD, OR INTENTION WITH REGARD TO A BUG THAT YOU DESCRIBED.
Probably every tester has faced such a situation. You believe it is a bug but the developer does not think so. What should you do? Will you agree or argue?
Perhaps you have not provided enough evidence that there is a bug. Think about what exactly makes you believe it is one. Go back to your test basis, reread the requirements, and check the traceability matrix. After reading everything carefully, you may end up agreeing with the programmer. If you still disagree, press your point, backing it with additional arguments.
Yet it sometimes happens that the programmer and the tester have read the same documents but understood them differently. The question remains: who understood it correctly and who did not? Both might be wrong. This should be clarified in live communication. If no agreement can be reached, involve a third party: a business analyst, the project manager, or any other person whose authority you both recognize and whose decision you are ready to accept.
“Cannot Reproduce”
No tester likes having a bug rejected with a resolution like that. It always warrants further examination. For you, it may be a signal that the problem lies in the environment.
Possibly, the developer checked some other version of the product to which changes have been made. This should be clarified.
It is also possible that you failed to describe all the steps in enough detail or forgot to mention some precondition. Or the bug is not always reproducible but you have not noticed that. This is another situation that nobody likes…
Nevertheless, you need to find a way out. The argument “It works on my machine” is too weak to serve as an excuse not to deal with the bug. If it reproduces consistently on the test build and in your testing environment, you’ll have to insist that the developer spend some time on it. Add supporting evidence, such as screenshots and logs. In more complicated cases, you may need extended logging.
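When a bug reproduces only in one environment, attaching the environment details alongside a verbose reproduction log often settles the “works on my machine” argument. A minimal Python sketch of the idea (the function names are illustrative, not from any particular tool):

```python
import logging
import platform
import sys


def capture_environment() -> dict:
    """Collect basic environment facts worth attaching to a bug report."""
    return {
        "os": platform.platform(),
        "python": sys.version.split()[0],
        "machine": platform.machine(),
    }


def configure_debug_logging(path: str = "repro.log") -> None:
    """Switch on verbose logging so the reproduction run leaves a trail."""
    logging.basicConfig(
        filename=path,
        level=logging.DEBUG,
        format="%(asctime)s %(levelname)s %(name)s: %(message)s",
    )
```

Calling `configure_debug_logging()` before rerunning the failing steps, then attaching `repro.log` and the output of `capture_environment()` to the bug, gives the developer something concrete to compare against their own machine.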
A Bad Description of a Defect
A bad description of a defect may well cause a conflict about the bug. The “expected result” field is often omitted. The developer looks at the specification and does not understand why this is a bug. And what is correct? What did you expect? Always fill in that field, even if it seems trivial and “goes without saying”. It takes little time but helps avoid unnecessary debates.
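One way to make the “expected result” field impossible to forget is to make it mandatory in whatever template or tooling produces the report. A small sketch in Python (the field names are illustrative, not any particular tracker’s schema):

```python
from dataclasses import dataclass


@dataclass
class BugReport:
    title: str
    steps: list      # ordered reproduction steps
    expected: str    # what should happen, per the requirements
    actual: str      # what actually happened

    def __post_init__(self):
        # Refuse to create a report whose expected/actual results are blank.
        if not self.expected.strip():
            raise ValueError("'expected result' must not be empty")
        if not self.actual.strip():
            raise ValueError("'actual result' must not be empty")
```

With a check like this in place, a report missing its expected result never reaches the developer in the first place.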
An excessively high priority or severity assigned to a bug can spark a lot of anger in the developers. When programmers get a bug with “severity: critical”, they may set everything else aside and start looking into the problem. If it turns out that the bug is in fact far from “major”, the programmer’s irritation is quite understandable: they were distracted over a trifle. Honesty is essential in a tester’s job. Do not overstate the severity of bugs! But do not understate it either, or a critical problem may be neglected.
The best thing is to have a generally accepted bug severity classification agreed between the testers and developers. You can use a standard system or design your own, the only thing that matters is that all agree to it. And of course all testers must know and be able to apply it.
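Such an agreed classification can be as simple as a shared enumeration with a one-line definition per level that both sides sign off on. For illustration only (the levels and wording below are an assumption, not a standard):

```python
from enum import Enum


class Severity(Enum):
    """A team-agreed severity scale; each value carries its definition."""
    CRITICAL = "Blocks testing or corrupts data; no workaround exists"
    MAJOR = "A core function is broken; the workaround is costly"
    MINOR = "A noticeable defect with an easy workaround"
    TRIVIAL = "Cosmetic issue with no impact on functionality"
```

Keeping the definitions next to the values means every report links severity to a criterion the developers have already accepted, rather than to the tester’s mood.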
Reproducibility Is Not Specified
The developers may also grumble about a bug description that fails to mention that the bug is not always reproducible. This is especially frustrating when the bug’s priority is very high.
“Reproducibility”, as a standard field in the defect description, is not present in all bug-tracking systems. It is there in Mantis, for example. There you can indicate one of the following options:
Always
Sometimes
Random
Have not tried
Unable to duplicate
N/A
Perhaps it is worth adding such an optional field to your bug tracker. Either way, even if you do not have a separate field, state the bug’s reproducibility in the description whenever it differs from “always”.
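If your tracker has no such field, even a lightweight enumeration in your own reporting tooling keeps the vocabulary consistent across reports. A sketch modelled loosely on Mantis’s options (this is an assumption about your tooling, not a tracker API):

```python
from enum import Enum


class Reproducibility(Enum):
    """How reliably the reported bug can be reproduced."""
    ALWAYS = "always"
    SOMETIMES = "sometimes"
    RANDOM = "random"
    HAVE_NOT_TRIED = "have not tried"
    UNABLE_TO_DUPLICATE = "unable to duplicate"
```

A report tagged `Reproducibility.SOMETIMES` warns the developer up front that a single failed attempt to reproduce it proves nothing.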
Another option is to temporarily assign the bug to yourself and continue testing until you manage to find some regularity and/or gather enough information about the bug’s behavior.
Consultant on Software Testing