Wednesday, January 17, 2018

Lots of testing with no defects found. Good? Bad? To be improved?



I wrote this after having yet another (who knows what the count is by now) talk with colleagues and managers about testers' purpose. Here's what I think:

Testers and, even worse, the non-testers around them (e.g. managers) tend to forget that testing is not only looking for bugs; it is also verification that some parts of a system have no defects.
 
A case to discuss: a tester spent 90% of the time and found nothing, but the last 10% gave her/him showstoppers and multiple defects. What does that mean?

Sometimes in this case I hear that the testing was done wrong and that those 90% of the time were redundant, just a waste of time.
These talks escalate especially quickly if something (maybe really severe, maybe not at all) is found by somebody in production.
Or by managers who just had a quick look and caught an interesting issue: "Oh guys! Are you testing anything there at all, or what?"
Sure we do :)
 
One of the best managers I have worked with usually asks his testers: "What is your level of confidence that the system is good to go?"
Ha! That is the clue to the 90% of testing time in which no bugs are found: those 90% raise the level of confidence in the product. That is what it means.
But let's come back to the case from the beginning. Of course we should be fair here and go through an analysis; there is always room to improve the situation:
  1. Were all of those 90% really necessary, or can something be skipped next time?
  2. Consider making better use of the test design techniques already in place, or adopting ones that are not used yet;
  3. Extend coverage of the testing scope with automated tests. If you have no automation on the project, maybe now is the best time to start;
  4. Unit tests belong here too; they can help you avoid truly enormous sets of end-to-end tests (see the sketch after this list).
  5. How can we shift finding those bugs from the last 10% of the time to the earliest possible point in testing?
  6. More and better communication with developers usually helps to reveal where defects might be;
  7. Ask your architects and developers about the risk level of particular changes or bug fixes;
  8. Always test new features or improvements first, even if they seem very simple;
  9. Review and improve the overall development-to-testing process. Maybe something can be split into pieces and thus reach Ready for QA faster, or maybe some parts of the process are just legacy bureaucracy and should be dropped.
  10. Refresh the seven basic principles of testing in your memory and try to apply all of them.
  11. Did we miss something really important because of those 90%?
  12. Even if you went to production and everything seems to be working fine, keep gathering feedback from the team, users and customers on whether anything was missed. In a good communication atmosphere you should receive fair, dispassionate feedback constantly and immediately, but who knows whether that is true in your situation. Simply ask more.
  13. P stands for Prioritization of the project parts here. Try to slice your project into pieces by risk level:
    --- "aha, changes in this part are very risky and can have a very big impact" (e.g. pieces with legacy code on an old architecture, or things which were not properly tested for some reason);
    --- "if something is changed here, nothing extraordinary should happen" (that can be true for something recently developed and well known, as well as for parts which were properly tested, well documented and received great feedback from everyone involved in the process).
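To make point 4 a bit less abstract, here is a minimal sketch (Python with pytest; the discount function and its pricing rules are invented purely for illustration, not taken from any real project) of how one parametrized unit test can cover a whole decision table that would otherwise need a pile of slow end-to-end scenarios:

# Hypothetical example for point 4: names and business rules are made up.
import pytest


def discount(order_total: float, is_loyal_customer: bool) -> float:
    """Return the discount rate for an order (hypothetical business rule)."""
    if order_total <= 0:
        raise ValueError("order total must be positive")
    rate = 0.10 if order_total >= 1000 else 0.0
    if is_loyal_customer:
        rate += 0.05
    return rate


# One parametrized unit test walks the whole decision table in milliseconds;
# clicking the same combinations through the UI would take many slow
# end-to-end scenarios.
@pytest.mark.parametrize(
    "total, loyal, expected",
    [
        (500, False, 0.0),
        (500, True, 0.05),
        (1000, False, 0.10),
        (1500, True, 0.15),
    ],
)
def test_discount_decision_table(total, loyal, expected):
    assert discount(total, loyal) == pytest.approx(expected)


def test_discount_rejects_non_positive_total():
    with pytest.raises(ValueError):
        discount(0, False)

The point is not this particular function, of course: it is that combinations verified cheaply at the unit level free up the expensive manual and end-to-end time for the genuinely risky parts, which is exactly where those 10% of showstoppers tend to live.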
Well, that's it off the top of my head regarding the correlation between time spent on no-defect testing and those tiny full-strike moments :)
