Many (if not most) test teams claim to perform test case reviews. The value seems obvious, right? Make sure the tester does not miss anything important. I think this is the conventional wisdom. On my team, the review is performed by a Stakeholder, BA, or Dev.
Valuable? Sure. But how valuable compared to testing itself? Here are the problems I have with Test Case Reviews:
- In order to have a test case review in the first place, one must have test cases. Sometimes I don’t have test cases…
- In order for a non-tester to review my test cases, the test cases must contain extra detail meant to make them meaningful to non-testers. IMO, detailed test cases are a huge waste of time, and often worthless or misleading in the end.
- In my experience, the tests suggested by non-testers are often poorly designed or already covered by existing tests. This becomes incredibly awkward. If I argue or refuse to add said tests, I look bad. Thus, I often just go through the motions and pretend I executed the poorly conceived tests. This is bad too. Developers are the exception here. In most cases, they get it.
- Being forced to formally review my test cases with others is demeaning. Aren’t I getting paid to know how to test something? When I execute or plan my tests, I question the oracles on my own. For the most part, I’m smart enough to know when I don’t understand how to test something. In those cases, I ask. Isn’t that what I’m being paid for?
- Stakeholders, BAs, or Devs hate reading test cases. Zzzzzzzzzz. And I hate asking them to take time out of their busy days to read mine.
- Test Case Reviews subtract from my available test time. If you’ve been reading my blog, you know my strong feelings on this. There are countless activities expected of testers that do not involve operating the product. This, I believe, is partly because testing effectiveness is so difficult to quantify. People would rather track something simple, like whether the test case review was completed: yes or no.