We have a recurring conversation on both my project teams. Some testers, programmers, and BAs believe certain work items are “testable” while others are not. For example, some testers believe a service is not “testable” until its UI component is complete. I’m sure most readers of this blog would disagree.
A more extreme example of a work item believed by some to not be “testable” is a work item for Programmer_A to review Programmer_B’s code. But there are several ways to test that, right?
- Ask Programmer_A if they reviewed Programmer_B’s code. Did they find problems? Did they make suggestions? Did Programmer_B follow coding standards?
- Ask Programmer_B what feedback they received from Programmer_A, and whether it matches Programmer_A’s account.
- Attend the review session.
- Install tracking software on Programmer_A’s PC that programmatically determines if said code was opened and navigated appropriately for a human to review.
IMO, everything is testable to some extent. But that doesn’t mean everything should be tested. These are two completely different things. I don’t test everything I can test. I test everything I should test.
I firmly believe a skilled tester should have the freedom to decide which things they will spend time testing, and when. In some cases it may make more sense to wait and test the service indirectly via the UI. In some cases it may make sense to verify that a programmer code review has occurred. But that decision should be made by the tester, based on their available time and their queue of other things to test.
We don’t need no stinkin’ “testable” flag. Everything is testable. Trust the tester.