My manager recently said she hates raking leaves because as soon as she rakes her yard, she turns around and there are more leaves to rake.
I immediately thought…weird, that’s exactly what testing feels like.
We get a build. It’s full of bugs. We work hard all day logging bugs. By the time we have a chance to turn around and admire our working application under test (AUT), we’ve gotten a new build and the bugs are back. So we get out the rake and start all over.
If I’m short on time, sometimes I just rake the important parts of the yard. If I’m not sure where to start, I usually look under the large oak trees; I’ve noticed fewer leaves under the loblolly pines. If I’m expecting a windy day, I usually wait until the next day to rake, allowing the fallen leaves to accumulate. Sometimes I find an obscure leaf…I have to ask my wife whether I should rake it or leave it there. If I get done early, I might rake out some of those leaves from last season, from the garden bed on the side of the house.
It’s exhausting, really. But somebody’s got to keep the yard clean.
Labels: testing metaphor
Two days ago I logged a slam-dunk bug. It was easy to understand, so I cut corners, skipping the screen capture and additional details.
Yesterday a dev rejected said bug! After re-reading the repro steps, I decided the dev was a fool. However, a brief conversation revealed that even a semi-intelligent being (like a dev) could justifiably get confused. That’s when it hit me. If someone doesn’t understand my bug, it’s my fault.
Most testers experience the AUT via the UI and what seems obvious to the tester may not be obvious to the dev.
So if the dev is confused about your bug, it’s your fault. Graciously apologize and remove the ambiguity. Remember, we’re not dealing with people. We’re dealing with devs.
Another of Robert Sabourin’s STPCon sessions I attended was “Deciding What Not to Test”.
Robert did not give me the magic, simple answer I was hoping for. His main point was that it’s better to skip certain tests because you decided not to test them, rather than because you ran out of time.
I agree, but I’m not convinced this is practical. In order to decide what not to test, one must spend time devising lots of extra tests that will be thrown out. I don’t like that. If there are infinitely many tests to execute, when do I stop coming up with them? Instead, I prefer to come up with them as I test, within my given testing time. My stopping point then becomes the clock. I would like to think that, as a good tester, I will come up with the best tests first. Get it?
Maybe “which tests should I NOT execute?” is the wrong question. Maybe the better question is “which tests should I execute now?”.
At any rate, a tester can easily find themselves with too many tests to execute in the available time, er…time they are willing to work. When this situation arises, Robert suggests a few questions to help prioritize:
1.) What is the risk of failure?
2.) What is the consequence of failure?
3.) What is the value of success?
Here are my interpretations of Robert’s three questions:
1.) A tester can answer this. Does this dev usually create bugs in similar features? How complex is this feature? How detailed were the specs, and how likely is it that the correct info was communicated to the dev?
2.) This should be answered from the stakeholder’s perspective, though a good tester can answer it as well. It all comes down to one question: will the company lose money?
3.) This one should also be answered from the stakeholder’s perspective. If this test passes, who cares? Will someone be relieved that the test passed?
So if you answer “high” to any of Robert’s three questions, I would say you had better execute the test.
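The rule above, score each of Robert’s three questions and execute anything that scores “high,” is mechanical enough to sketch in code. This is my own illustration, not anything Robert presented; the test names, the 1–3 scale, and the `priority` scheme are all made up for the example:

```python
# A toy sketch of prioritizing tests by scoring Robert's three questions.
# Scale is an assumption: 1 = low, 2 = medium, 3 = high.
from dataclasses import dataclass

@dataclass
class CandidateTest:
    name: str
    risk_of_failure: int         # how likely is this feature to be buggy?
    consequence_of_failure: int  # what does the company lose if it breaks?
    value_of_success: int        # who is relieved when it passes?

    def priority(self) -> int:
        scores = (self.risk_of_failure,
                  self.consequence_of_failure,
                  self.value_of_success)
        # Any "high" answer forces top priority; otherwise sum the scores.
        return 9 if 3 in scores else sum(scores)

# Hypothetical backlog -- names invented for illustration.
backlog = [
    CandidateTest("login with expired password", 3, 2, 1),
    CandidateTest("tooltip spelling", 1, 1, 1),
    CandidateTest("checkout total calculation", 2, 3, 2),
]

# Execute highest-priority tests first; the clock decides where we stop.
for t in sorted(backlog, key=lambda t: t.priority(), reverse=True):
    print(t.name, t.priority())
```

The point isn’t the arithmetic; it’s that sorting the backlog this way lets the clock, not a guess, decide which tests never run.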
Do you have any better advice on knowing what not to test? If so, please share!
Labels: writing tests