Well…yes. I would.
The most prolific bug finder on my team is struggling with this question. The less the team decides to fix her bugs, the less interested she grows in reporting them. Can you relate?
There is little satisfaction in reporting bugs that nobody wants to hear about or fix. In fact, it can be quite frustrating. Nevertheless, when our stakeholders choose not to fix certain classes of bugs, they are sending us a message about what is important to them right now. And as my friend and mentor Michael Bolton likes to say:
If they decide not to fix my bug, it means one of two things:
- Either I’m not explaining the bug well enough for them to understand its impact,
- or it’s not important enough for them to fix.
So as long as you’re practicing good bug advocacy, it must be the second bullet above. And IMO, the customer is always right.
Nevertheless, we are testers. It is our job to report bugs despite adversity. If we report 10 for every 1 that gets fixed, so be it. We should not take this personally. However, we may want to:
- Adjust our testing as we learn more about what our stakeholders really care about.
- Determine a non-traditional method of informing our team/stakeholders about our bugs.
- Individual bug reports are expensive because they slowly suck everyone’s time as they flow through, or sit in, the bug repository. We wouldn’t want to knowingly start filling our bug report repository with bugs that won’t be fixed.
- One approach would be a verbal debrief with the team/stakeholders after testing sessions. Your testing notes should have enough information to explain the bugs.
- Another approach could be a “super bug report”: one bug report that lists several bugs. Any bugs deemed important can get fixed or spun off into separate bug reports if you like.
It’s a cliché, I know. But it really gave me pause when I heard Jeff “Cheezy” Morgan say it during his excellent STAReast track session, “Android Mobile Testing: Right Before Your Eyes”. He said something like, “Instead of looking for bugs, why not focus on preventing them?”
Cheezy demonstrated Acceptance Test Driven Development (ATDD) with a live demo, writing Cucumber tests in Ruby for product code that didn’t exist yet. The tests failed until David Shah, Cheezy’s programmer, wrote the product code to make them pass.
(Actually, the tests never passed, which they later blamed on incompatible Ruby versions…ouch. But I’ll give these two guys the benefit of the doubt.)
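To make that red-to-green loop concrete, here is a minimal sketch in plain Ruby (Cheezy’s actual demo used Cucumber step definitions; the `Login` class, its method names, and the pass/fail rule below are all hypothetical, just to illustrate the workflow):

```ruby
# Step 1: the team writes the acceptance test first.
# It describes behavior for product code that doesn't exist yet,
# so running it now fails -- that failure is what drives development.
def acceptance_test
  login = Login.new                       # NameError until the class exists
  login.attempt(user: "cheezy", password: "secret")
  raise "expected a successful login" unless login.successful?
end

# Step 2: the programmer writes just enough product code to pass.
class Login
  def initialize
    @successful = false
  end

  def attempt(user:, password:)
    # (hypothetical rule: any non-empty credentials succeed)
    @successful = !user.empty? && !password.empty?
  end

  def successful?
    @successful
  end
end

# Step 3: re-run the acceptance test -- it now passes.
acceptance_test
puts "acceptance test passed"
```

The point isn’t the tooling; it’s the order of operations: the test exists, and fails, before the code does.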
Now back to my blog post title. I find this mind shift appealing for several reasons, some of which Cheezy pointed out and some of which he did not:
- Per Cheezy’s rough estimate, 8 out of 10 bugs involve the UI. There is tremendous benefit in the programmer knowing about these UI bugs while initially writing the UI. So why not have our testers begin exploratory testing before the Story is code complete?
- Programmers are often incentivized to get something to code complete so the testers can have it (and so the programmers can move on to the next thing). What if we could convince programmers it’s not code complete until it’s tested?
- Maybe the best time to review a Story is when the team is actually about to start working on it; not at the beginning of a Sprint. And what do we mean when we say the team is actually about to start working on it?
- First we (Tester, Programmer, Business Analyst) write a bunch of acceptance tests.
- Then, we start writing code as we start executing those tests.
- Yes, this is ATDD, but I don’t think automation is as important as the consultants say. More on that in a future post.
- Logging bugs is soooooo time-consuming and can lead to dysfunction. The bug reports have to be managed and routed appropriately. People can’t help but count them and use them as measurements of something…success or failure. If we are preventing bugs, we never need to create bug reports.
Okay, I’m starting to bore myself, so I’ll stop. Next time I want to explore Manual ATDD.