“Showstopper”. This means the show cannot go on. I guess the full metaphor is something like…the star of the show has lost her voice! The show must be stopped until her understudy is located…or, in our case, until the bug is fixed.

I’ve always hated the label “Showstopper”. I tried to convince my previous manager not to use it. I half-seriously argued that it’s a theater metaphor, and we don’t use theater metaphors for any other bug priority. Well, if some insist upon using this theater metaphor, perhaps we should incorporate other theater metaphors into software testing and development.

  • Maybe we should classify our second priority bugs as “Technical Difficulties” (i.e., the light board blew a fuse but the stage crew determines they can use the house lights to keep the performance going…a workaround.)
  • The third priority bugs would be called “Missed Lines” (i.e., an actor forgot a line but the other actors easily improvise and no critical story essentials are missing.)
  • And finally, “Mediocre Set Design” (i.e., the set is barebones and unconvincing but with a little imagination, the audience can still enjoy the story.)
And why stop with just bug priorities…
  • Instead of the User Acceptance Test phase we should call it “Dress Rehearsal”.
  • “Opening Night” is the night we deploy a new release to production.
  • When our users open Task Manager to force quit our app, they are “Cutting Out At Intermission”.
  • When the tester gets a perpetual hourglass, the devs can say the feature got “Stage Fright”.
  • We can make our open bug list public and call it “Breaking the fourth wall”.
  • As CM kicks off the build, we’ll remind them to “Break a Leg”.
  • If our users ask for more features, we’ll bow and consider it a “Standing Ovation”.
  • And the dev who is always throwing in those extra features nobody asked for can be the team “Prima Donna” or “Divo”.
  • And finally, if our load testing was done poorly, we may end up with long lines of people waiting to use the theater bathrooms. Eventually, people may wait so long they time out…er…“Pee Their Pants”.

So you’ve got 10 new features to test in about 25% of the time you asked for…just another day in the life of a tester. How do you approach this effort?

Here is how I approach it.

  • First, I sift through the 10 features and pick out the one I expect to have the most critical bugs (call it FeatureA). I test FeatureA and log two or three critical bugs.
  • Next, I drop FeatureA and repeat the above for the feature I expect to have the next most critical bugs (call it FeatureB). I know FeatureA still has undiscovered bugs. But I also know FeatureA’s critical bug fixes will trigger FeatureA testing all over again, and I assume some of those undiscovered bugs will be indirectly fixed by the first batch of fixes. I am careful not to waste time logging “follow-on” bugs.
  • When bug fixes are released, I ignore them. I repeat the above until I have given all 10 new features a first pass.
  • At this point something important has occurred: the devs and BAs know the general state of the features they are most interested in.
  • Finally, I repeat the above with additional passes, verifying bug fixes for each feature. As each feature checks out, I communicate this to the team by giving it a status of “Verified”. I use my remaining time to dig deeper into the weak features.

Okay, nothing groundbreaking here, but there are two tricks that should stand out in the above.

Trick 1 – Don’t spend too much time on any individual feature in the first pass. You want to give your devs the best info as early as possible on all 10 features. It’s way too easy to run out of time by picking one feature clean.

Trick 2 – Ignore those bug fixes until you get through your first pass on all 10 features. I know it’s hard. You’re so anxious to see the damn thing fixed. However, IMO, the unknowns of untested features are more valuable to chase down than the unknowns of whether bugs are fixed. In my experience, when I log bugs well, verifying them is a brainless rubber-stamping activity.
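Here’s a minimal sketch (Python, purely for illustration) of the schedule these two tricks produce. Every name and number below is made up; the “risk” score stands in for a tester’s gut feel about where the critical bugs are hiding.

# Pass 1: touch every feature once, riskiest first, logging only a few
# critical bugs per feature; park incoming fixes instead of verifying them.
features = {"FeatureA": 9, "FeatureB": 7, "FeatureC": 4}  # name -> guessed risk

parked = []
for name in sorted(features, key=features.get, reverse=True):
    print(f"Pass 1: shallow-test {name}, log 2-3 critical bugs, move on")
    parked.append(name)  # fixes will trickle in; deliberately ignore them for now

# Pass 2 and beyond: only now verify the parked fixes, mark each feature
# "Verified", and spend any time left digging into the weak features.
for name in parked:
    print(f"Pass 2: verify fixes for {name}, then set status to Verified")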

How do you get the job done?

An important bug got rejected by dev today. It was my fault.

I included an incorrect solution to the problem. Rather than describing the bug and calling it quits, I went further and described (what I believed to be) the right solution. The dev rejected it because my solution was flawed. The dev was correct…a bit lazy perhaps, but correct.

The main purpose of a bug is to identify a problem, not to specify a solution. I think it’s okay for testers to offer suggested solutions but they should be careful how they word the bug.

For example, if a tester logs this…

Expected Results: A
Actual Results: B because D needs to be used to determine E, F, and G. Please modify the operation to use D when determining E, F, and G.


The dev may read it and think, “modifying the operation to use D will not work…I’ll have to reject this bug.” …um, what about the problem?

A better bug would have been the following:

Expected Results: A
Actual Results: B


Let the dev figure out how to get to A. If you have repro steps and other suitable details, the dev will probably know how to fix it. If they don’t, they know who to ask for assistance. It may even be the tester!
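If you still want to offer a solution (and I think that’s fine), a safer pattern is to keep it out of the Actual Results entirely. Here’s one way to word it…my own template, not any standard:

Steps to Reproduce: 1…2…3…
Expected Results: A
Actual Results: B
Tester Notes (optional): Consider using D when determining E, F, and G.

This way the bug stands on its own, even if the suggested solution turns out to be flawed.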

Am I right?

After reading Tobias Mayer’s Test(osterone)-infected Developers, I noticed my test team has 3 men and 8 women, while my dev team has 30 men and 2 women. This is a small sample, but I agree with Tobias that it is the norm.

Are she-testers better testers, or just more interested in testing? This is a tired blogosphere discussion, but a more interesting question is:


Do she-testers have unique skills worth harnessing?


My answer is yes. I think women have at least one powerful advantage over men when it comes to testing: they are arguably better at observing subtle clues.

Most differences between men and women can be understood by noting their strongest biological roles. Women have babies! Thus, women are wired to pay attention to their babies and spot problems from subtle expressions or behavior changes (e.g., baby is sick). I’ve heard women are also better than men at determining whether someone is lying, for the same biological reasons.

Yesterday, while observing a UI bug where a field was populated prematurely, a she-tester on my team noticed a larger problem that I had missed: previously populated data was getting erased. Of course, this may have just been a case of “two heads are better than one”, but my she-testers always impress me with their subtle observations.

What differences have you observed between men and women testers? Can we use these differences to build a better test team?


