Whether you write manual or automated tests, you may have asked yourself how much to include in each test. Sometimes you may write tests with multiple steps that look like this…

Test #1
Step 1 - Do A. Expect B.
Step 2 - Do C. Expect D.
Step 3 - Do E. Expect F.

Or instead, you may write three separate one-step tests…

Test #2
Step 1 - Do A. Expect B.

Test #3
Step 1 - Do C. Expect D.

Test #4
Step 1 - Do E. Expect F.

Finally, you may even do this…

Test #5
Step 1 - Do A. Do C. Do E. Expect F.

Do you see an advantage or disadvantage to any of these three scenarios?
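If it helps to see the trade-offs in running form, here is a minimal sketch of the three scenarios as automated tests. I’m assuming pytest, and the do_a/do_c/do_e helpers are hypothetical stubs standing in for real actions against the AUT.

    # Hypothetical stubs; in a real suite these would drive the AUT.
    def do_a(): return "B"   # doing A yields B
    def do_c(): return "D"   # doing C yields D
    def do_e(): return "F"   # doing E yields F

    # Test #1: three dependent steps in one test. A failure at B stops the
    # test, so we learn nothing about C..F until B is fixed.
    def test_multi_step():
        assert do_a() == "B"
        assert do_c() == "D"
        assert do_e() == "F"

    # Tests #2-#4: one step per test. Failures are isolated and all three
    # always run, at the cost of three times the setup and bookkeeping.
    def test_a(): assert do_a() == "B"
    def test_c(): assert do_c() == "D"
    def test_e(): assert do_e() == "F"

    # Test #5: do everything, check only the end. Cheap to run, but a
    # failure here points at no particular step.
    def test_end_only():
        do_a()
        do_c()
        assert do_e() == "F"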

Well, I’m sure that given an unlimited amount of time, we could get you the exact minimum repro steps necessary to consistently reproduce this bug. However, after a reasonable attempt, we can’t figure them out. What do we do? Here is what I think…

Always err on the side of logging too many bugs. Log your best guess at the repro steps, note any other conditions that may be relevant, and add a note that the bug is only triggered sometimes. If the right dev gets the right clues, she may be able to crack it. If not, we can resolve it to a “No Repro” status and hope more information will lead to its resolution later. On a previous project we resolved these as “Phantom Bugs”, which seemed kind of fun to me.
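For illustration only, the kind of entry I mean might look like this. Every field name and detail below is hypothetical, not taken from any particular tracker or project.

    # A hypothetical phantom-bug entry; the point is the ID and the honesty
    # about what we do and don't know.
    phantom_bug = {
        "id": 4217,  # the BugID that keeps it referenceable later
        "title": "Export intermittently truncates the last page",
        "repro_steps": "Best guess: export a large report while a sync runs",
        "conditions": "Seen twice on this week's build; both times over VPN",
        "note": "Only triggers sometimes; could not reproduce on demand",
        "status": "No Repro",  # a.k.a. Phantom Bug on a previous project
    }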

I’ve noticed great value in the ability to reference a phantom bug with a BugID. Bugs without IDs are not really bugs. Instead, they just get vague names and eventually become lost in a sea of email threads that morph into other issues.

What do you think?

Management wants to know the state of the AUT, but they don’t really know what questions to ask. Worse yet, when they do ask…

  • How does the build look?
  • How much testing is left?
  • What are the major problems?
…I don’t know how to provide the simple answer they want.

Well, my team has been using a successful little trick that is super easy to implement.

We listed our modules on an old-fashioned white board in an area that gets plenty of foot traffic. Every three weeks, on build day, we run the smoke tests on the new build, and if all tests for a given module pass, the module gets a little green smiley face drawn next to it. If any tests for a given module fail and we cannot accept the module, it gets a sad red face. Finally, if any tests for a given module fail but we can work around the problems and accept the module, we draw a blue straight face.
The white board slowly gets updated throughout the day by my QA colleagues and me as we complete the smoke tests for each module. Satish looks happy in this picture because we were having a good build day. The various managers and dev leads naturally walk past the white board as the day goes on and have instant knowledge of the state of the build. It shields QA from having to constantly answer questions. Instead, we hear fun remarks like “Looks like you finally got a big smiley on System Admin Server, Stephanie, that’s a relief!” or “What’s up with all the red sad faces on your server solutions, Rob?”.
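If you ever wanted to generate the faces from automated smoke results instead of a marker, a toy sketch might look like the following. The module names, the result format, and the pass/workaround fields are all my own assumptions.

    # Hypothetical per-module smoke summary: (tests_failed, can_work_around).
    smoke_results = {
        "System Admin Server": (0, False),
        "Reporting": (2, True),     # failures, but we can work around them
        "Sync Server": (5, False),  # failures that block acceptance
    }

    def face(tests_failed, can_work_around):
        if tests_failed == 0:
            return "green :)"   # all smoke tests passed
        if can_work_around:
            return "blue :|"    # failures, but the module is accepted anyway
        return "red :("         # failures we cannot accept

    for module, (failed, workaround) in smoke_results.items():
        print(f"{module:<22} {face(failed, workaround)}")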

Our build-day white board was inspired by James Bach’s Low-Tech Dashboard, which contains some really cool ideas, some of which my team will experiment with soon. Michael Bolton introduced it to me in his excellent Rapid Software Testing class. Bach’s Low-Tech Dashboard is more complex, but in exchange it fends off even more inquisitive managers.

If your company is obsessed with portals, Gantt charts, spreadsheets, test case/defect reports, and e-mails, drawing smiley faces on a white board may be a refreshing alternative that requires less administrative work than its high-tech counterparts.

“Am I done testing?” is a tired question with several variations, and here is what I think about each.

If the question is, “Am I done testing this AUT?” the answer is, of course, no. There are an infinite number of tests to execute, so there is no such thing as finishing early in testing. Sorry. You should still be executing tests as your manager takes away your computer and rips you from your cubicle in an effort to stop you from logging your next bug. Or as Ben Simo’s 12-year-old daughter puts it, you’re not done testing until you die or get really tired of it.

The more realistic question, “Is it time to stop testing this AUT?” probably depends on today’s date. We do the best we can within the time we are given. Some outside constraint (e.g., a project completion date, a ship date, being reassigned to a different role) probably provides your hard stop. The decision of when to stop testing is not left up to the tester, although your feedback was probably considered early on… when nothing was really known about the AUT.

Finally, the question, “Am I done testing this feature?” is much more interesting and valuable to the tester. Assuming your AUT has multiple features that are ready for testing, you’ll want to pace yourself so that attention is given to all features, or at least all the important ones. This is a balancing game, because too much time spent on any one feature may mean neglecting the others. I like to use two heuristics to guide me.

Popcorn Heuristic – I heard this one at Michael Bolton’s excellent Rapid Software Testing class. How do we know when a bag of microwave popcorn is finished popping? The pops come faster and faster, then slow until there are several seconds between them. Bugs are discovered in much the same way. We poke around and start finding a few bugs. We look a little deeper and suddenly bugs are popping up like crazy. Finally, they start getting harder to find. Stop the microwave. Move on to the next feature; we’re done here!
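If you happen to track bug counts per test session, the popcorn rule can even be roughed out in code. This is strictly a toy interpretation of the heuristic, with made-up numbers and a threshold I chose arbitrarily.

    # Bugs found in successive test sessions on one feature (made-up data).
    finds = [1, 2, 5, 8, 7, 3, 1, 0, 1]

    def popping_has_slowed(finds, window=3, threshold=0.25):
        # Stop when the recent find rate falls well below the peak rate,
        # like the pops spacing out near the end of the bag.
        if len(finds) < window:
            return False  # too early to judge
        recent = sum(finds[-window:]) / window
        return recent < threshold * max(finds)

    for session in range(1, len(finds) + 1):
        if popping_has_slowed(finds[:session]):
            print(f"Stop after session {session}; on to the next feature.")
            break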

Straw House Heuristic – I picked up this one from Adam White’s blog. Don’t use a tornado to hit a straw house. When we first begin testing a new feature, we should poke at it with some simple sanity tests and see how it does. If it can’t stand up to the simple stuff, we may be wasting our time with deeper testing. It’s hard to resist the joy of flooding the bug tracking system with easy bugs, but our skills would be wasted, right? Make sure your devs agree with your assessment that said feature is not quite ready for your tornado, log some high-level bugs, and ask them if they’ll build you a stone house. Then move on to the next feature; we’re done here!
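As a sketch, the straw-house gate might look like this: run the cheap sanity checks first, and only unleash the deep, expensive suite if the feature survives them. The check names and results here are invented for illustration.

    def sanity_suite():
        # Cheap straw-house pokes; in real life these would drive the feature.
        return {"launches": True, "saves a record": True, "basic search": False}

    failures = [name for name, passed in sanity_suite().items() if not passed]
    if failures:
        print("Straw house fell over on: " + ", ".join(failures))
        print("Log high-level bugs, ask for a stone house, move on.")
    else:
        print("Sanity held up; bring on the tornado (the deep test suite).")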

What do you think?


