Whether writing manual or automated tests, you may have asked yourself how much stuff you should include in each test. Sometimes you may write tests with multiple steps that look like this…

Test #1
Step 1 - Do A. Expect B.
Step 2 - Do C. Expect D.
Step 3 - Do E. Expect F.

Or instead, you may write three separate one step tests…

Test #2
Step 1 - Do A. Expect B.

Test #3
Step 1 - Do C. Expect D.

Test #4
Step 1 - Do E. Expect F.

Finally, you may even do this…

Test #5
Step 1 - Do A. Do C. Do E. Expect F.
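
If it helps to picture those shapes as automated checks, here is a minimal sketch in plain VBScript (run it under cscript). DoA, DoC, DoE and the Expect functions are hypothetical stand-ins for whatever actions and verifications your AUT really needs; only the structure of the tests is the point.

    ' Hypothetical placeholders - swap in whatever your AUT actually needs.
    Sub DoA()
    End Sub
    Sub DoC()
    End Sub
    Sub DoE()
    End Sub
    Function ExpectB()
        ExpectB = True
    End Function
    Function ExpectD()
        ExpectD = True
    End Function
    Function ExpectF()
        ExpectF = True
    End Function

    ' Tiny helper so every expectation prints a PASS or FAIL line.
    Sub Check(passed, message)
        If passed Then
            WScript.Echo "PASS: " & message
        Else
            WScript.Echo "FAIL: " & message
        End If
    End Sub

    ' Test #1 - one test, three steps, a check after every step.
    Sub Test1()
        DoA
        Check ExpectB(), "B after A"
        DoC
        Check ExpectD(), "D after C"
        DoE
        Check ExpectF(), "F after E"
    End Sub

    ' Tests #2-#4 - the same checks, each in its own independent one-step test.
    Sub Test2()
        DoA
        Check ExpectB(), "B after A"
    End Sub
    Sub Test3()
        DoC
        Check ExpectD(), "D after C"
    End Sub
    Sub Test4()
        DoE
        Check ExpectF(), "F after E"
    End Sub

    ' Test #5 - run all three steps but only check the final result.
    Sub Test5()
        DoA
        DoC
        DoE
        Check ExpectF(), "F after A, C, and E"
    End Sub

    Test1
    Test2
    Test3
    Test4
    Test5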

Do you see an advantage or disadvantage to any of these three scenarios?

Well, I’m sure that if you give us an unlimited amount of time, we can get you the exact minimum repro steps necessary to consistently reproduce this bug. However, after a reasonable attempt, we can’t figure them out. What do we do? Here is what I think…

Always err on the side of logging too many bugs. Log your best guess at the repro steps along with any other conditions that may be relevant, and add a note that the bug is only triggered sometimes. If the right dev gets the right clues, she may be able to crack it. If not, we can resolve it to a “No Repro” status and hope more information will lead to its resolution later. On a previous project we resolved these as “Phantom Bugs”, which seemed kind of fun to me.

I’ve noticed great value in the ability to reference a phantom bug with a BugID. Bugs without IDs are not really bugs. Instead, they just get vague names and eventually become lost in a sea of email threads that morph into other issues.

What do you think?

Management wants to know the state of the AUT, but they don’t really know what questions to ask. Worse yet, when they do ask…

  • How does the build look?
  • How much testing is left?
  • What are the major problems?
…I don’t know how to provide the simple answers they want.

Well, my team has been using a little trick that works well and is super easy to implement.

We listed our modules on an old-fashioned white board in an area that gets plenty of foot traffic. Every three weeks, on build day, we run the smoke tests on the new build, and if all tests for a given module pass, the module gets a little green smiley face drawn next to it. If any tests for a given module fail badly enough that we cannot accept the module, it gets a sad red face. Finally, if any tests for a given module fail but we can work around the problems and accept the module, we draw a blue straight face.
The white board slowly gets updated over the course of the day by my QA colleagues and me as we complete the smoke tests for each module. Satish was all smiles that day because we were having a good build day. The various managers and dev leads naturally walk past the white board throughout the day and have instant knowledge of the state of the build. It shields QA from having to constantly answer questions. Instead, we hear fun remarks like “Looks like you finally got a big smiley on System Admin Server, Stephanie, that’s a relief!” or “What’s up with all the red sad faces on your server solutions, Rob?”.

Our build day white board was inspired by James Bach’s Low-Tech Dashboard, which contains some really cool ideas, some of which my team will experiment with soon. Michael Bolton introduced this to me in his excellent Rapid Software Testing class. Bach’s Low-Tech Dashboard is more complex, but in exchange it fends off even more inquisitive managers.

If your company is obsessed with portals, Gantt charts, spreadsheets, test case/defect reports, and e-mails, drawing smiley faces on a white board may be a refreshing alternative that requires less administrative work than its high-tech counterparts.

This tired question, “Am I done testing?”, has several variations, and here is what I think about each.

If the question is, “Am I done testing this AUT?”, the answer is, of course, no. There are an infinite number of tests to execute, so there is no such thing as finishing early in testing. Sorry. You should still be executing tests as your manager takes away your computer and rips you from your cubicle in an effort to stop you from logging your next bug. Or as Ben Simo’s 12-year-old daughter puts it, you’re not done testing until you die or get really tired of it.

The more realistic question, “Is it time to stop testing this AUT?” probably depends on today’s date. We do the best we can within the time we are given. Some outside constraint (e.g., project completion date, ship date, you are reassigned to a different role) probably provides your hard stop. The decision of when to stop testing is not left up to the tester, although your feedback was probably considered early on… when nothing was really known about the AUT.

Finally, the question, “Am I done testing this feature?”, is much more interesting and valuable to the tester. Assuming your AUT has multiple features that are ready for testing, you’ll want to pace yourself so attention is given to all features, or at least all the important ones. This is a balancing game because too much time spent on any given feature may cause others to be neglected. I like to use two heuristics to guide me.

Popcorn Heuristic – I heard this one at Michael Bolton’s excellent Rapid Software Testing class. How do we know when a bag of microwave popcorn is finished popping? The pops come faster and faster, then slow down until the gaps between them get long enough that we stop the microwave. Bugs are discovered in much the same way. We poke around and start finding a few bugs. We look a little deeper and suddenly bugs are popping up like crazy. Finally, they start getting harder to find. Stop the microwave. Move on to the next feature; we’re done here!

Straw House Heuristic – I picked up this one from Adam White’s blog. Don’t use a tornado to hit a straw house. When we first begin testing a new feature, we should poke at it with some simple sanity tests and see how it does. If it can’t stand up to the simple stuff, we may be wasting our time with further testing. It’s hard to resist the joy of flooding the bug tracking system with easy bugs, but our skills would be wasted, right? Make sure your devs agree with your assessment that said feature is not quite ready for your tornado, log some high-level bugs, and ask them if they’ll build you a stone house. Then move on to the next feature; we’re done here!

What do you think?

Do you use bug templates? You should. Unfortunately, good bug reports require lots of overhead. It’s not enough to just enter your perfected repro steps. You have to set a severity, priority, area, and version tested, assign the bug to someone, etc. Because these tedious fields are often filled in with the same values, you can make logging bugs a quicker and more pleasant experience by starting with a template that already has your typical entries. How? This, of course, depends on your bug tracking system. Lately, I’ve been using Microsoft VSTS or Mercury Quality Center (TestDirector).

If you use VSTS, download the TFS Work Item Templates Power Tools release. I used it to create various templates for the common chunks of bug entries I submit. All my templates also add starter text to the description field, like “Repro Steps:”, which becomes the heading above the repro steps.

If you use TestDirector, you can modify the Workflow scripts with simple VBScript additions that go beyond merely populating fields with defaults. For example, a workflow script can automatically change one field when you update another field; I’m using one to assign the appropriate dev to each bug based on the area I found it in. A rough sketch of the idea follows.
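
This is a sketch of the idea, not our exact script. The Bug_FieldChange event and the Bug_Fields collection are standard TestDirector workflow hooks; the custom area field BG_USER_01 and the developer names are made-up placeholders for whatever your project defines.

    Sub Bug_FieldChange(FieldName)
        ' When the tester picks an area, pre-assign the bug to the dev who owns it.
        ' BG_USER_01 is a hypothetical custom Area field; the dev names are placeholders.
        If FieldName = "BG_USER_01" Then
            Select Case Bug_Fields("BG_USER_01").Value
                Case "System Admin Server"
                    Bug_Fields("BG_RESPONSIBLE").Value = "stephanie"
                Case "Server Solutions"
                    Bug_Fields("BG_RESPONSIBLE").Value = "rob"
            End Select
        End If
    End Sub
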
Search for “Workflow Scripts” in TestDirector’s help menu to get started. Note: you may have to ask your TestDirector admin for “Set Up Workflow” permissions.

Does anyone else use bug templates?

I hate when stuff works.

I think testing is at its most boring when stuff works. Yesterday I spent my entire day verifying one feature: it took all day to set up and execute eight “happy” tests, all positive paths I needed to verify. All eight tests passed and I have to admit, I’m disappointed. If I don’t have a chunk of bugs logged at the end of a session, I typically feel like I’ve accomplished nothing. And I figure the rest of the team must see it the same way. Can anyone relate?

But this is silly because our purpose as testers is not to log as many bugs as we can. That would be too easy. We’re also supposed to determine what does work. The trouble is, it’s just not as fun. So how can we feel some sense of reward or accomplishment by verifying features that work?

Here are a few ideas…

1.) For starters, your passed tests should be logged somewhere. In my case, they’re in the Test Lab module in TestDirector (both automated and manual tests). If you use James Bach’s Session Based Test Management process, you have completed session sheets or charters stored in a directory. Seeing little green “PASS” indicators next to my eight tests gives me a little pleasure and reflects my work to some extent.

2.) Another thing I tried was to verbally tell my devs said feature worked well. Certainly they appreciated the compliment, and I felt as if they appreciated my efforts.

3.) Perhaps I should have mixed some “unhappy” tests into my session. Said feature surely still has bugs in it somewhere. Maybe it’s worth keeping my enthusiasm high by allowing myself to sneak some nasty tests in with the friendly ones.

4.) Or perhaps this is just a reality of software testing we need to accept. Testers only appear valuable when they uncover problems.

I would love to hear anyone else’s ideas on how to feel valuable during the dry spells, when bug discovery is low or absent.

I love reading software tester blogs but sometimes I can't relate. Many of the topics are too academic to have any practical value for my daily testing struggles. Test blogs and forums often discuss test approaches (e.g., manual vs. automated, scripted vs. exploratory). These are interesting topics but many are outside my scope of control. I can influence my managers to some extent, but I also have to operate within the processes and tools they dictate.

I work for a QA group in a large company that is very metric-hungry when it comes to testing. Most of my managers love detailed manual test cases, requirements coverage, and other practices that create administrative work for us testers, thereby reducing our available time for actual testing. In practice, I think most of my peers test the way I do: attacking a feature with an exploratory-type approach, then updating the execution results of a handful of test cases that give a vague and superficial representation of what was tested.

Recently, some of my managers have also decided we should attempt to automate most of our tests, which, from their perspective, seems realistic and should free up our time because we can just fire off automated tests instead of wasting time on manual execution. One manager tells of how, in the good old days when he was a tester, he would launch his automation suite and take the rest of the day off. This romanticized version of test automation is far from anything I can fathom... and I think he may be exaggerating.

So I'm left in the awkward position of trying to be a valuable tester from my managers' perspective but also from the perspective of the software team I support. My daily struggles are typically not very romantic and my ideas are not groundbreaking. However, I do feel myself improving with each question I answer. And I don't think I'm the only tester to waste energy on questions like these...

  • Did someone log this already?
  • How much more time should I spend investigating this bug?
  • Should I reopen the bug or log a new one?
  • Is it a bug?
  • Should I be embarrassed using a stopwatch to performance test the Login screen?
  • Was that test worth automating?
  • Is it ready to test?
  • Should I log it without repro steps?
  • Am I bored?
  • Am I valuable?
  • Did I test this already?
  • Is my goal to find as many bugs as possible?
  • Who do I really serve?
  • Do my bugs suck?
  • Is my job lame?
  • Can I log a bug because I hate the way the UI looks?
  • Am I irritated with my AUT?
  • When is my job done?
  • Did my devs smoke crack while they wrote this?
  • Does anyone really get performance testing?
  • Does my pride hurt when my bugs get rejected?
  • What the hell is this feature supposed to do?
  • Should I be spending time logging bugs on the hourglass pointers that don't trigger?
  • Do I possess any special abilities or am I just an A-hole with the patience to submit another fake order for the 300th time?
Hopefully this blog will find its niche discussing the unromantic software test struggles of the hands-on tester. I'm not a manager. I'm not a consultant. I was never a professional developer. I'm not cool enough to have a beard with braids in it. I haven't written any testing books. And I haven't spoken on testing in front of a large audience. However, I have been practicing the art of testing software for the last seven years and I've experienced many practices that do and do not appear to work. I plan to share those here and I hope you will show me where I am wrong, offer your own solutions, or give me a pat on the back. See you soon.


