So you’ve got 10 new features to test in about 25% of the time you asked for… just another day in the life of a tester. How do you approach this effort?
Here is how I approach it.
- First, I sift through the 10 features and pick out the one I expect to have the most critical bugs (call it FeatureA). I test FeatureA and log its two or three most critical bugs.
- Next, I set FeatureA aside and repeat the above for the feature I expect to have the next most critical bugs (call it FeatureB). I know FeatureA still has undiscovered bugs. But I also know fixing FeatureA’s critical bugs will trigger FeatureA testing all over again, and I assume some of those undiscovered bugs will be indirectly resolved by the first batch of fixes. I’m careful not to waste time logging “follow-on” bugs.
- When bug fixes are released, I ignore them. I repeat the above until I have completed a first pass over all 10 new features.
- At this point something important has occurred: the devs and BAs know the general state of everything they are most interested in.
- Finally, I make additional passes, verifying bug fixes for each feature. As the features gradually become verified, I communicate this to the team by giving them a status of “Verified”. I use my remaining time to dig deeper on the weakest features.
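The passes above can be sketched as a tiny triage scheduler. This is a minimal illustration, not the author's actual tooling: the `Feature` class, the numeric `risk` score, and the `Verified` status handling are all hypothetical assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Feature:
    name: str
    risk: int                       # expected severity of undiscovered bugs (assumed scale)
    status: str = "Untested"
    bugs: list = field(default_factory=list)

def first_pass(features, log_bugs):
    """Test every feature once, riskiest first, before verifying any fixes."""
    for f in sorted(features, key=lambda f: f.risk, reverse=True):
        f.bugs = log_bugs(f)        # log only the two or three most critical bugs
        f.status = "Tested"

def verification_pass(features, fixed_bugs):
    """Later passes: mark a feature Verified once all its logged bugs check out."""
    for f in features:
        if f.status == "Tested" and all(b in fixed_bugs for b in f.bugs):
            f.status = "Verified"
```

The key property is in `first_pass`: it visits every feature exactly once before any verification happens, which is what keeps one risky feature from consuming the whole time budget.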
Okay, nothing groundbreaking here, but two tricks in the above deserve attention.
Trick 1 – Don’t spend too much time on individual features in the first pass. You want to give your devs the best information as early as possible across all 10 features. It’s far too easy to run out of time by picking one feature clean.
Trick 2 – Ignore those bug fixes until you finish your first pass over all 10 features. I know it’s hard; you’re anxious to see the damn thing fixed. But in my opinion, the unknowns of untested features are more valuable to chase down than the unknown of whether a bug is fixed. In my experience, when I log bugs well, verifying them is a brainless rubber-stamping activity.
How do you get the job done?