Richard Siemens responded to my Testers, Stay Frosty. Fight Tester Fatigue post and asked if I have any ideas on how to combat tester fatigue. I’m glad you asked, Richard, because my previous post was too lame to suggest any.
Here is what I think:
Stuff the tester can do to combat test fatigue:
- Treat yourself to static goals on occasion. For example, if you say “my goal today is to verify all the fixed bugs currently on this list”, it has a firm stopping point; the current list does not include the bugs that will be on that list in an hour. However, if you say “my goal today is to verify all the fixed bugs”, I’ll bet the devs are cranking out the fixes just as quickly as you verify them and the list just won’t clear. How unsatisfying. It’s like walking up the down escalator.
- Use an approach similar to Session Based Test Management where you follow a time-blocked mission. As you encounter bugs that feel annoying because they take you off track, just ignore them! That’s right…at least ignore them for now. Make a quick note, then go back and investigate said bugs when you and your team decide it is time.
- Show your manager a list of all the stuff you need to do and ask them to prioritize it. You may be surprised at some of the stuff with a low priority. In fact, if you make that list long enough, you’ll probably get some stuff knocked off it.
- Stop Testing and do something else. Take time each week to improve your testing skills. And don't tell me you're too busy. I don't buy it. It’s those testers who get complacent and test the same way day-after-day that make us look bad. Take a few hours each week to read testing blogs, start your own blog, write a program, improve your typing skills, read a chapter from a testing book, or just hang out in the break room chatting it up with your support team. Any decent manager will appreciate a happy employee taking a break to become a better tester in some way. Because any decent manager knows a better tester gets more work done when they do test.
Stuff the test manager can do to combat test fatigue:
- Be more careful about what you reward. Instead of placing so much weight on completing test work on time, go out of your way to reward discovered bugs, even right before deadlines: "Nice bug catch, that was close!"
- Assign non-testing tasks to testers. As testers, it’s refreshing to work on non-testing tasks every so often because testers can control non-testing tasks. These have hard and fast completion criteria and you only have to worry about doing it one way. Examples include:
  - Organizing regression tests
  - Building test metric reports
  - Giving tester presentations
  - Organizing a team outing
  - Facilitating a retrospective
  - Updating test environments
  - Engineering process improvements
- Let the whole development team know how much work the testers are doing. Let the testers know too. An easy way to do this is nightly team test status emails.
- Encourage a rest period during test cycles.
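The nightly status email idea can be automated. Here is a minimal sketch in Python; the tester names, the counts, and the `build_status_email` helper are all hypothetical stand-ins for whatever your bug tracker actually exposes.

```python
# A minimal sketch of a nightly team test status email. The stats dict is
# hard-coded for illustration; in practice you would pull these numbers
# from your bug tracker or test management tool.
from datetime import date

def build_status_email(tester_stats):
    """Format a plain-text nightly status summary for the whole team."""
    lines = [f"Test status for {date.today():%Y-%m-%d}", ""]
    total_tests = total_bugs = 0
    for tester, stats in sorted(tester_stats.items()):
        lines.append(f"  {tester}: {stats['tests_run']} tests run, "
                     f"{stats['bugs_found']} bugs found")
        total_tests += stats["tests_run"]
        total_bugs += stats["bugs_found"]
    lines += ["", f"Team total: {total_tests} tests run, {total_bugs} bugs found"]
    return "\n".join(lines)

# Example run with made-up numbers:
stats = {
    "Ann": {"tests_run": 42, "bugs_found": 5},
    "Raj": {"tests_run": 37, "bugs_found": 2},
}
print(build_status_email(stats))
```

Piping the result into your mail system (or just pasting it into an email) is enough; the point is that the whole team sees the testers' output every night.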
Every morning, before work, I put in 40 minutes on my YMCA's StairMaster whilst reading my Economist. I use the "Interval" workout setting, which requires two input parameters: Workout Level and Rest Level. Over the last two years, I've steadily been increasing the Workout Level while keeping the Rest Level about the same. When the machine kicks into the Workout Level, I give it everything I've got. Three minutes later, when I'm about to give up, it rewards me with the Rest Level, and man does it feel good. Then the cycle repeats.
I think testers should follow a model similar to my StairMaster's Interval workout. My teams have 4-week iterations. I stay out of everybody's hair during week 1 and try to set the mood that it's a rest week. We still attempt to test as early as possible; we just don't crank up the intensity until about weeks 2 and 3. We finish all new feature testing by the end of week 3, then begin to come back down during week 4 by concentrating on regression testing.
Just like my workouts, as we grow our tester skills, product knowledge, and team cadence, we should be able to increase our test level with each iteration, while allowing ourselves to maintain the same rest level.
Posted by Eric Jacobson at Thursday, November 11, 2010
Based on my own test experiences and those of my testers, I've noticed the following.
At the start of a test cycle, if your test fails, your likely reaction is:
“Yeah, baby, it failed! Yesssss! I rock!”
At the end of a test cycle, if your test fails, your likely reaction approaches:
“Damn! I can’t believe it failed. We’re never going to get this done in time. ...On second thought, maybe it didn’t actually fail. Maybe I did something wrong, I heard the DBA was doing some kind of maintenance, maybe that was the problem. Besides, the production servers are much faster, I’m sure they would work better. Perhaps if I reboot and try again, it will work. Then I won't have to tell anybody.”
Can you relate on some level? Finding bugs in fresh software gives us a rush. We joke about it; “Let me sink my teeth into your code!”. But after a while, we get tired of finding bugs. We just want stuff to work so we can move on. As we approach the ship date, we start to feel frustrated when stuff crashes. We’re actually…wait for it…disappointed to find another bug. We wish the test had passed.
Testers, be careful. Don’t ever let yourself grow tired of finding problems. When that happens, your ability to investigate diminishes and your team value drops. I call this “Tester Fatigue”. Being aware of tester fatigue is probably all you need to know to avoid it and stay frosty.
Posted by Eric Jacobson at Friday, November 05, 2010
It’s just not fair.
The better we test, the more we appear to miss our deadlines.
Skilled testers provide more feedback than unskilled testers. The skilled testers find more bugs and raise more questions. The more bugs found, the more testing is required to verify the bugs. The more bugs that are fixed, the more testing is required to regression test.
The unskilled tester scratches the surface. If no bugs or questions are discovered and little feedback (e.g., test results) is produced, the unskilled tester calls it a day at 5PM every day and naively goes home to watch TV. It's possible to get away with this, especially when the missed defects are never discovered in production; those that are may be written off as too difficult to catch in test. Poor performers can hide well in the test world. You may know some.
What can we do about this frustrating injustice?
Reduce feature ownership.
The above paradigm may be partly the result of feature ownership. If the testers are each assigned certain features to test and therefore only responsible for seeing those features through to production, we see the unskilled tester rewarded with easily meeting deadlines, and the skilled tester pulling her hair out, trying to keep up.
Test managers have some control over this. They can ask the unskilled tester to assist the skilled tester with less cognitively demanding tasks, such as bug verification or regression testing. This helps accentuate the team mentality: nobody goes home until all features are fully tested. Most Agile teams are already doing this, but I suspect the unskilled testers still manage to provide less value on Stories they pull from the task board.
Deadlines are not the main goal.
In most trades, we reward people for getting work done on time. Perhaps in testing, we should stop doing this. It’s almost as if we should do the opposite; reward testers for managing to keep the team busy fixing problems and thus not meeting the deadline.
I’m exaggerating, of course, but when we tell testers to “get this tested well and on time”, there is a conflict of interest. To make matters worse, it’s easier to look at a clock and say “great job, tester, you completed the testing on time” than it is to look at a piece of software and say “great job, tester, you tested this well”.
Let’s not forget to celebrate the efforts of those testers who always seem to be swamped and having a tough time meeting team test deadlines. They need a break sometimes too.