Sometimes, the most feasible way to test something is to let it soak in an active test environment for several weeks. Examples:
- No repro steps but general product usage causes data corruption. We think we fixed it. Release the fix to an active test environment, let it soak, and periodically check for data corruption.
- A scheduled job runs every hour to perform some updates on our product. We tested the hourly job, now let’s let it run for two weeks in an active test environment. We expect each hourly run to be successful.
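For the hourly-job example, the periodic check during the soak period can be a small script. Here is a minimal sketch, assuming the scheduler's run history can be pulled as timestamped status records; the record format and names here are hypothetical:

```python
from datetime import datetime, timedelta

# Hypothetical run history: (timestamp, status) pairs pulled from the
# scheduler. In practice these would come from logs or a job-history API.
runs = [
    (datetime(2024, 1, 1, 0, 5), "success"),
    (datetime(2024, 1, 1, 1, 5), "success"),
    (datetime(2024, 1, 1, 2, 5), "failed"),
    (datetime(2024, 1, 1, 4, 5), "success"),  # note: hour 3 has no run at all
]

def soak_gaps(runs, start, end):
    """Return each hour in [start, end) that lacks a successful run."""
    gaps = []
    hour = start
    while hour < end:
        ok = any(hour <= ts < hour + timedelta(hours=1) and status == "success"
                 for ts, status in runs)
        if not ok:
            gaps.append(hour)
        hour += timedelta(hours=1)
    return gaps

gaps = soak_gaps(runs, datetime(2024, 1, 1, 0, 0), datetime(2024, 1, 1, 5, 0))
for g in gaps:
    print(f"No successful run in the hour starting {g}")
```

A tester might run something like this once a day during the two-week soak, so "observing" stays a few minutes of effort rather than hourly vigilance.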
According to Google, soak testing involves observing behavior under load for an extended period of time. In my case, the load is normally a handful of human testers, as opposed to a large programmatic load of thousands. Nevertheless, the term is finally catching on within my product teams.
Who cares about the term? I like it because it honestly describes the tester effort, which is very little. It does not mislead the team into thinking testers are spending much time investigating something. It's almost like not testing. Yet we still plan to observe from time to time and eventually make an assessment of success or failure.
Be sure to over-enunciate the "k" in "soak". People on my team thought I was saying "soap" test. I'm not sure what a soap test is…but I'm sure it exists too!
The Proceedings of the National Academy of Sciences just published a study, "Extraneous factors in judicial decisions", which finds that decision making is mentally taxing: when people are forced to continually make difficult decisions, they get tired and begin opting for the easiest decision.
Eight parole board judges were observed for 10 months, as they ruled whether or not to grant prisoners parole. The study noticed a trend. Near the end of work periods, prisoners being granted parole dropped significantly. The decision of granting parole takes much longer to explain and involves more work than the decision to deny parole.
(For those of you reading my blog from prison, try to get your parole hearing scheduled first thing in the morning or right after lunch.)
What does this remind you of?
Testing! I don't have the ambition to perform said study on test teams, but I have certainly experienced the same pattern. I'm guessing fewer bugs get logged later in the day. Deciding you found a bug is a much more difficult decision than deciding you didn't. It means investigating, logging a report, sometimes convincing people, testing the fix, regression testing what broke, etc.
I wrote about Tester Fatigue and suggested solutions in How To Combat Tester Fatigue. But according to the above study, taking breaks from testing is paramount. Therefore, I will now head downstairs for some frozen yogurt.
When programmers log bugs, we testers are grateful, of course. But when programmer-logged bugs travel down their normal workflow and fall into our laps to verify, we're sometimes befuddled…
"Hey! Where are the repro steps? How can I simulate that the toolbar container being supplied is not found in the collection of merged toolbars?"
I used to insist that every bug fix be tested by a tester. No exceptions! Some of these programmer-logged bugs were so technical that I had to hold the programmer's hand through my entire test and test it the same way the programmer already had. This is bad because my test would not find out anything new. Later I realized I was not only wasting the programmer's time, I was also wasting my own time, which could have been spent executing other new tests.
Sometimes it's still good to waste that time for the sake of understanding, but don't make it a hard-and-fast rule for everything. Instead, you may want to do as follows:
- Ask the programmer how they fixed it and tested their fix. Does it sound reasonable?
- Ensure the critical regression tests will be run in the patched module, before production deployment.
Then rubber stamp the bug and spend your time where you can be more helpful.