A tester asked me an interesting question this morning:

“How can I find old test documentation for a completed feature so I can re-use those tests on a similar new feature?”

The answer is easy.  But that’s not what this post is about. 

It seems to me that a skilled tester can usually come up with better tests today, from scratch.  Test documentation gets stale fast.  These are some reasons I can think of:

  • A skilled tester knows more about testing today than they did last month.
  • A skilled tester knows more about the product-under-test today than they did last month.
  • The product-under-test is different today than it was last month.  It might have new code, refactored code, more users, more data, a different reputation, a different platform, a different time of the year, etc.
  • The available time to perform tests might be different.
  • The test environment might be different.
  • The product coder might be different.
  • The stakeholders might be different.
  • The automated regression check suite may be different.

If we agree with the above, we’ll probably get better testing when we tailor it to today’s context.  It’s also way more fun, and probably quicker, to design new tests (unless we are talking about automation, which I am not).

So I think digging up old test documentation to determine which tests to run today might be the wrong reason to dig it up.  A good reason is to answer questions about the testing that was performed last month.

While reading Paul Bloom’s The Baby In The Well article in The New Yorker, I noted the Willie Horton effect’s parallel to software testing:

In 1987, Willie Horton, a convicted murderer who had been released on furlough from the Northeastern Correctional Center, in Massachusetts, raped a woman after beating and tying up her fiancé. The furlough program came to be seen as a humiliating mistake on the part of Governor Michael Dukakis, and was used against him by his opponents during his run for President, the following year. Yet the program may have reduced the likelihood of such incidents. In fact, a 1987 report found that the recidivism rate in Massachusetts dropped in the eleven years after the program was introduced, and that convicts who were furloughed before being released were less likely to go on to commit a crime than those who were not. The trouble is that you can’t point to individuals who weren’t raped, assaulted, or killed as a result of the program, just as you can’t point to a specific person whose life was spared because of vaccination.

How well was a given application tested?  Users don’t know what problems the testers saved them from.  The quality may be celebrated to some extent, but one production bug will get all the press.

If you find an escape (i.e., a bug in something marked “Done”), you may want to develop an automated check for it.  In a meeting today, there was a discussion about when the automated check should be developed.  Someone asked, “Should we put a task on the product backlog?”  IMO:
The automated check should be developed when the bug fix is developed.  It should be part of the “Done” criteria for the bug.
Apply the above heuristically.  If your bug gets deferred to a future Sprint, defer the automated check to that future Sprint.  If your bug gets fixed in the current Sprint, develop your automated check in the current Sprint.
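
To make that concrete, here is a minimal sketch of what an escape-driven check might look like, written as a pytest-style regression test.  The function, the bug number, and the inputs are all hypothetical; the point is simply that the check names the escape it guards against and ships alongside the fix, as part of the bug’s “Done” criteria.

    # Hypothetical sketch: the fixed product code and its escape-driven
    # regression check living in the same change set. Names, numbers, and
    # inputs are invented for illustration.

    def calculate_discount(quantity: int, unit_price: float) -> float:
        """Return the bulk discount amount for an order line (hypothetical product code)."""
        if quantity == 0:
            return 0.0  # the fix: quantity 0 no longer blows up downstream
        bulk_rate = 0.05 if quantity >= 10 else 0.0
        return quantity * unit_price * bulk_rate


    def test_escape_bug_1234_zero_quantity():
        """Regression check for escape BUG-1234 (hypothetical): an order line
        with quantity 0 used to raise an error in production. Developed in the
        same Sprint as the fix."""
        assert calculate_discount(quantity=0, unit_price=9.99) == 0.0


    def test_discount_still_applies_to_bulk_orders():
        """Guard against the fix breaking existing behavior."""
        assert calculate_discount(quantity=10, unit_price=10.00) == 5.0

When a check like this fails months later, the bug ID in its name tells the next tester exactly why it exists.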

If a tree falls in the forest and nobody hears it, does it make a sound?  If you have automated checks and nobody knows it, does it make an impact?

To me, the value of any given suite of automated checks depends on its usage...


[Image: spectrum]


