  1. Spend time reporting problems that already exist in production and that users have not asked to have fixed.
  2. Demand all your bugs get fixed, despite the priorities of others.
  3. Keep your test results to yourself until you’re finished testing.
  4. Never consider using test tools.
  5. Attempt to conduct all testing yourself, without asking non-testers for help.
  6. Spend more and more time on regression tests each sprint.
  7. Don’t clean up your test environments.
  8. Keep testing the same way you’ve always tested.  Don’t improve your skills.
  9. If you need more time to test something, ask to have it pulled from the sprint; you can test it during the next sprint.
  10. Don’t start testing until your programmer tells you “okay, it’s ready for testing”.

If you made two lists for a given software feature (or user story):

  1. all the plausible user scenarios you could think of
  2. all the implausible user scenarios you could think of

…which list would be longer?

I’m going to say the latter.  The user launches the product, holds down all the keys on the keyboard for four months, removes all the fonts from their OS, then attempts to save a value at the exact same time as one million other users.  One can invent implausible user scenarios without any domain knowledge.

Plausible scenarios should be easier to predict, by definition.  It may be that only one out of 100 users strays from the “happy path”; when that one user does, our product has just experienced an implausible scenario.

What does this have to do with testing?  As time becomes dearer, I continue to refine my test approach.  It seems to me that the best tests to start with are still confirmatory tests (some call these “happy path” tests).  There are fewer of them, which makes it easier to know when to move on to tests for the scenarios less likely to occur.

[Chart: test plausibility (Y axis) vs. test execution order (X axis)]

The chart above is my attempt to illustrate the test approach model I have in my head.  The Y axis shows how plausible a test is (e.g., 100% of users will do this, 50% of users will do this).  The X axis shows the order of test execution (1st test executed, 2nd test executed, etc.).  The number of tests executed is relative.

Basically, I start with the most plausible tests, then shift my focus to the stuff that will rarely happen.  The rare scenarios at the bottom of the chart can continue forever as you approach 0% plausibility, so I generally use the “Time’s Up” stopping heuristic.  One can better tackle testing challenges with this model by making an effort to learn how users normally use the product.
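To make the model concrete, here is a minimal sketch in Python.  The scenario list, plausibility estimates, and one-hour budget are all invented for illustration; the idea is simply to execute tests in descending order of plausibility until the “Time’s Up” heuristic fires.

```python
import time

# Hypothetical scenarios: (description, estimated plausibility from 0.0 to 1.0).
# The descriptions and numbers are made up for illustration.
scenarios = [
    ("save a document with a typical filename", 1.00),
    ("save a document with a very long filename", 0.50),
    ("save while the disk is nearly full", 0.10),
    ("save at the same instant as another user", 0.01),
]

def run_tests(scenarios, budget_seconds):
    """Run tests most-plausible-first until the time budget expires
    (the "Time's Up" stopping heuristic)."""
    deadline = time.monotonic() + budget_seconds
    for description, plausibility in sorted(scenarios, key=lambda s: s[1], reverse=True):
        if time.monotonic() >= deadline:
            print("Time's up -- stopping here.")
            break
        print(f"Testing ({plausibility:.0%} plausible): {description}")
        # ...execute the actual test here...

run_tests(scenarios, budget_seconds=60 * 60)  # e.g., a one-hour test session
```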

I often hear contradictory voices in my head saying, “Don’t start with confirmatory tests; the bugs are off the beaten path.”  Okay, but are they really?  If our definition of a bug is “something that bugs someone who matters”, then the problems I find at the bottom of the chart’s line may matter less than those found at the top.  Someone who matters may not venture to the bottom.

For more on my thoughts (and contrary thoughts) on this position, see We Test To Find Out If Software *Can* Work.


