If you made two lists for a given software feature (or user story):

  1. all the plausible user scenarios you could think of
  2. all the implausible user scenarios you could think of

…which list would be longer?

I’m going to say the latter.  The user launches the product, holds down all the keys on the keyboard for four months, removes all the fonts from their OS, then attempts to save a value at the exact same time as one million other users.  One can dream up implausible user scenarios without any domain knowledge at all.

Plausible scenarios should be easier to predict, by definition.  It may be that only one out of 100 users strays from the “happy path”, in which case our product may have just experienced an implausible scenario.

What does this have to do with testing?  As time becomes dearer, I continue to refine my test approach.  It seems to me the best tests to start with are still confirmatory (some call these “happy path”) tests.  There are fewer of them, which makes it easier to know when to start executing the tests for the scenarios less likely to occur.


[Chart: plausibility of each test (Y axis) plotted against test execution order (X axis), sloping down from near 100% toward 0%]

The chart above is my attempt to illustrate the test approach model I have in my head.  The Y axis is how plausible the test is (e.g., it is 100% likely that users will do this, it is 50% likely that users will do this).  The X axis represents the test order (e.g., 1st test executed, 2nd test executed, etc.).  The number of tests executed is relative.

Basically, I start with the most plausible tests, then shift my focus to the stuff that will rarely happen.  The rare scenarios at the bottom of the chart above could continue forever as plausibility approaches 0%, so I generally use the “Times Up” stopping heuristic.  One can better tackle testing challenges with this model by making an effort to determine how users normally use the product.
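The ordering described above can be sketched in code.  This is a minimal, hypothetical illustration (the test names, plausibility scores, and `run_tests` helper are mine, not from the post): sort tests by an estimated plausibility score, execute from most to least plausible, and stop when the time budget expires.

```python
import time

def run_tests(tests, time_budget_seconds):
    """Execute tests in descending plausibility order until time runs out
    (the "Times Up" stopping heuristic)."""
    ordered = sorted(tests, key=lambda t: t["plausibility"], reverse=True)
    deadline = time.monotonic() + time_budget_seconds
    executed = []
    for test in ordered:
        if time.monotonic() >= deadline:
            break  # Times Up: stop, however many rare scenarios remain
        test["run"]()
        executed.append(test["name"])
    return executed

# Hypothetical suite; plausibility estimates come from domain knowledge.
tests = [
    {"name": "save with 1M concurrent users", "plausibility": 0.001, "run": lambda: None},
    {"name": "open and save a file",          "plausibility": 0.99,  "run": lambda: None},
    {"name": "save with all fonts removed",   "plausibility": 0.01,  "run": lambda: None},
]
print(run_tests(tests, time_budget_seconds=60))
```

In practice the plausibility scores are rough guesses, which is exactly why knowing how users normally use the product matters: it makes the sort order meaningful.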

I often hear contradictory voices in my head saying, “don’t start with confirmatory tests, the bugs are off the beaten path”.  Okay, but are they really?  If our definition of a bug is “something that bugs someone who matters”, then the problems I find at the bottom of the above chart’s line may matter less than those found at the top.  Someone who matters may not venture to the bottom.

For more on my thoughts (and contrary thoughts) on this position, see “We Test To Find Out If Software *Can* Work”.


  1. Unknown said...

    I agree — I think the plausible tests should be executed first... and last, too!

    I've been in situations where, after finding a lot of bugs with not-so-plausible tests and getting them fixed, the 'happy' tests no longer passed because of all the modifications driven by those less plausible tests.

  2. Michael said...

    I agree that plausible tests should be done first, especially in the domain I work in: outsourcing.
    Releases are quite frequent and testing has to be relatively fast, so the most relevant tests, in the areas the customer is most interested in, have to be done first.
    After the most relevant test cases have been exhausted, other, less important test cases should be run so the time is best used to cover all the functionality.

  3. Dan said...

    I think that the execution of plausible and less plausible test cases should also take into consideration the software release cycle.

    In pre-alpha and alpha, the plausible test cases should matter the most; as the move from alpha to beta draws near, testers should start to include less plausible test cases alongside the plausible ones.

    Also, I've encountered situations where a developer didn't fix a bug because, to him, that situation didn't look plausible enough.

  4. Sree said...

    Nice approach to the issue. I agree with your view that we need to concentrate on testing the areas with the highest percentage of possibility when time is running out.
