An import bug escaped into production this week. The root cause analysis took us to the usual place: “If we had more test time, we would have caught it.”

I’ve been down this road so many times that I’m beginning to see things differently. No, even with more test time we probably would not have caught it. This bug would only have been caught by a rigorous end-to-end test, one that arguably would have cost several times more to run than this showstopper production bug will cost to fix.

The end-to-end tests we can reasonably afford include so many fakes (simulating production) that their net just isn’t big enough.
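To illustrate the point (with invented names, not the actual bug from this story), here is a minimal Python sketch of a hypothetical checkout wired through a fake payment gateway. The fake always succeeds, so a test that runs through it can never surface a defect that only the real integration would expose:

    # Hypothetical illustration (invented names, not this post's actual bug):
    # a test wired through a fake can't see defects that live only in the
    # real integration.

    class RealGateway:
        def charge(self, amount_cents, currency):
            # Imagine the real service rejects unsupported currencies,
            # a failure mode the fake below never reproduces.
            if currency not in ("USD", "EUR"):
                raise ValueError(f"unsupported currency: {currency}")
            return {"status": "ok"}

    class FakeGateway:
        def charge(self, amount_cents, currency):
            # The fake happily accepts anything, so the bug slips through.
            return {"status": "ok"}

    def checkout(gateway, amount_cents, currency="GBP"):
        return gateway.charge(amount_cents, currency)

    def test_checkout_with_fake_passes():
        # Passes in CI, and tells us nothing about the real gateway:
        # checkout(RealGateway(), 500) would raise in production.
        assert checkout(FakeGateway(), 500)["status"] == "ok"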

However, I suspect a mental end-to-end walkthrough, without fakes, might have caught the bug. And possibly, attention to the “follow-through” alone would have been sufficient. “Follow-through” is a term I first heard Microsoft’s famous tester, Michael Hunter, use: it is what might happen next, given the end state of a test you just performed.

Let’s unpack that. Pick any test; say you test a feature that allows a user to add a product to an online store. You test the hell out of it until you reach a stopping point. What’s the follow-on test? It is to see what can happen to that product once it is in the store: you can buy it, delete it, let it go stale, discount it, and so on. I’m thinking nearly every test has several follow-on tests.
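To make that concrete, here is a minimal sketch in Python, pytest-style. The Store class and its methods are invented for illustration, not taken from any real system; the point is the shape: every follow-on test starts from the end state the original “add product” test left behind.

    # Minimal, hypothetical sketch of follow-on tests. Store and its
    # methods are invented for illustration only.
    class Store:
        def __init__(self):
            self.products = {}

        def add_product(self, sku, price):
            self.products[sku] = {"price": price, "stock": 1}

        def buy(self, sku):
            self.products[sku]["stock"] -= 1

        def discount(self, sku, percent):
            self.products[sku]["price"] *= 1 - percent / 100

        def delete_product(self, sku):
            del self.products[sku]

    def store_with_product():
        # The end state of the original test: a product has been added.
        store = Store()
        store.add_product("sku-1", price=10.0)
        return store

    # Each follow-on test picks up where the "add product" test stopped.
    def test_follow_on_buy():
        store = store_with_product()
        store.buy("sku-1")
        assert store.products["sku-1"]["stock"] == 0

    def test_follow_on_discount():
        store = store_with_product()
        store.discount("sku-1", percent=50)
        assert store.products["sku-1"]["price"] == 5.0

    def test_follow_on_delete():
        store = store_with_product()
        store.delete_product("sku-1")
        assert "sku-1" not in store.products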

3 comments:

  1. lucian said...

    Using this follow-through mental model is how we get to exploratory testing. What is exploratory testing other than a series of follow-through test sessions, carried out while keeping in mind the target condition of the system or the target end state of the functionality journey?

    What I take from your opening story is that some exploratory testing, or more of it, might have been needed, increasing the probability of finding that defect.

  2. Pop! said...

    Hi Eric, very interesting thoughts. Thanks.
    You and your readers might find this a useful way to implement what you're saying - GTP (Graphical Test Planning) is a methodology developed to extract the thinking around test cases and capture it.
    It is carried out at any time - right from the start of the project to the end - whenever it is most convenient. The result is an Observable Behaviour Model in the form of a diagram (SRD).
    This identifies potential test cases from every known behaviour (parent, child, and so on), found through hands-on testing, discussion, docs, or whatever method is available.
    The tester is then free to execute or script any of the tests they choose.
    The key is that it requires thinking about what a system should do as well as what it does - the 'follow-through', as you put it.
    A bonus side effect is that huge amounts of test planning and coverage data come out of this for free.
    You can find an interview with David Bradley at StarWEST 2015 on this, and more details, at these links:
    http://www.stickyminds.com/interview/graphical-test-planning-method-real-impact-starwest-2015-interview-david-bradley
    https://sites.google.com/site/gtpfortest/

  3. Amandeep Singh said...

    I think many of us already follow the 'follow-through' strategy; I just never thought of giving it a nice name. I always considered checking the impact of the last step on the next one to be part of end-to-end checks.

    For example, if I delete a product from the basket, the next check would be to see whether it actually deleted the product and, either way, where the user navigates to and what the basket looks like.

    Cheers!


