Yes, Michael Bolton is one of my biggest mentors.  And you’ve read a lot of fanboy posts on this blog.  But before I start spewing stuff from my RST notes, I want to post a disagreement I had with Michael Bolton (and RST).  After a 15-minute discussion, he weakened my position.  But I still disagree with this statement:

We don’t test to find out if something works.  We test to find out if it doesn’t work.

Here is a reason I disagree:  Knowing at least one way software can work may be more valuable than knowing a thousand ways it can NOT work.

Example: Your product needs to help users cross a river.  Which is more valuable to your users? 

  • “hey users, if you step on these exact rocks, you have a good chance of successfully crossing the river”
  • “hey users, here are a bunch of ways you can NOT cross the river: jump across, swim past the alligators, use the old rickety bridge, swing across on a vine, drain the river, dig a tunnel under it, etc.”

Users only need it to work one way.  And if it solves a big enough problem, IMO, those users will walk across the rocks.

Sure, finding the problems is important too.  Really important!  But if someone puts a gun to my head and says I only get one test, it’s going to be a happy path test.
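To make the distinction concrete, here is a minimal sketch of that one-test choice. The product and the `cross_river` function are hypothetical stand-ins I'm inventing for illustration, not anything from the post:

```python
# Hypothetical product: a route planner that suggests a way across the river.
def cross_river(method):
    """Toy stand-in for the product under test."""
    return method == "stepping stones"  # only one route actually works

# The one happy-path check: confirms at least one way the product CAN work.
def test_happy_path():
    assert cross_river("stepping stones"), "expected the known-good route to work"

# Does-not-work checks: each rules out one of the thousand ways it can fail.
def test_cannot_swim():
    assert not cross_river("swim past the alligators")

def test_cannot_use_rickety_bridge():
    assert not cross_river("old rickety bridge")

test_happy_path()
test_cannot_swim()
test_cannot_use_rickety_bridge()
print("all checks passed")
```

With a gun to my head, `test_happy_path` is the one I'd keep: it is the only check that tells users a route exists at all.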

Bolton referred us to the following discussion between James Bach and Michael Kelly (then click on “Is there a problem here?”).  I thought it would change my mind, as most James Bach lessons do.  It hasn’t…yet.

I might be wrong.


  1. Danil said...

    Broken link? It might be that Michael Kelly isn't fond of deep linking his presentations.

  2. Aleksis Tulonen said...

    Personally, I have thought that test automation is a good way to test whether something works (happy path / requirements) and, for example, exploratory testing for whether it doesn't. In most cases.

    I don't understand, though, why we need to make this a black-and-white situation. Don't we test both whether it works (happy path) and whether it doesn't? Of course, if we get the gun to our head, then it becomes black and white, but otherwise...

    It would be great to hear Michael open up this subject himself.


    Aleksis Tulonen

  3. John Stevenson said...

    I used to have the same thoughts as you, Eric, that testing was about 'proving' the software can work.

    The problem is that we as testers can only prove it works under the following criteria:

    at that precise time
    with that precise data
    following exactly the same steps

    Now if I use the example you gave about the rocks and crossing the river. What happens if my foot is 1mm too far to the left or right? Would it now fail?

    What happens if I come back one year later and try again but this time the rocks are covered in slime. Would I be able to prove that I can still cross the river then?

    Yes, as testers we can say that at that time, with that data, and with me as the user, it works. What I cannot do as a tester, with any guarantee, is say it will work again in the exact same way.

    What we can do very easily is say that in these situations it does not work. It is far easier for testers to prove it does not work than to try to prove it does work.

  4. Michael Bolton said...

    We don’t test to find out if something works. We test to find out if it doesn’t work.

    At the very least, you're forgetting "usually": Usually, we don’t test to find out if something works; usually, we test to find out if it doesn’t work.

    You're also, I believe, picking one of two very different senses of "doesn't work". The sense you're considering, I think, involves the ways in which we might deliberately make a task impossible: irrelevant testing. There's another sense of "doesn't (or might not) work", though: the sense in which even though the product fulfills some requirement or quality criterion to some degree, it makes life unpleasant or unsatisfactory for the user.

    Most of the time, we test with the goal of revealing information, and most of the time that information is about problems and risks. We might test to find out if something CAN work, but most of the time, that's not news. Most of the time, either we know that already or we discover it in early phases of testing. I'd argue that most of the time we hear, "Well, it can work" it comes as a feeble apology from a programmer or an unfortunate tech support person.

    Look at this list of possible reasons to test (it's from Cem Kaner):

    - Find important bugs, to get them fixed
    - Assess the quality of the product
    - Help managers make release decisions
    - Block premature product releases
    - Help predict and control product support costs
    - Check interoperability with other products
    - Find safe scenarios for use of the product
    - Assess conformance to specifications
    - Certify the product meets a particular standard
    - Ensure the testing process meets accountability standards
    - Minimize the risk of safety related lawsuits
    - Help clients improve product quality & testability
    - Help clients improve their processes
    - Evaluate the product for a third party

    How many of those items can be described as "knowing at least one way software can work?"
    It seems to me that "Find safe scenarios for use of the product", or finding a workaround for known problems, is pretty much the only clear instance on that list. Far more often, our task is to identify additional information. We investigate the problems and risks that surround the product and the project, so that the people who make decisions about the product can decide whether it's okay to leave the customer a step or two away from drowning.

    Yes: knowing that the product can work could be very important, and occasionally that might be our mission. But suppose you're getting into a car, or putting money into an investment fund, or being lifted onto an operating table. Just as you do so, the person who tested the product says, "It can work." What's your impression of the organization's knowledge of the product and of the attendant risks? Still eager to climb aboard?

    As for the idea that "users only need it to work one way..." Imagine that you go to Fado one night, and you ask for a pint of Guinness. The waitress brings you a Guinness. It's warm; it's flat; there's a fly in it; and it's in a glass with an enormous, razor-sharp crack in the rim. You begin to object. "Didn't you notice that problem?" She replies, "Hey, Bub... you asked me for a pint of Guinness, and that's what I brought you. You CAN drink it. You only need to drink it one way." Are you happy?

  5. Anonymous said...

    I don't know why the two have to be mutually exclusive.

    I like your point that sometimes it's more cost-effective to ensure a product accomplishes a task than to try to uncover 50 ways in which it cannot.

    Also, I do not like the statement about testing to see if it doesn't work. More often than not, I'm not satisfied in knowing if it passes/fails. Rather, I want to know what happens when it fails.

    Simply checking for a pass/fail is a start, but testing involves much more than that.

    Enjoyed reading your perspective.

  6. Eric Jacobson said...

    Danil, thanks for pointing out the broken link! I corrected it.

  7. Eric Jacobson said...

    John, I agree that we can't "prove" something works. I also agree with your extension of my river-crossing metaphor (e.g., the rocks could get covered in slime). A skilled tester should use safety language when introducing a "working" feature.

    However, I disagree that it's easier to prove it can NOT work than to prove it can work. Using your same rationale, any attempt to "prove" it will not work could also be wrong.

    You could say "swimming across will not work because of the alligators". But maybe next year the alligators have all died because there weren't enough swimmers to eat. Now, something that didn't work suddenly works. You've got me thinking though...

  8. John Stevenson said...

    Thanks for the feedback Eric

    From your answer:

    "However, I disagree that it's easier to prove it can NOT work than to prove it can work"

    You have straight away fallen into the bias that it is easy to prove my theory correct rather than wrong.

    Science is built upon someone coming up with something they think works, and then the whole scientific community reads and tries to repeat what they did to prove them wrong.

    What you experience when you say it works is positive bias.

    This is an exercise I use in workshops to show people that there are far more ways to prove something is wrong than to prove it is right.

    Maybe I should have said - there are many more ways to prove it does not work than there are to prove it does.

  9. Rob Lambert said...

    Interesting post. I'm torn on this idea and it's made me think deeply about it.

    I "think" I'm on the same track as you and have often struggled to understand why we focus so hard on the ways a product doesn't work, but have not yet been able to articulate these thoughts clearly.

    I've experienced many cases where software is released with one (or two) paths through that work, and many that don't. With good documentation, awareness, comms and hand-holding this has not been a problem - especially when the product is fixed and refactored later.

    Yet many comments here have made me think deeper about this and whether or not my thinking is clear on it.

    I believe a lot of this depends on the business and product and how much interaction the business may have with the customer/user.

    Interesting post and I'm thinking deeper. No doubt I'll formulate my thoughts on this at some random time. At which point I'll be back.


  10. Eric Jacobson said...


    Cool experiment. I'm pretty sure I understand confirmation bias. I couldn't follow your connection though.

    My response was probably not clear. I was just trying to point out that we can be just as wrong about our beliefs of what doesn't work as we can about our beliefs of what does work.

    I too can poke holes in my suggestion that crossing said rocks is likely to get you across the river. But that doesn't cancel its value.

    In the end, my user just needs to cross the river. If my user wants to give me more time, I'm happy to fill it by exploring river crossing scenarios that don't solve the problem.

    I so wish we could work through this over a beer tonight, John! It gets more interesting the deeper we go.

  11. Simon Morley said...

    Testing to find if something CAN or CAN'T work is only part of the story.

    If someone put a gun to my head and said I only had limited time, I'd still ask them what the most important information was for them to know. They might need the "happy path" in order to demo some feature and gain future interest and investment.

    I'd still be telling them about all the things I don't know about the product - types of information we don't have (for whatever reason). This is the so-called silent evidence part of the testing story.

    When I hear a tester reporting on a product I usually ask, "tell me about what you don't know about the product" - this will typically generate a range of responses from befuddlement to a broad picture of information we don't know due to scope, time constraints, unforeseen circumstances, etc. (That's useful information for any follow-up coaching of the tester.)

    When your stakeholders have this information then they can think about the release decisions in a better way.

    "Mmm, test effort on different failure modes and recovery has been very low - that's ok for this first drop, OR, no we're flying blind - we need more information on this…."

  12. Aleksis Tulonen said...

    Let's continue with the river metaphor.

    If the dangers on the river (alligators, slime on the rocks, logs, etc.) represent bugs, we are trying to get rid of as many dangers as possible because we can't be certain about the way the user will cross the river.

    Yes, it's good to provide some tips on how we have been able to cross the river, and we can even put signs (help/documents) next to the river to guide them in some direction, but there are always people who don't read signs. Or they may understand the signs in a different way than we thought.

    As I see it, we are talking about the same thing but with different approaches.

    If we are testing to find out if software can work, we are exploring different ways of crossing the river.

    If we are testing to find out if the software doesn't work, we are exploring the dangers of the river and trying to make it as safe as possible, and at the same time this will make the river possible for users to cross.

    Both ways will find some of the dangers, but just with different approaches. And both ways also have the problem of facing the changing circumstances.

    Make any sense?

  13. Eric Jacobson said...


    Yes, I'm with you. I love your "tell me about what you don't know about the product" heuristic. That's brilliant. So which is more valuable to the stakeholder?

    a.) I don't know how to cross the river (but I can tell you about my failed attempts).

    b.) I don't know what problems you might encounter if you fail to step on these rocks (but I was able to cross the river this way).

  14. Eric Jacobson said...


    I love your documentation extension of the metaphor. Maybe we are splitting hairs. Probably. I like the way you explained it. I had forgotten that in order to find that walking-on-these-rocks can work, the tester may have discovered some things that did not work.

  15. Eric Jacobson said...


    You're right about my fear of testers focusing on irrelevant bugs. Okay, so that's not the kind of testing-to-find-out-if-it-doesn't-work you are talking about.

    I think your "usually" injection just finally clicked for me. I may have gotten there quicker if someone had put it to me like this:

    "Okay, Eric, so you start a test session by figuring out if it can work (e.g., crossing the river on these rocks can work). Now you have something valuable to share if the testing is forced to stop right now. But what if you have more time. Then what?"

    My answer would be:

    "Then I would start investigating how it might not work."

    What percentage of my time is normally spent on each? Mostly the latter. Therefore, one could say, "we usually test to find out if it doesn't work" or "most of our time is spent finding out if it doesn't work". Is that what you are trying to say?

    I'm closer but still not seeing the light.

  16. V said...

    Hi Eric

    If my boss asked me that question, "It depends!" would be the perfect answer.

    I still remember the slogan you told us last year: “Yes, I can test it …”

    There are various cultures, business models, products and expectations from different kinds of customers. It’s not realistic to let stakeholders pick one or the other.

    Depending on the resources and schedule I have, I will provide information such as:
    + It works (working software)
    ++ It meets original expectations from developers, managers, marketing and customers
    ++ It does what it is supposed to do (one by one, according to the user manual or SDK / sample code)
    +++ It will NOT harm customers even if they abuse it in CERTAIN ways.
    +++ It DOES harm customers in certain cases. The possibility and better/worse cases are …
    ++++ Plan B to save customers from being hurt by OUR products.

    "To do, or not to do" is the manager’s business; what I need to do is provide valuable information in order of priority.

  17. Eric Jacobson said...

    Thanks for your comments, Vash.

    So cool that you remember the "Yes, I can test it" post. I used that exact phrase in a meeting today and the business analysts seem pretty happy with the outcome and (I assume) my attitude.

  18. James Marcus Bach said...

    In my Rapid Testing class, the precise statement about what we do is worded like this:

    "We discover enough about whether it can work, and how it might not work, to infer whether it will work."

    CAN work is not the same as DOES work.

Copyright 2006| Blogger Templates by GeckoandFly modified and converted to Blogger Beta by Blogcrowds.
No part of the content or the blog may be reproduced without prior written permission.