Another of Robert Sabourin’s STPCon sessions I attended was “Deciding What Not to Test”.

Robert did not give me the magic, simple answer I was hoping for. His main point was that it’s better to not test certain things because you decided not to, rather than because you ran out of time.

I agree, but I’m not convinced this is practical. In order to decide what not to test, one must first spend time enumerating lots of extra tests that will then be thrown out. I don’t like that. If there are an infinite number of tests to execute, when do I stop coming up with them? Instead, I prefer to come up with them as I test, within my given testing time. My stopping point then becomes the clock. I would like to think that, as a good tester, I will come up with the best tests first. Get it?

Maybe “which tests should I NOT execute?” is the wrong question. Maybe the better question is “which tests should I execute now?”.

At any rate, a tester often finds themselves in a situation where they have too many tests to execute in the available time, er…the time they are willing to work. When this situation arises, Robert suggests a few questions to help prioritize:

1.) What is the risk of failure?

2.) What is the consequence of failure?

3.) What is the value of success?

Here are my interpretations of Robert’s three questions:

1.) A tester can answer this. Does this dev usually create bugs in similar features? How complex is this feature? How detailed were the specs, and how likely is it that the correct info was communicated to the dev?

2.) This should be answered from the stakeholder’s perspective, although a good tester can answer it as well. It all comes down to one question: will the company lose money?

3.) This one should also be answered from the stakeholder’s perspective. If this test passes, who cares? Will someone be relieved that the test passed?

So if you answer “high” to any of Robert’s three questions, I would say you had better execute the test.
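To make the rule concrete, Robert’s three questions could be sketched as a rough scoring scheme. This is my own illustration, not anything Robert prescribed: the 1–3 scale, the example tests, and the "any high answer goes to the top tier" rule are all assumptions I'm layering on.

```python
# Hypothetical sketch: rank candidate tests by answers to Robert's three
# questions. The 1-3 scoring scale and example tests are my invention.

def priority(risk, consequence, value):
    """Each argument is 1 (low), 2 (medium), or 3 (high).

    Any 'high' answer forces the test into the must-run tier;
    within a tier, the sum acts as a rough tiebreaker.
    """
    must_run = 3 in (risk, consequence, value)
    return (must_run, risk + consequence + value)

# (risk of failure, consequence of failure, value of success)
tests = {
    "login with expired password": (3, 2, 1),  # complex feature, risky area
    "tooltip text on help icon":   (1, 1, 1),
    "checkout total calculation":  (2, 3, 2),  # company loses money if wrong
}

# Highest priority first; execute from the top until the clock runs out.
ordered = sorted(tests, key=lambda t: priority(*tests[t]), reverse=True)
```

The tuple key means a single "high" answer outranks any combination of lows and mediums, which matches the "if you answer high, you had better execute the test" rule.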

Do you have any better advice on knowing what not to test? If so, please share!


  1. Zachary Fisher said...

    Man, this was a very timely post. I needed to read something to give clarity to test crunch happening right now. Boy. This was a Godsend.

  2. Eric Jacobson said...

    Really? Which part was helpful? When you get out of your crunch please share your tips. Thanks!

  3. Michele Smith said...

    "His main point was, it’s better to not test certain things because you decided not to, rather than because you ran out of time."

    In my imagination: The manager calls me into his office and asks me why the customers are calling about feature X not working correctly.

    I tell him, "I decided not to test that because I wanted to focus my time on feature Y."

    I cannot imagine that being an acceptable answer.

    It is better, to me, that I run out of time. It is possible to go for high-level breadth of the application in testing and not cover the depth of everything. In my opinion, covering the application is better than selecting what to test or what not to test. Developers miss bugs in what they themselves have designed. And I am supposed to determine, based on what the developers have told me, that only areas A, B, and C have fixes or changes in them? I have found bugs in areas of an application where the developers swear they did not change any code. Sometimes bugs move; testers know that.

    The thought of choosing not to test certain things because I decided not to makes me picture my manager dressed up like Dirty Harry, saying, "Do you feel lucky, punk?" And, like Dirty Harry, my manager is a pretty good guy.

    Myself, I would rather run end to end through the application at a high level (breadth) than focus on some areas and not others. And that is what I try to do on every project I am on. I prefer to continue building my testing skills and using the Rapid Testing approach rather than selecting what I will and won't test. Based on what?

    Interesting topic, Eric. Made me have to process the reason why it was not settling well with me.

  4. Michael Bolton said...

    "I would like to think, as a good tester, I will come up with the best tests first."

    I'd like to think that too. But best compared to what?

    Here's a set of heuristics for you to consider; I think they represent some of the essence of Rob's perspective on this (which I share).

    a) Thought is fast and cheap.
    b) A test can be fast and cheap, but usually not as quick as a thought.
    c) A test reveals information that a thought might not.
    d) We can think not only about tests, but about categories of tests, about categories of test techniques, about oracles, about coverage. So we can consider (and optionally accept or reject or prioritize) a whole slew of tests at once.
    e) We can't ever be sure we're right, but if we pause even for a moment to consider some alternatives, we can (fallibly) reduce the chance that we'll miss something important.

    ---Michael B.

  5. Eric Jacobson said...

    Your heuristics argue your point well (except (c)). I guess thinking about possible tests can happen more quickly than I initially thought. Your “best compared to what?” question is what really made me think, though.

    Okay, so my original thinking was:

    A test that results in the most valuable information jumps out for the tester. Execute and repeat the above until some stopping heuristic occurs.

    The revised approach may be:

    A test that results in the most valuable information is determined by the tester after comparing it with tests that result in less valuable information. Execute and repeat the above until some stopping heuristic occurs.
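That revised approach reads like a greedy selection loop, so here is a minimal sketch of it. Everything here is illustrative: the function names, the idea of a numeric "value" judgment, and the time budget as the stopping heuristic are my assumptions, not a real framework or anything Eric specified.

```python
# Hypothetical sketch of the revised approach: repeatedly compare the
# candidate tests on hand, execute the one judged most valuable, and
# stop when a stopping heuristic (here, a simple time budget) fires.

def run_most_valuable_first(candidates, value_of, execute, budget):
    """candidates: test ids; value_of: the tester's judgment of a test's
    value as a number; execute: runs a test and returns its time cost;
    budget: time remaining. All names are illustrative."""
    executed = []
    remaining = list(candidates)
    while remaining and budget > 0:
        # "Determined after comparing it with tests that result in
        # less valuable information":
        best = max(remaining, key=value_of)
        remaining.remove(best)
        budget -= execute(best)
        executed.append(best)
    return executed
```

The comparison happens on every iteration, so new information from an executed test could feed back into `value_of` before the next pick.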

  6. Eric Jacobson said...

    Thanks for sharing! It's sad that we can't tell such things to managers.

    You raise an interesting question: high-level breadth vs. low-level targeted areas. Hmmmm...
