This article will be published in a future edition of the Software Test Professionals Insider – community news.  I didn’t get a chance to write my blog post this week, so I thought I would cheat and publish it on my own blog first.

I will also be interviewed about it on Rich Hand’s live Blog Talk Radio Show on Tuesday, January 31st at 1 PM Eastern time.

My article is below.  If it makes sense to you or bothers you, make sure you tune in to the radio show to ask questions…and leave a comment here, of course.

 

Don’t Test It

As testers, we ask ourselves lots of questions:

  • What is the best test I can execute right now?
  • What is my test approach going to be?
  • Is that a bug?
  • Am I done yet?

But how many of us ask questions like the following?

  • Does this Feature ever need to be tested?
  • Does it need to be tested by me?
  • Who cares if it doesn’t work?

In my opinion, not enough of us ask questions like the three above.  Maybe it’s because we’ve been taught to test everything.  Some of us even have a process that requires every Feature to be stamped “Tested” by someone on the QA team.  We treat testing like a routine factory procedure and sometimes we even take pride in saying...

“I am the tester.  Therefore, everything must be tested...by me...even if a non-tester already tested it...even if I already know it will pass...even if a programmer needs to tell me how to test it...I must test it, no exceptions!”

This type of thinking may be giving testers a bad reputation.  It treats testing as important because of a thoughtless process, rather than as a service that provides the most valuable information to someone.

James Bach came up with the following test execution heuristic:

Basic Heuristic:  “If it exists, I want to test it”

I disagree with that heuristic as it is shown above and as it is often published.  However, I completely agree with the full version James published when he introduced it in his 7/8/2006 blog post:

“If it exists, I want to test it. (The only exception is if I have something more important to do.)”

The second sentence is huge!  Why?  Because often we do have something more important to do, and it’s usually another test!  Unfortunately, importance is not always obvious.  So rather than measuring importance, I like to ask the three questions above and look for things that may not be worth my time to test.  Here are eight examples of what I’m talking about:

  1. Features that don’t go to production -  My team has these every iteration.  These are things like enhancements to error logging tables or audit reports to track production activity.  On Agile teams these fall under the umbrella of Developer User Stories.  The bits literally do not go to production and by their nature cannot directly affect users. 
  2. Patches for critical production problems that can’t get worse - One afternoon our customers called tech support indicating they were on the verge of missing a critical deadline because our product had a blocking bug.  We had one hour to deliver the fix to production.  The programmer had the fix ready quickly and the risk of further breaking production was insignificant because production was currently useless.  Want to be a hero?  Don’t slow things down.  Pass it through to production.  Test it later if you need to.
  3. Cosmetic bug fixes with time-consuming test setup - We fixed a spelling mistake that showed up in a screenshot of a user error message.  The user was unaware of the spelling mistake but we fixed it anyway; quick and easy.  Triggering said error message, however, required about 30 minutes of setup.  Is it worth it?
  4. Straightforward configuration changes - Last year our product began encountering abnormally large production jobs it could not process.  A programmer attempted to fix the problem with an obvious configuration change.  There was no easy way to create a job large enough to cross the threshold in the QA environment.  We made the configuration change in production and the users happily did the testing for us.
  5. Too technical for a non-programmer to test - Testing some functionality requires performing actions while sitting at breakpoints in the code to reproduce race conditions (there’s a sketch of this kind of race right after this list).  Sometimes a tester is no match for the tools and skills of a programmer with intimate knowledge of the product code.  Discuss the tests, but step aside.
  6. Non-tester on loan - If a non-tester on the team is willing to help test, or better yet, wants to help test a certain Feature, take advantage of it.  Share test ideas and ask for test reports.  If you’re satisfied, don’t test it.
  7. No repro steps - Errors are often reported for which nobody can determine the reproduction steps, and occasionally a programmer will take a stab at a fix anyway.  We may want to regression test the updated area, but we won’t prevent the apparent fix from deploying just because we don’t know whether it works.
  8. Inadequate test data or hardware - Let’s face it.  Most of us don’t have as many load-balanced servers in our QA environment as we do in production.  When a valid test requires production resources that aren’t available outside of production, we may not be able to test it.
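
Speaking of example 5, here is a rough sketch of the kind of check-then-act race I’m talking about.  It’s Python, the JobQueue is made up, and a threading.Event stands in for the debugger pause a programmer would create with a breakpoint:

    import threading

    class JobQueue:
        """Toy shared resource with a classic check-then-act race."""
        def __init__(self):
            self.jobs = ["only-job"]

        def pop_if_available(self, breakpoint_hit=None, resume=None):
            if self.jobs:                        # check
                if breakpoint_hit:               # simulate stopping at a breakpoint here
                    breakpoint_hit.set()         # tell the other thread we are frozen
                    resume.wait()                # stay frozen until "continue" is pressed
                return self.jobs.pop()           # act -- the job may be gone by now
            return None

    def worker(queue, breakpoint_hit, resume):
        try:
            print("worker got:", queue.pop_if_available(breakpoint_hit, resume))
        except IndexError:
            print("worker crashed: the job vanished between the check and the pop")

    def reproduce_race():
        queue = JobQueue()
        breakpoint_hit, resume = threading.Event(), threading.Event()

        t = threading.Thread(target=worker, args=(queue, breakpoint_hit, resume))
        t.start()
        breakpoint_hit.wait()                           # worker is now paused mid-operation
        print("main got:", queue.pop_if_available())    # main steals the only job
        resume.set()                                    # "continue" -- the worker now fails
        t.join()

    if __name__ == "__main__":
        reproduce_race()

Without the forced pause you could run this thousands of times and never see the failure, which is exactly why this kind of test belongs with the programmer and the debugger.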

Many of you are probably trying to imagine cases where the items above could result in problems if untested.  I can do that too.   Remember, these are items that may not be worth our time to test.  Weigh them against what else you can do and ask your stakeholders when it’s not obvious.

If you do choose not to test something, it’s important not to mislead.  Here is the approach we use on my team.  During our Feature Reviews, we (testers) say, “we are not going to test this”.  If someone disagrees, we change our minds and test it.  If no one disagrees, we “rubber stamp” it, which means we indicate nothing was tested (on the work item or story) and pass it through so it can proceed to production.  The expression “rubber stamping” comes from the familiar image of an administrative worker stamping stacks of papers without really spending any time on each one.  The rubber stamp is valuable, however.  It tells us something did not slip through the cracks.  Instead, we used our brains and determined our energy was best used elsewhere.

So the next time you find yourself embarking on testing that feels much less important than other testing you could be doing, you may want to consider...not testing it.  In time, your team will grow to respect your decision and benefit from fewer bottlenecks and increased test coverage where you can actually add value.

3 comments:

  1. Adam said...

    Eric,

    Great post as per usual. The points you raise about Not Testing something are very important to doing smart testing.

    When using this approach it's important to tell people EXPLICITLY (yup, that needed to be all caps) about the things you aren't going to test and to tell them OFTEN. I keep a running section in my testing dashboard called "Things we aren't going to test" along with a notes section that I use to record justification/reasoning. This way there are (usually) few surprises when we get near release date and people start asking about test coverage and Quality.

    It's especially tricky to explain not testing something, even if it is on purpose, outside of engineering. I once had to explain to a marketing person that we don't test everything in every release. This person just couldn't get past it no matter how many different ways I tried to explain it.

    When I go down the route of not testing something I try to back it up with as much data as possible. At one company I was lucky enough to have product instrumentation that would send anonymous data on what features/functions customers were using. This was a gold mine for testers as we could clearly defend some of our "not testing" decisions with the customer data samples we had.

    When I don't have the automated collection of that information I proactively go looking for it. I might talk with sales, professional services, or support. I was recently on a project where the OS/DB/webserver combos were bordering on insanity. Nobody was really sure what we tested and, more importantly, what we officially supported. One of the first things I did was put together a matrix of the platforms I heard people claim support for, based on what customers were running, and combined this with what was being tested. Looking at the result I was able to take two OSes and a webserver out of testing because everyone (PM, Dev, Test, Support) was willing to live with the risk/reward. I reduced the testing time, removed the overhead of environment setup and maintenance, and gave everyone a clear picture of supported vs. tested.
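
    Roughly, the matrix was nothing fancy. Just to sketch the idea (the platform names below are made up, not the real ones from that project), it boiled down to two sets and their differences:

        # Hypothetical platform combos: what people claimed customers run vs. what we tested.
        claimed_support = {
            ("Windows 2003", "SQL Server 2005", "IIS"),
            ("Windows 2003", "Oracle 10g", "Apache"),
            ("RHEL 5", "Oracle 10g", "Apache"),
            ("Solaris 10", "Oracle 10g", "WebLogic"),
        }
        tested = {
            ("Windows 2003", "SQL Server 2005", "IIS"),
            ("RHEL 5", "Oracle 10g", "Apache"),
            ("Windows 2000", "SQL Server 2000", "IIS"),
        }

        # Claimed but never tested: either start testing it or stop claiming it.
        for combo in sorted(claimed_support - tested):
            print("claimed but untested:", combo)

        # Tested but nobody claims a customer runs it: a candidate to stop testing.
        for combo in sorted(tested - claimed_support):
            print("tested but unclaimed:", combo)

    Once everyone could see those two lists side by side, the "stop testing these" conversation was a lot easier.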


    Here are some questions that you might find useful to augment your list with. I use them, in addition to yours, to build my model.

    - Has this area of the code changed or been impacted by a change? Is there any reason to expect a problem that would be worth solving?
    - Which customers use this feature?
    - How important is it to them?
    - How important are they to us?
    - What would happen if the feature didn't work at all?
    - What if there was a workaround?


    BTW - I really like your bullet number 2. I ran a whole group of sustaining/escalations/3rd line developers using this premise as the foundation. It took some selling in the organization but it worked (most of the time :)

  2. Eric Jacobson said...

    Thanks for the comments, Adam. It's nice to hear from someone who can relate.

    1.) Great idea on looking at usage stats to help determine what not to test and to help argue why.

    2.) That's pretty bold of you to start knocking out apparently unused platforms from your testing. Lots of testers wouldn't bother. Instead, they would just complain about all the testing they have to do.

    Finally, I like your six questions to ask before testing.

  3. More said...

    As a society we have way too many tests. I think we need to cut down.


