I’ve been testing this darn thing all morning and I haven't found a single bug, or even an issue.  My manager probably thinks I’m not testing well enough.  My other tester colleagues keep finding bugs in their projects.  Maybe I’m not a very good tester.  My next scrum report is going to be lame.  This sucks, man.

Wrong!  It probably doesn’t suck.  Not finding bugs may be a good thing.  Your team may be building stuff that works.  And you get to be the lucky dude who delivers the good news. 

If there is lots of stuff that works and no bugs, you have even more to report than testers who keep finding bugs.  Testers who keep finding bugs are probably executing fewer tests than you, so they know less about their products than you do.  Instead of figuring out what works, they are stuck investigating what doesn’t work.  They’ll still need to figure out what works eventually; it’s just going to take them a while to get there.  And that sucks.

My manager is probably looking at my low bug count metric, thinking I’m not doing anything.  Logging bugs makes me feel like a bad ass.  There must be something I can log…hmmm…I know, I’ll log a bug for this user message; it’s not really worded as well as it could be…it has been like that for the last four years.

No!  No!  No!  That’s gaming the system.  It’s not going to work.  You’re going to get a reputation as a tester who logs trivial bugs.  Your manager is only counting bugs because you’re not giving her anything else.  She just wants to know what you’re doing.  Help your manager.  Show her where to find your test reports, session sheets, or test execution results.  Invite her to your scrum meetings. Tell her how busy you’ve been knocking out tests and how bad ass your entire project team is.

Think about it. 

Reporting what works may be better than reporting trivial bugs.

This article will be published in a future edition of the Software Test Professionals Insider – community news.  I didn’t get a chance to write my blog post this week so I thought I would cheat and publish it on my own blog first.

I will also be interviewed about it on Rich Hand’s live Blog Talk Radio Show on Tuesday, January 31st at 1 PM Eastern time. 

My article is below.  If it makes sense to you or bothers you, make sure you tune in to the radio show to ask questions…and leave a comment here, of course.

 

Don’t Test It

As testers, we ask ourselves lots of questions:

  • What is the best test I can execute right now?
  • What is my test approach going to be?
  • Is that a bug?
  • Am I done yet?

But how many of us ask questions like the following?

  • Does this Feature ever need to be tested?
  • Does it need to be tested by me?
  • Who cares if it doesn’t work?

In my opinion, not enough of us ask questions like the three above.  Maybe it’s because we’ve been taught to test everything.  Some of us even have a process that requires every Feature to be stamped “Tested” by someone on the QA team.  We treat testing like a routine factory procedure and sometimes we even take pride in saying...

“I am the tester.  Therefore, everything must be tested...by me...even if a non-tester already tested it...even if I already know it will pass...even if a programmer needs to tell me how to test it...I must test it, no exceptions!”

This type of thinking may be giving testers a bad reputation.  It makes testing important because a thoughtless process demands it, rather than because testing is a service that provides the most valuable information to someone. 

James Bach came up with the following test execution heuristic:

Basic Heuristic:  “If it exists, I want to test it”

I disagree with that heuristic as it is shown above and as it is often published.  However, I completely agree with the full version James published when he introduced it in his 7/8/2006 blog post:

“If it exists, I want to test it. (The only exception is if I have something more important to do.)”

The second sentence is huge!  Why?  Because often we do have something more important to do, and it’s usually another test!  Unfortunately, importance is not always obvious.  So rather than measuring importance, I like to ask the three questions above and look for things that may not be worth my time to test.  Here are eight examples of what I’m talking about:

  1. Features that don’t go to production -  My team has these every iteration.  These are things like enhancements to error logging tables or audit reports to track production activity.  On Agile teams these fall under the umbrella of Developer User Stories.  The bits literally do not go to production and by their nature cannot directly affect users. 
  2. Patches for critical production problems that can’t get worse - One afternoon our customers called tech support indicating they were on the verge of missing a critical deadline because our product had a blocking bug.  We had one hour to deliver the fix to production.  The programmer had the fix ready quickly and the risk of further breaking production was insignificant because production was currently useless.  Want to be a hero?  Don’t slow things down.  Pass it through to production.  Test it later if you need to.
  3. Cosmetic bug fixes with time-consuming test setup - We fixed a spelling mistake that had shown up in a screen shot of a user error message.  The user was unaware of the spelling mistake but we fixed it anyway; quick and easy.  Triggering said error message required about 30 minutes of setup.  Is it worth it?
  4. Straightforward configuration changes - Last year our product began encountering abnormally large production jobs it could not process.  A programmer attempted to fix the problem with an obvious configuration change.  There was no easy way to create a job large enough to cross the threshold in the QA environment.  We made the configuration change in production and the users happily did the testing for us.
  5. Too technical for a non-programmer to test - Testing some functionality requires performing actions while using breakpoints in the code to reproduce race conditions.  Sometimes a tester is no match for the tools and skills of a programmer with intimate knowledge of the product code.  Discuss the tests but step aside.
  6. Non-tester on loan - If a non-tester on the team is willing to help test, or better yet, wants to help test a certain Feature, take advantage of it.  Share test ideas and ask for test reports.  If you’re satisfied, don’t test it.
  7. No repro steps - Errors sometimes get reported for which nobody can determine the reproduction steps, and occasionally a programmer will take a stab at a fix anyway.  We may want to regression test the updated area, but we won’t prevent the apparent fix from deploying just because we can’t confirm whether it works.
  8. Inadequate test data or hardware - Let’s face it.  Most of us don’t have as many load-balanced servers in our QA environment as we do in production.  When a valid test requires production resources not available outside of production, we may not be able to test it.

Many of you are probably trying to imagine cases where the items above could result in problems if untested.  I can do that too.   Remember, these are items that may not be worth our time to test.  Weigh them against what else you can do and ask your stakeholders when it’s not obvious.

If you do choose not to test something, it’s important not to mislead.  Here is the approach we use on my team.  During our Feature Reviews, we (testers) say, “we are not going to test this”.  If someone disagrees, we change our mind and test it.  If no one disagrees, we “rubber stamp” it, which means we indicate on the work item or story that nothing was tested and pass it through so it can proceed to production.  The expression “rubber stamping” came from the familiar image of an administrative worker rubber stamping stacks of papers without really spending any time on each.  The rubber stamp is valuable, however.  It tells us something did not slip through the cracks.  Instead, we used our brains and determined our energy was best used elsewhere.

So the next time you find yourself embarking on testing that feels much less important than other testing you could be doing, you may want to consider...not testing it.  In time, your team will grow to respect your decision and benefit from fewer bottlenecks and increased test coverage where you can actually add value.

Who tests the automated tests? Well, we know the test automation engineer does.  But per our usual rationale, shouldn’t someone other than the test automation engineer test the automated tests?  After all, they are coded…in some cases by people with less programming experience than the product programmers.

We’re experimenting with manual testers testing automated tests on one of my project teams.  The test automation engineer hands each completed automated test to a manual tester.  The manual tester then executes the automated test.  At this point the manual tester is testing two things at the same time.  They are:

  1. Testing the automated test and
  2. Testing whatever the automated test tests

Or, to be more precise, we can use Michael Bolton-speak and say the tester is:

  1. Testing the automated check and
  2. Checking whatever the automated check checks

Whatever you call it, during this exercise, it’s important to distinguish the above two activities.  If the automated test's execution results in a “Fail”, it doesn’t mean the test of the automated test fails…but it may.  Are you still with me?  An automated test’s execution result of “Fail” may, in fact, mean the automated test is a damn good test.  It may have found a bug in the product under test.  But that determination is completely up to the manual tester who is testing the automated test.  One cannot trust the expected result of an automated test until one has finished testing the automated test.

Thus, the tester of the automated test will need to evaluate the test somehow and declare it to be a good test.  They may be able to do this in several ways (a rough sketch follows the list):

  • Manipulate the product to see if the test both passes and fails under the right conditions.
  • Execute the automated test by itself, then as part of the test suite, to determine if the setup/teardown routines adapt sufficiently.
  • Read the automated test’s code (e.g., does the Assert check the intended observation correctly?).
  • Manually test the same thing the automated test checks.
  • Manipulate the product such that the test cannot evaluate its check. Does the test resolve as “Inconclusive”?
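
To make a couple of those bullets concrete, here is a minimal sketch of the kind of automated check a manual tester might be handed.  It uses Python’s unittest with a made-up calculate_order_total function standing in for the product code; the names and numbers are purely illustrative, not taken from our actual suite.  The tester reads the Assert, runs the check alone and within a suite, and manipulates the product to see whether the check fails, passes, and skips under the right conditions.

    import unittest

    def calculate_order_total(prices, discount=0.0):
        """Hypothetical product code under test, stubbed here so the sketch runs."""
        return round(sum(prices) * (1.0 - discount), 2)

    class OrderTotalCheck(unittest.TestCase):
        """A made-up automated check, the kind a manual tester might be asked to test."""

        def setUp(self):
            # Setup should hold up whether the check runs alone or inside a suite.
            self.prices = [10.00, 4.99, 25.50]

        def test_total_with_discount(self):
            # The Assert is what the tester reads: does it check the intended
            # observation, or would it pass even when the product is wrong?
            self.assertEqual(calculate_order_total(self.prices, discount=0.10), 36.44)

        def test_total_when_data_is_missing(self):
            # When the check cannot evaluate what it is supposed to evaluate,
            # skip it (unittest's closest thing to "Inconclusive") rather than
            # reporting a misleading Pass or Fail.
            if not self.prices:
                self.skipTest("No price data available; result is inconclusive")
            self.assertGreater(calculate_order_total(self.prices), 0)

    if __name__ == "__main__":
        unittest.main()

For example, a tester could deliberately break the discount logic in the product and confirm the first check actually fails, or clear the price data and confirm the second check resolves as skipped rather than passing.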

It would be nice to have the luxury of time and resources to test the automated checks thoroughly, but in the end, I suspect we will have to draw the line somewhere and trust the test automation engineer’s own testing of their check.  In the meantime, we’ll see where this gets us.

Dear project team,

This year I will…

  • target my testing to find you the right information sooner and trust your decision to ship early.
  • not test everything just because it is possible to test.  Instead, I’ll spend my energy where I think my services are most valuable to you.  I’ll tell you what I decide not to test and why.  If you disagree, I will change my plan and test it.
  • consider no longer executing tests that I’m 99.99% sure will pass.
  • not stress out about test deadlines.  When I run out of time I will share that with you; in the past you have either jumped in to help or given me more time.  It typically is not as terrible as I anticipate.  Nevertheless, I promise not to be a slacker because it doesn’t feel nearly as good as being an over-achiever.
  • swallow my pride and ask you questions sooner, rather than hoping I understand later (even though I enjoy self-education via independent exploring and experimenting).
  • pay more attention to the business needs behind the things I test, as they will help me focus my test coverage.
  • look for ways to increase my value to you, like giving impromptu test reports, offering to log your bugs, and testing breadth before depth to at least catch the obvious ones early.
  • learn something about Selenium because everyone keeps talking about it and I feel dumb not knowing much about it.  And who knows, someday I may have an opportunity to test a website for a change.
  • congratulate you on good work and take more interest in your achievements as BAs, Programmers, and CMs.
  • read that Data Warehouse Toolkit book by Kimball that you keep referring to.  I’m sure much of it will be boring but I think it will help me respect your development efforts and determine new test ideas.  It should also increase my Data Warehouse vocabulary.
  • squeeze time out of each day for learning something new about testing because fresh ideas make my job more interesting and make me a better tester.  I will share these test ideas with you for the fun of it.  Who knows, it may lead to something we can use here on a project.
  • stay late to meet deadlines or accommodate production releases…sometimes.  Not often, hopefully.  But I will do my time like others on our team and I will thank you when you work late.  My personal time is important to me, therefore, it must also be important to you.
  • pay attention to my little tester light bulb that occasionally goes off with new thoughts.  I will attempt to blog about these thoughts on my personal blog during non-work time.

How about you?  Any Tester New Year’s Resolutions?


