Eight Things You May Not Need To Test
Posted by Eric Jacobson at Friday, January 20, 2012

This article will be published in a future edition of the Software Test Professionals Insider – community news. I didn’t get a chance to write my blog post this week, so I thought I would cheat and publish it on my own blog first.
I will also be interviewed about it on Rich Hand’s live Blog Talk Radio Show on Tuesday, January 31st at 1 PM Eastern time.
My article is below. If it makes sense to you or bothers you, make sure you tune in to the radio show to ask questions…and leave a comment here, of course.
Don’t Test It
As testers, we ask ourselves lots of questions:
- What is the best test I can execute right now?
- What is my test approach going to be?
- Is that a bug?
- Am I done yet?
But how many of us ask questions like the following?
- Does this Feature ever need to be tested?
- Does it need to be tested by me?
- Who cares if it doesn’t work?
In my opinion, not enough of us ask questions like the three above. Maybe it’s because we’ve been taught to test everything. Some of us even have a process that requires every Feature to be stamped “Tested” by someone on the QA team. We treat testing like a routine factory procedure and sometimes we even take pride in saying...
“I am the tester. Therefore, everything must be tested...by me...even if a non-tester already tested it...even if I already know it will pass...even if a programmer needs to tell me how to test it...I must test it, no exceptions!”
This type of thinking may be giving testers a bad reputation. It makes testing seem important because of a thoughtless process, rather than because it is a service that provides the most valuable information to someone.
James Bach came up with the following test execution heuristic:
Basic Heuristic: “If it exists, I want to test it”
I disagree with that heuristic, as it is shown above and often published. However, I completely agree with the full version James published when he introduced it in his 7/8/2006 blog post:
“If it exists, I want to test it. (The only exception is if I have something more important to do.)”
The second sentence is huge! Why? Because often we do have something more important to do, and it’s usually another test! Unfortunately, importance is not always obvious. So rather than measuring importance, I like to ask the three questions above and look for things that may not be worth my time to test. Here are eight examples of what I’m talking about:
- Features that don’t go to production - My team has these every iteration. These are things like enhancements to error logging tables or audit reports to track production activity. On Agile teams these fall under the umbrella of Developer User Stories. The bits literally do not go to production and by their nature cannot directly affect users.
- Patches for critical production problems that can’t get worse - One afternoon our customers called tech support indicating they were on the verge of missing a critical deadline because our product had a blocking bug. We had one hour to deliver the fix to production. The programmer had the fix ready quickly and the risk of further breaking production was insignificant because production was currently useless. Want to be a hero? Don’t slow things down. Pass it through to production. Test it later if you need to.
- Cosmetic bug fixes with time-consuming test setup - We fixed a spelling mistake that had shown up in a screen shot of a user error message. The user was unaware of the spelling mistake, but we fixed it anyway; quick and easy. Triggering said error message, however, required about 30 minutes of setup. Is it worth it?
- Straightforward configuration changes - Last year our product began encountering abnormally large production jobs it could not process. A programmer attempted to fix the problem with an obvious configuration change. There was no easy way to create a job large enough to cross the threshold in the QA environment. We made the configuration change in production and the users happily did the testing for us.
- Too technical for a non-programmer to test - Testing some functionality requires performing actions while using breakpoints in the code to reproduce race conditions. Sometimes a tester is no match for the tools and skills of a programmer with intimate knowledge of the product code. Discuss the tests but step aside.
- Non-tester on loan - If a non-tester on the team is willing to help test, or better yet, wants to help test a certain Feature, take advantage of it. Share test ideas and ask for test reports. If you’re satisfied, don’t test it.
- No repro steps - Occasionally a programmer will take a stab at fixing something. There are often errors reported for which nobody can determine the reproduction steps. We may want to regression test the updated area, but we won’t prevent the apparent fix from deploying just because we don’t know whether it works.
- Inadequate test data or hardware - Let’s face it. Most of us don’t have as many load balanced servers in our QA environment as we do in production. When a valid test requires production resources not available outside of production, we may not be able to test it.
Many of you are probably trying to imagine cases where the items above could result in problems if untested. I can do that too. Remember, these are items that may not be worth our time to test. Weigh them against what else you can do and ask your stakeholders when it’s not obvious.
If you do choose not to test something, it’s important not to mislead. Here is the approach we use on my team. During our Feature Reviews, we (testers) say, “we are not going to test this”. If someone disagrees, we change our minds and test it. If no one disagrees, we “rubber stamp” it, which means we indicate on the work item or story that nothing was tested, and pass it through so it can proceed to production. The expression “rubber stamping” comes from the familiar image of an administrative worker stamping stacks of papers without really spending any time on each. The rubber stamp is still valuable, however: it tells us something did not slip through the cracks. Instead, we used our brains and determined our energy was best used elsewhere.
So the next time you find yourself embarking on testing that feels much less important than other testing you could be doing, you may want to consider...not testing it. In time, your team will grow to respect your decision and benefit from fewer bottlenecks and increased test coverage where you can actually add value.
Sometimes, the most feasible way to test something is to let it soak in an active test environment for several weeks. Examples:
- No repro steps but general product usage causes data corruption. We think we fixed it. Release the fix to an active test environment, let it soak, and periodically check for data corruption.
- A scheduled job runs every hour to perform some updates on our product. We tested the hourly job; now let’s let it run for two weeks in an active test environment. We expect each hourly run to be successful. (A sketch of that kind of periodic check follows this list.)
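To make the “periodically check” part concrete, here is a minimal sketch of such a check, assuming the hourly job logs each run to a job_runs table. The table name, schema, function name, and sample data are all hypothetical stand-ins for whatever your product actually records:

```python
# A minimal, hypothetical soak-test check: find any hour in the soak window
# that lacks a successful run of the hourly job. The job_runs table and its
# schema are made-up stand-ins, not a real product artifact.
import sqlite3
from datetime import datetime, timedelta

def hours_missing_a_successful_run(conn, since, until):
    """Return each hour in [since, until) with no successful job run."""
    rows = conn.execute(
        "SELECT started_at FROM job_runs WHERE status = 'success'"
    ).fetchall()
    ok_hours = {
        datetime.fromisoformat(r[0]).replace(minute=0, second=0, microsecond=0)
        for r in rows
    }
    missing, hour = [], since.replace(minute=0, second=0, microsecond=0)
    while hour < until:
        if hour not in ok_hours:
            missing.append(hour)
        hour += timedelta(hours=1)
    return missing

# Tiny self-contained demo: two good runs, then a silent gap at 02:00.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE job_runs (started_at TEXT, status TEXT)")
conn.executemany(
    "INSERT INTO job_runs VALUES (?, ?)",
    [("2011-05-02T00:05:00", "success"),
     ("2011-05-02T01:05:00", "success")],
)
print(hours_missing_a_successful_run(
    conn, datetime(2011, 5, 2, 0), datetime(2011, 5, 2, 3)
))  # -> [datetime.datetime(2011, 5, 2, 2, 0)]; that hour needs a look
```

Run something like this once a day during the soak and investigate any gaps it reports. The point is that the check costs minutes, not hours, of tester time.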
Per Google, soak testing involves observing behavior under load for an extended period of time. In my case, the load is normally a handful of human testers, as opposed to a large programmatic load of thousands. Nevertheless, the term is finally catching on within my product teams.
Who cares about the term? I like it because it honestly describes the tester effort, which is very little. It does not mislead the team into thinking testers are spending much time investigating something. It’s almost like not testing. Yet we still plan to observe from time to time and eventually make an assessment of success or failure.
Be sure to over-enunciate the “k” in “soak”. People on my team thought I was saying “soap” test. I’m not sure what a soap test is…but I’m sure it exists too!
Don’t Test It #2 – Programmer-Logged Bugs
Posted by Eric Jacobson at Monday, May 02, 2011

When programmers log bugs, we testers are grateful, of course. But when programmer-logged bugs travel down their normal work flow and fall into our laps to verify, we’re sometimes befuddled…
“Hey! Where are the repro steps? How can I simulate the supplied toolbar container not being found in the collection of merged toolbars?”
I used to insist that every bug fix be tested by a tester. No exceptions! Some of these programmer-logged bugs were so technical that I had to hold the programmer’s hand through my entire test and test it the same way the programmer already had. This is bad because my test would not find out anything new. Later I realized I was not only wasting the programmer’s time, I was also wasting my own, which could have been spent executing other new tests.
Sometimes it’s still good to “waste” time for the sake of understanding, but don’t make it a hard and fast rule for everything. Instead, you may want to do the following:
- Ask the programmer how they fixed it and tested their fix. Does it sound reasonable?
- Ensure the critical regression tests will be run in the patched module before production deployment.
Then rubber stamp the bug and spend your time where you can be more helpful.
Don’t Test It #1 - Crisis In Production
Posted by Eric Jacobson at Friday, March 25, 2011

I find it belittling…the notion that everything must be tested by a tester before it goes to production. It means we test because of a procedure, rather than to provide information that is valuable to somebody.
This morning our customers submitted a large job to one of our software products for processing. The processed solution was too large for our product’s output. So the users called support saying they were dead in the water and on the verge of missing a critical deadline. We had one hour to deliver the fix to production.
The fix itself was the easy part. A parameter needed its value increased. The developer performed said fix, then whipped up a quick programmatic test to ensure the new parameter value would support the users’ large job (a sketch of that kind of test follows the list below). Per our process, the next stop was supposed to be QA. Given the following information, I attempted to bypass QA and release the change straight to production:
- Testers would not be able to generate a large enough job, resembling that in production, in the available time given.
- There was no QA environment mirroring production bits and data at this time. It would have been impossible to stand one up before the one hour deadline.
- The risk of us breaking production by increasing said parameter was insignificant because production was already unusable (i.e., it would be nearly impossible for this patch to make production worse than it already was).
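For illustration, here is a minimal sketch of the kind of quick programmatic test mentioned above. The parameter name, sizes, and process function are hypothetical stand-ins; the actual fix and product code are not shown in this post:

```python
# Hypothetical sketch: verify the increased output-size parameter can
# accommodate a job as large as the one that failed in production.
# MAX_OUTPUT_BYTES and process() are made-up stand-ins for the real
# parameter and product code.
MAX_OUTPUT_BYTES = 64 * 1024 * 1024  # the newly increased parameter value

def process(input_records):
    """Stand-in for the product's processing step: build the output and
    enforce the output-size limit, as the product presumably does."""
    output = b"".join(input_records)
    if len(output) > MAX_OUTPUT_BYTES:
        raise RuntimeError("processed solution too large for output")
    return output

def test_production_sized_job_fits():
    # Model a job at least as large as the one the customers submitted.
    production_sized_job = [b"x" * 1024] * (48 * 1024)  # ~48 MB of output
    assert len(process(production_sized_job)) <= MAX_OUTPUT_BYTES

test_production_sized_job_fits()
print("production-sized job fits within the new output limit")
```

The value of even a throwaway check like this is that it exercises the exact threshold that failed in production, which is more than a tester could have reproduced in the available hour.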
Even with the above considerations, some on the team reacted with horror…”What? No Testing?”. When I mentioned it had been tested by a developer and I was comfortable with said test, the response was still “A tester needs to test it”.
After convincing the process hawks it was not feasible for a tester to test, our next bottleneck was deployment. Some on the team insisted the bits go to a QA environment first, even though it would not be tested. This was to keep the bits in sync across environments. I agree with keeping the bits in sync, but how about worrying about that once we get our users safely through their crisis!
As I watched the email thread explode with process commentary and waited for the fix to jump through the hoops, I also listened to people who were in touch with the users. The users were escalating the severity of their crisis and reminding us of its urgency.
I believe those who insist everything must be tested by a tester do us a disservice by making our job a thoughtless process instead of a sapient service.
