You probably think this is just another post about how important it is to test everything before it goes to production. Nope.

Testers are too often expected to test things they don’t have a chance in hell of testing. Testing for the sake of process or perception provides little to no value and degrades the whole trade of testing.

Sometimes when people ask,

“What are the testers going to do to test it?”, I respond,
“Nothing…we’ll just rubber-stamp it.”

Devs usually laugh because they know the bug risk is ridiculously low or the tester does not have the skills to test anything beyond what the dev already tested. However, other testers, managers, or BAs react with horror.

“Rubber-stamp it? Blasphemy! You’re the tester. EVERYTHING must be tested by you before going to production!”

The term “rubber-stamping” provokes negative reactions because it conjures the mental image of a desk clerk stamping document after document without paying any attention to what is on them…like the tester marking “Verified” on a bug or feature they didn’t really do anything to test. But that’s why I like the term! I’m trying to be honest about the value the tester added…none.

Here are some examples where the rubber-stamper-tester is justified:

  • The tester has inadequate skills to perform the tests but has interviewed someone else on the team (usually the developer) who did test.
  • The test is not feasible to recreate in a non-prod environment (e.g., a complex multi-user scenario, custom PC environments, unknown repro steps).
  • The patch can only be verified in debug mode, using complex timing and coordination requiring breakpoints and other unrealistic tricks. Even devs may skip testing if regression tests pass.
  • Critical functionality is broken in prod and speed becomes paramount, so we decide to release a fix without testing it. We are smart...it is possible that we are smart enough to see the logic error, make the fix, take a deep breath, and release to prod without the overhead of testing. After all, it can’t get any worse in prod, right?

It’s fun to say our job as testers is to be cynical, to not trust anybody. But we shouldn’t abuse that belief and become mere team bottlenecks, either.

Does one require more start-up time than the other?

My department is gearing up for a spike in development this summer. We plan to use temporary contract developers and testers.

IMO, the contractor devs do not need to know nearly as much about our systems as the contractor testers do. The devs will be able to make their updates with limited business domain or system knowledge. However, the testers will need to go way beyond understanding single modules; they will need extensive business domain and system knowledge in order to determine what integration tests to write/execute.

Some on my team have the notion that testers can come in with limited knowledge and be valuable by running simple tests. It scares me so much I would almost consider giving up my entire summer, moving into the office, and doing all the testing myself.

I’m also wondering if contractor testers should be paired with veteran devs and vice versa. If we have contractor testers working with contractor developers, as is the plan, it sure seems like a recipe for disaster.

Have any of you experienced something similar? What advice do you have for me?


I’ve spent a good deal of time underground the last 13 years…literally. One of my favorite weekend activities is caving. New caves are discovered nearly every weekend in the northwest Georgia area and responsible cavers survey these caves, make maps, then submit the data to their state’s speleological survey library.

Cavers are very methodical when it comes to finding virgin passage. Underground ethics specify that cavers survey (with tape, compass, clinometer, and sketchbook) the new passage as they explore it. It is frowned upon to just run through a new cave without performing a proper survey on the way in. Exploring without an initial survey is known as “scooping” or “eye-raping” a cave and it is a sign of an irresponsible caver (sometimes called a “spelunker”).

The responsible caver learns everything about the new cave by surveying as they go. The survey process forces them to examine all features of the passage carefully, often discovering new leads, which otherwise would have been missed. This patient approach keeps the caver fresh with anticipation about the wonderful cave lying ahead.

The irresponsible caver, who runs down virgin passage into the dark unknown, only experiences the obvious way forward. They assume they will backtrack to check for leads. In practice, they may grow fatigued or bored with the cave and never return. They have not collected enough data to qualify the cave with the state survey. They can brag to friends about a deep pit and borehole passage. But they cannot tell other cavers to bring a 250-foot rope because the pit is 295 feet deep, or that the big formation room is a half mile in on the northeast end of a 40-foot-wide dome room. They don’t know which leads have been checked or how likely it is that this cave drains into a nearby cave farther down the mountain. They have no hard facts about the cave; only memories, which fade very quickly.

A tester's approach to new AUT (Application Under Test) software features should be much the same as that of the cavers who survey as they explore. As a tester, my tools are my tests. And yes, for complex scenarios, I like to write the test before I perform it. At times, I want to scoop the application, to find the bugs before the other team members do. But I try to rein myself in. I keep track of what I have checked as I go in. I remember how satisfying it is to present team members with a list of tests performed and their results; so much more satisfying than saying “I’m done testing this”.

If your AUT has logic that checks for null values, make sure removed values get set back to null. If they get set to blank, you may have a nice crunchy bug.
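
Here is a minimal sketch of the kind of null-only validation that lets a blank value sneak through. The submit_order function, the dictionary-style line items, and the field names are all hypothetical, not from any particular AUT:

```python
# Hypothetical order-submission check that only guards against null (None).
def submit_order(line_items):
    """Reject the order only when a line item quantity is missing (null)."""
    for item in line_items:
        if item.get("quantity") is None:   # catches null...
            raise ValueError("Line item is missing a quantity")
    # ...but a quantity that was cleared to "" (blank) sails right through.
    print("Order submitted")

# Clearing the quantity in the UI might write "" to the record instead of null:
submit_order([{"sku": "A-100", "quantity": ""}])   # no error raised
```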

Here are the pseudo-steps for the test case, using a generic order form example. But you can use anything in your AUT that programmatically makes a decision based on whether or not some value exists.

Feature: Orders with missing line item quantities can no longer be submitted.

  1. Create a new line item on an order. Do not specify a quantity. Crack open the DB and find the record with the unpopulated quantity. Let’s assume the quantity has a null value.
  2. Populate the quantity via the AUT.
  3. Clear the quantity via the AUT. Look at the DB again. Does the quantity have a blank or null value?

    Expected Results: Quantity has a null value.

If your dev only validates against null quantity values, your user just ordered nothing...yikes!
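
If you want to automate step 3’s database check, remember that SQL treats NULL and an empty string as two different things. A rough sketch using an in-memory SQLite table, with made-up table and column names, just to show the distinction:

```python
import sqlite3

# Hypothetical order_line table; the real AUT schema will differ.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE order_line (id INTEGER PRIMARY KEY, quantity TEXT)")

# Simulate the AUT "clearing" a quantity two different ways:
conn.execute("INSERT INTO order_line (id, quantity) VALUES (1, NULL)")  # set back to null
conn.execute("INSERT INTO order_line (id, quantity) VALUES (2, '')")    # set to blank

for row_id, qty in conn.execute("SELECT id, quantity FROM order_line"):
    # Python sees NULL as None and blank as ''; only the first meets the expected result.
    status = "PASS (null)" if qty is None else f"FAIL (got {qty!r})"
    print(f"row {row_id}: {status}")
```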


