What if the test documentation is written per the detail level in this post…
Document enough detail such that you can explain the testing you did, in person, up to a year later.
…and the tester who wrote it has since left the company?
Problem #1:
How are new testers going to use the old test documentation for new testing?  They aren’t, and that’s a good thing.

Add test detail as late as possible because things change.  If you agree with that heuristic, you can probably see how a tester leaving the company would not cause problems for new testers.

  • If test documentation was written some time ago, and our system-under-test (SUT) changed, that documentation might be wrong.
  • Let’s suppose things didn’t change.  In that case, it doesn’t matter if the tester left because we don’t have to test anything.
  • What if the SUT didn’t change, but an interfacing system did?  In this case, we may feel comfortable using the old test documentation (in a regression test capacity).  This is the one case where the write-tests-as-late-as-possible heuristic breaks down and the departed author’s documentation still gets used.  But if you agree that a test is something that happens in one’s brain and not a document, wouldn’t we be better off asking our testers to learn the SUT instead of copying someone else’s detailed steps?  Documentation at this level of detail might be a helpful training aid, but it will not allow an unskilled tester to turn off their brain.  Get it?
Problem #2:
How are people going to understand old test documentation if the tester who wrote it must be available to interpret it, and that tester has left?  They won’t understand it in full.  They will probably understand parts of it.  An organization with high tester turnover and lots of audit-type inquiries into past testing may need more than this level of detail.
But consider this: planning all activities around the risk that an employee might leave is expensive.  A major trade-off of writing detailed test documentation is lower-quality testing; documentation time comes out of test time.

James Bach’s Metaphors (and one of Michael Bolton’s)
One of my testers took James Bach’s three-day online Rapid Testing Intensive class.  I poked my head in from time to time; I couldn’t help it.  What struck me was how metaphor after metaphor dripped from Bach’s mouth like poetry.  I’ve heard him speak more times than I can count, but I’ve never heard such a spontaneous panoply of beautiful metaphors.  Michael Bolton, acting as assistant, chimed in periodically with his own.  Here are some from the portions I observed:

  • A tester is like a smoke alarm: their job is to tell people when and where a fire is.  However, they are not responsible for telling people how to evacuate the building or put out the fire.
  • (Michael Bolton) Testers are like scientists.  But scientists have it easy; they only get one build.  Testers get new builds daily, so all bets are off on yesterday’s test results.
  • Buying test tools is like buying a sweater for someone.  The problem is, they feel obligated to wear the sweater, even if it’s not a good fit.
  • Testers need to make a choice: either learn to write code or learn to be charming.  If you’re charming, perhaps you can get a programmer to write code for you.  It’s like having a friend who owns a boat.
  • Deep vs. shallow testing: some testers only do shallow testing.  That is like driving a car with a rattle in the door…“I hear a rattle in the door, but it seems to stay shut when I drive, so…who cares?”
  • Asking a tester how long it will take to test is like being diagnosed with cancer and asking the doctor how long you have to live.
  • Asking a tester how long the testing phase will last is like asking a flight attendant how long the flight attendant service will last.
  • Complaining to the programmers about how bad their code looks is like being a patron at a restaurant and walking back into the kitchen to complain about the food to the chefs.  How do you think they’re going to take it?
  • Too many testers and test managers want to rush to formality (e.g., test scripts, test plans).  It’s like wanting to teleport yourself home from the gym.  Take the stairs!
Thanks, James.  Keep them coming.

Well, that depends on what your clients need.  How do you know what your clients need?  I can think of two ways:

  1. Ask them.  Be careful: in my experience, clients inflate their test documentation needs when asked.  Maybe they’re afraid they’ll insult the tester if they don’t ask for lots of test documentation.  Maybe their intent to review test cases is stronger than their follow-through.
  2. Observe them.  Do they ever ask for test case reviews?  If you are transparent with your test documentation, do your clients ever give you feedback?  Mine don’t (unless I initiate it).

Here is what I ask of my testers:

Document enough detail such that you can explain the testing you did, in person, up to a year later.

Before I came up with the above, I started with this:  The tester should be present to explain their testing.  Otherwise, we risk incorrect information. 

If the tester will be present, why would we sacrifice test time to write details that the tester can easily explain?  In my case, the documentation serves to remind the tester.  When we review it with programmers, BAs, users, other testers, or auditors, or if I review it myself, the tester should always be present to interpret.

What if the tester leaves?

I’ll talk about that in the next post.

Last week, Alex Kell (Atlanta-based tester and my former boss) gave a fun talk at Software Testing Club Atlanta, “The Oracle is Fallible: Recognizing, Understanding, and Evaluating the Assumptions that Testers Make”.

[Image: John William Waterhouse, Consulting the Oracle, 1884]

Here are the highlights from my notes:

  • After showing John William Waterhouse’s famous 1884 painting, Consulting the Oracle (above), of seven priestesses listening to an eighth priestess (playing the Oracle) interpret the word of the gods, Alex asked: “Assumptions, are they bad or good?”
    • We make them because we’re lazy.
    • Sometimes we know we’re making an assumption, sometimes we don’t know.
    • After some discussion and examples of assumptions, we realized we are all constantly making assumptions every waking moment, and decided assumptions can be good or bad.
  • Bad assumptions (or forgetting the Oracle is fallible):
    • “The spec is correct.” – Be careful.  Remember Ron Jeffries’ “Three Cs” (a sketch of turning a confirmation into acceptance tests appears after this list):
      • The Spec (AKA “Card”) is a reminder to have a conversation about something.
      • The Conversation is a discussion of the details that results in confirmations.
      • The Confirmation is acceptance criteria that can be turned into acceptance tests.
    • “They know what they’re doing.” – What if everybody on the team is thinking this…groupthink?
    • “I know what I’m doing.”
    • “The software is working because we haven't seen a (red) failed test.”  (Dennis Stevens says “Every project starts red, and it stays red, until it is green.”)
    • “The model is reality.” – A model is an abstraction.  All decisions based on models are based on assumptions.  You should never be surprised when a model turns out not to reflect reality.  Author Nassim Nicholas Taleb coined the word “platonicity” to describe the human tendency to find patterns in everything.

      Alex gave a nearly literal example of people falling victim to this bad assumption: he told of people on Craigslist (or similar) paying full price for what they thought were actual, drivable cars, only to discover they had purchased scaled-down models.
  • Good assumptions (I loved this and thought it was pretty bold to declare some assumptions good for testers):
    • “The estimates are accurate.” – Take what you did last time.  Use the estimate until it is no longer helpful.
    • “The web service will honor its contract.” – If testers didn’t make this assumption, might they be wasting time testing the wrong thing?
    • There were more good assumptions but I have a gap in my notes.  Maybe Alex will leave a comment with those I missed.
  • Alex talked about J.B. Rainsberger’s “Integrated Tests Are a Scam.”  In other words, if we didn’t make some assumptions, we would have to code tests for the rest of our lives to make a dent in our coverage (the path-counting sketch after this list shows why).
  • Suggestions to deal with assumptions:
    • Be explicit about your assumptions.
    • Use truth tables for complex scenarios (Alex shared one he used for his own testing; a sketch of the technique follows this list).
    • Systems thinking – Testers should be able to explain the whole system.  This cuts down on bad assumptions.
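
To make the Three Cs concrete, here is a minimal sketch in Python of how a confirmation might become automated acceptance tests.  Everything in it is hypothetical: assume a loyalty-discount story whose details were agreed upon in the conversation.

    # Card: "Returning customers get a loyalty discount."
    # Conversation (hypothetical details agreed with the team):
    #   10% off orders over $100 for customers with 3+ prior orders.
    # Confirmation: the acceptance criteria below, expressed as pytest tests.

    def loyalty_discount(order_total: float, prior_orders: int) -> float:
        """Hypothetical SUT function implementing the agreed rule."""
        if prior_orders >= 3 and order_total > 100:
            return round(order_total * 0.10, 2)
        return 0.0

    def test_returning_customer_large_order_gets_discount():
        assert loyalty_discount(order_total=150.00, prior_orders=5) == 15.00

    def test_new_customer_gets_no_discount():
        assert loyalty_discount(order_total=150.00, prior_orders=0) == 0.0

    def test_small_order_gets_no_discount():
        assert loyalty_discount(order_total=99.99, prior_orders=5) == 0.0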
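
On the “Integrated Tests Are a Scam” point, the combinatorial explosion is easy to see with back-of-the-envelope arithmetic.  A sketch, with made-up component and path counts:

    from math import prod

    # Hypothetical workflow: 5 components chained together, each with
    # its own number of internal code paths.
    paths_per_component = [4, 6, 3, 5, 7]

    # Testing each component in isolation: cover each path once.
    isolated_tests = sum(paths_per_component)     # 4+6+3+5+7 = 25

    # Fully integrated testing: every combination of paths is a
    # distinct end-to-end scenario.
    integrated_tests = prod(paths_per_component)  # 4*6*3*5*7 = 2520

    print(f"Isolated tests needed:   {isolated_tests}")
    print(f"Integrated tests needed: {integrated_tests}")

Add a sixth component and the integrated count multiplies again, while the isolated count merely adds.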
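
Finally, since Alex’s actual truth table isn’t in my notes, here is a minimal sketch of the technique itself: enumerate every combination of conditions so no scenario gets silently assumed away.  The login conditions and the business rule are hypothetical.

    from itertools import product

    def expected_outcome(valid_password: bool, account_locked: bool,
                         mfa_enabled: bool) -> str:
        """Hypothetical login rule: a locked account or bad password is
        denied; otherwise MFA users get a challenge and the rest get in."""
        if account_locked or not valid_password:
            return "deny"
        return "mfa_challenge" if mfa_enabled else "allow"

    # Enumerate all 2^3 condition combinations -- each printed row is
    # one scenario to test.
    print(f"{'valid_password':<16}{'account_locked':<16}{'mfa_enabled':<13}outcome")
    for valid, locked, mfa in product([True, False], repeat=3):
        row = f"{valid!s:<16}{locked!s:<16}{mfa!s:<13}"
        print(row + expected_outcome(valid, locked, mfa))

Three conditions give eight rows; the payoff is usually the row or two nobody discussed.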


