One of my tester colleagues and I had an engaging discussion the other day. 

If a test failure is not caused by a problem in the system-under-test, should the tester bother to say the test failed? 

My position is: No. 

If a test fails but there is no problem with the system-under-test, it seems to me it’s a bad test.  Fix the test or ignore the results.  Explaining that a test failure is nothing to be concerned about gives the project team a net gain of nothing.  (Note: If the failure has already been published, my position changes; the failure should be explained.)

The context of our discussion was the test automation space. I think test automaters, for some reason, feel compelled to announce automated check failures in one breath, and in the next, explain why these failures should not matter.  “Two automated checks failed…but it’s because the data was not as expected, so I’m not concerned” or “ten automated checks are still failing but it’s because something in the system-under-test changed and the automated checks broke…so I’m not concerned”. 

My guess is that project teams and stakeholders don’t care whether tests passed or failed.  They care about what those passes and failures reveal about the system-under-test.  See the difference?

Did the investigation of the failed test reveal anything interesting about the system-under-test?  If so, share what it revealed.  The fact that the investigation was triggered by a bad test is not interesting.

If we’re not careful, Test Automation can warp our behavior. IMO, a good way of understanding how to behave in the test automation space is to pretend your automated checks are sapient (AKA “manual”) tests.  If a sapient tester gets different results than they expected, but later realizes their expectations were wrong, they don’t bother to explain their recent revelation to the project team.  A sapient tester would not say, “I thought I found a problem, but then I realized I didn’t.”  Would that help anyone?

Should testers write source code?  My System 1 thinking says “no”.  I’ve often heard that separation of duties is what makes testers valuable.

Let’s explore this.

A programmer and a tester are both working on a feature requiring a complex data pull.  The tester knows SQL and the business data better than the programmer.

If Testers Write Source Code:

The tester writes the query and hands it to the programmer.  Two weeks later, as part of the “testing phase”, the tester tests the query (which they wrote themselves) and finds 0 bugs.  Is anything dysfunctional about that?

If Testers do NOT Write Source Code:

The programmer struggles but manages to cobble some SQL together.  In parallel, the tester writes their own SQL and puts it in an automated check.  During the “testing phase”, the tester compares the results of their SQL with the results of the programmer’s and finds 10 bugs.  Is anything dysfunctional about that?
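
To picture that second scenario, here is a rough sketch (Python with an in-memory SQLite database) of the kind of automated check the tester might write.  The table, the data, and both queries are invented for illustration; the point is simply that any difference between the two result sets is worth investigating.

    # Hypothetical example: diff the tester's SQL against the programmer's SQL.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE orders (id INTEGER, customer TEXT, status TEXT, total REAL);
        INSERT INTO orders VALUES
            (1, 'Acme',   'shipped',  100.0),
            (2, 'Acme',   'canceled',  50.0),
            (3, 'Zenith', 'shipped',   75.0);
    """)

    # The tester's query knows the business rule: canceled orders don't count.
    testers_sql = (
        "SELECT customer, SUM(total) FROM orders "
        "WHERE status <> 'canceled' GROUP BY customer"
    )

    # The programmer's query (in this made-up example) misses that rule.
    programmers_sql = "SELECT customer, SUM(total) FROM orders GROUP BY customer"

    expected = set(conn.execute(testers_sql).fetchall())
    actual = set(conn.execute(programmers_sql).fetchall())

    # The "check": any mismatch between the two result sets gets reported.
    if expected != actual:
        print("Mismatch found:")
        print("  only in tester's results:    ", expected - actual)
        print("  only in programmer's results:", actual - expected)
    else:
        print("Result sets match.")

In this made-up example the programmer’s query forgets to exclude canceled orders, so the check flags the Acme totals.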

After the RST class (see my Four Day With Michael Bolton post), Bolton ran a short critical-thinking-for-testers workshop.  If you get an opportunity to attend one of these at a conference or elsewhere, it’s time well spent.  The exercises were great, but I won’t blog about them because I don’t want to give them away.  Here is what I found in my notes…

  • There are two types of thinking:
    1. System 1 Thinking – You use it all the time to produce quick answers.  It works fine as long as things are not complex.
    2. System 2 Thinking – This thinking is lazy; you have to wake it up.
  • If you want to be excellent at testing, you need to use System 2 Thinking.  Testing is not a straightforward technical problem because we are creating stuff that is largely invisible.
  • Don’t plan or execute tests until you obtain context about the test mission.
  • Leaping to assumptions carries risk.  Don’t build a network of assumptions.
  • Avoid assumptions when:
    • critical things depend on the assumption
    • the assumption is unlikely to be true
    • the assumption is dangerous if left undeclared
  • Huh?  Really?  So?   (James Bach’s critical thinking heuristic)
    • Huh? – Do I really understand?
    • Really? – How do I know what you say is true?
    • So? – Is that the only solution?
  • “Rule of Three” – If you haven't thought of at least three plausible explanations, you’re not thinking critically enough.
  • Verbal Heuristics: Words to help you think critically and/or dig up hidden assumptions.
  • Mary Had a Little Lamb Heuristic – emphasize each word in that phrase and see where it takes you.
  • Change “the” to “a” Heuristic:
    • “the killer bug” vs. “a killer bug”
    • “the deadline” vs. “a deadline”
  • “Unless” Heuristic:  I’m done testing unless…you have other ideas
  • “Except” Heuristic:  Every test must have expected results except those we have no idea what to expect from.
  • “So Far” Heuristic:  I’m not aware of any problems…so far
  • “Yet” Heuristic: Repeatable tests are fundamentally more valuable, yet they never seem to find bugs.
  • “Compared to what?” Heuristic: Repeatable tests are fundamentally more valuable…compared to what?
  • A tester’s job is to preserve uncertainty when everyone around us is certain.
  • “Safety Language” is a precise way of speaking which differentiates between observation and inference.  Safety Language is a strong trigger for critical thinking.
    • “You may be right” is a great way to end an argument.
    • “It seems to me” is a great way to begin an observation.
    • Instead of “you should do this” try “you may want to do this”.
    • Instead of “it works” try “it meets the requirements to some degree”.
    • All the verbal heuristics above can help us speak precisely.

See Part 1 for intro.

  • People don’t make decisions based on numbers; they make decisions based on feelings (about numbers).
  • Asking for ROI numbers for test automation or social media infrastructure does not make sense because those are not investments; they are expenses.  The value of an automation tool is not quantifiable.  It does not replace a test a human can perform.  It is not even a test.  It is a “check”.
  • Many people say they want a “metric” when what they really want is a “measurement”.  A “metric” allows you to stick a number on an observation.  A “measurement”, per Jerry Weinberg, is anything that allows us to make observations we can rely on.  A measurement is about evaluating the difference between what we have and what we think we have.
  • If someone asks for a metric, you may want to ask them what type of information they want to know (instead of providing them with a metric).
  • When something is presented as a “problem for testing”, try reframing it to “a problem testing can solve”.
  • Requirements are not a thing.  Requirements are not the same as a requirements document.  Requirements are an abstract construct.  It is okay to say the requirements document is in conflict with the requirements.  Don’t ever say “the requirements are incomplete”.  Requirements are not something that can be incomplete.  Requirements are complete before you even know they exist, before anyone attempts to write a requirements document.
  • Skilled testers can accelerate development by revealing requirements.  Who cares what the requirements document says?
  • When testing, don’t get hung up on “completeness”.  Settle for adequate.  Same for requirements documents.  Example: Does your employee manual say “wear pants to work”?  Do you know how to get to your kid’s school without knowing the address?
  • Session-Based Test Management (SBTM) emphasizes conversation over documentation.  It’s better to know where your kid’s school is than to know the address.
  • SBTM requires 4 things (a sketch of how these might be recorded follows this list):
    • Charter
    • Time-boxed test session
    • Reviewable results
    • Debrief
  • The purpose of a program is to provide value to people.  Maybe testing is more than checking.
  • Quality is more than the absence of bugs.
  • Don’t tell testers to “make sure it works”.  Tell them to “find out where it won’t work.”  (Yikes, that does rub against the grain of my We Test To Find Out If Software *Can* Work post, but I still believe both.)
  • Maybe when something goes wrong in production, it’s not the beginning of a crisis, it’s the end of an illusion.
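
Here is one way those four SBTM ingredients might be captured as a simple session record.  This is only a sketch; the field names and example values are my own invention, not an official SBTM format.

    # Hypothetical example: the four SBTM ingredients as a small data structure.
    from dataclasses import dataclass, field

    @dataclass
    class TestSession:
        charter: str                    # the mission guiding this session
        timebox_minutes: int            # the time-boxed test session, e.g. 60 or 90
        notes: list = field(default_factory=list)   # reviewable results: observations, bugs, issues
        debriefed: bool = False         # whether the debrief with the test lead has happened

    session = TestSession(
        charter="Explore the order-import feature for data-loss risks",
        timebox_minutes=90,
    )
    session.notes.append("BUG: canceled orders are included in customer totals")
    session.debriefed = True
    print(session)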


After a third production failure, the team lead’s passive-aggressive tendencies became apparent in his bug report title.  Can you blame him?

It all depends on context, of course.  But if three attempts to get something working in production still fail…there may be a larger problem somewhere. 

That got me thinking.  Maybe we should add passive-aggressive suffixes to all our “escapes” (bugs not caught in test).  It would serve to embarrass us and remind us that we can do better.

  • “…fail I” would not be so bad.
  • “…fail II” would be embarrassing.
  • “…fail III” should make us ask for help testing and coding.
  • “…fail IV” should make us ask to be transferred to a more suitable project.
  • By “…fail V” we should be taking our users out to lunch.
  • “…fail VI” – I’ve always wanted to be a marine biologist; no time like the present.
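
If anyone actually wanted to automate the shaming, a toy sketch might look like this; the bug title and helper names are made up.

    # Hypothetical example: append the escape-count suffix to a bug title.
    def roman(n: int) -> str:
        # Convert a small positive integer to a Roman numeral.
        numerals = [(10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
        out = ""
        for value, symbol in numerals:
            while n >= value:
                out += symbol
                n -= value
        return out

    def escape_title(title: str, escape_count: int) -> str:
        # Leave the title alone until the bug actually escapes to production.
        return f"{title}...fail {roman(escape_count)}" if escape_count else title

    print(escape_title("Order import drops rows", 3))   # Order import drops rows...fail III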


