“I’m just the tester, if it doesn’t run it’s not my problem, it’s the deployment team’s problem. I can tell you how well it will work, but first you’ve got to deploy it properly.”

One of the most difficult problems to prevent is a configuration problem: a setting that is specific to production.  You can attempt perfect testing in a non-production environment, but as soon as your Config Management guys roll it out to prod with the prod config settings, the best you can do is cross your fingers (unless you’re able to test in prod).

After a recent prod server migration, my config management guys were left scrambling to fix various prod config problems.  We had all tested the deployment scripts in multiple non-prod environments, but that still didn’t prepare us for the real thing.

It’s too late for testers to help now.

I’ve been asking myself what I could have done differently.  The answer seems to be asking (and executing) more hypothetical questions and tests, like:

  • If this scheduled nightly task fails to execute, how will we know?
  • If this scheduled nightly task fails to execute, how will we recover?

But often I skip the above because I’m so focused on:

  • When this scheduled nightly task executes, does it do what it’s supposed to do?

The hypotheticals are difficult to spend time on because we, as testers, feel like we’re not getting credit for them.  We can’t prevent the team from having deployment problems.  But maybe we can ask enough questions to prepare them for the bad ones.
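
For the “how will we know?” question, one cheap answer is a morning check that looks for evidence the nightly task actually ran and complains loudly if it didn’t.  Here’s a minimal sketch in Python, assuming (hypothetically) that the task touches a success marker file when it finishes; the path and freshness threshold are made up for illustration.

```python
import os
import sys
import time

# Hypothetical marker: assume the nightly task touches this file on success.
MARKER_PATH = "/var/log/nightly_task/last_success.marker"
MAX_AGE_HOURS = 26  # a little slack beyond the 24-hour schedule

def nightly_task_ran_recently(marker_path, max_age_hours):
    """Return True if the success marker exists and is fresh enough."""
    if not os.path.exists(marker_path):
        return False
    age_hours = (time.time() - os.path.getmtime(marker_path)) / 3600.0
    return age_hours <= max_age_hours

if __name__ == "__main__":
    if nightly_task_ran_recently(MARKER_PATH, MAX_AGE_HOURS):
        print("OK: nightly task appears to have run.")
        sys.exit(0)
    # In real life this would page someone; printing keeps the sketch simple.
    print("ALERT: no recent success marker -- did the nightly task run?")
    sys.exit(1)
```

Scheduled from cron or a monitoring agent in prod, a check this small turns “how will we know?” from a hypothetical into an answer.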

I figured it was time for a review of some modern testing terms.  Feel free to challenge me if you don’t like my definitions, which are very conversational.  I selected terms I find valuable and stayed away from terms I’m bored with (e.g., “Stress Testing”, “Smoke Testing”). 

Afterwards, you can tell me what I’m missing.  Maybe I’ll update the list.  Here we go…

Tester – Never refer to yourself as QA.  That’s old school.  That’s a sign of an unskilled tester.  By now, we know writing software is different from manufacturing cars.  We know we don’t have the power to “assure” quality.  If your title still has “QA” in it, convince your HR department to change it.  Read this for more.

Sapient Tester – A brain-engaged tester.  It is generally used to describe a skilled tester who focuses on human “testing” but uses machines for “checking”.  See James Bach’s post.

Manual Tester – A brain-dead tester.  Manual testers focus on “checking”.

Test (noun) – Something that can reveal new information.  Something that takes place in one’s brain.  Tests focus on exploration and learning.  See Michael Bolton’s post.

Check – An observation, linked to a decision rule, resulting in a bit (e.g., Pass/Fail, True/False, Yes/No).  Checks focus on confirmation.  A check may be performed by a machine or a human.  Repetition of the same check is best left to a machine, lest the tester become a “Manual Tester”, which is not cool.  See Michael Bolton’s posts, start here.
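
To make the distinction concrete, here’s a minimal sketch of a machine-executed check in Python.  The function names and expected value are hypothetical stand-ins, not anyone’s real API; the point is that an observation is tied to a decision rule and collapses to a single bit.

```python
def get_order_total(order_id):
    """Hypothetical stand-in for an observation made against the SUT."""
    return 42.00  # imagine this queried the application under test

def check_order_total(order_id, expected):
    observed = get_order_total(order_id)  # the observation
    return observed == expected           # the decision rule -> True/False

if __name__ == "__main__":
    print("PASS" if check_order_total(1001, 42.00) else "FAIL")
```

Deciding that 42.00 is the right expectation, and wondering what the total should be for a cancelled order, is testing; it stays with the human.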

Developer – It takes a tester, business analyst, and programmer to develop software, even if they’re just different hats on the same person.  That means if you’re a tester, you’re also a developer.

Programmer – Person on the development team responsible for writing the product code.  They write code that ships.

Prog – Short version of “Programmer”.  See my post.

Test Automation Engineer – A Tester who specializes in writing automated checks.  This is the best term I have so far, but here are the problems I have with it: Test Automation Engineers are also programmers who write code, which makes the term “Programmer” ambiguous.  And a Test Automation Engineer has the word “Test” in their title when, arguably, a test can’t be automated.

Heuristic – A fallible method for solving a problem or making a decision; like a rule of thumb.  It’s fallible, so use it with care.  Why is this term in a tester dictionary?  Skilled testers use heuristics to make quick decisions during testing.  For example, a tester may use a stopping heuristic to know when to stop a test or which test to execute next.  Testers have begun capturing the way they solve problems and creating catchy labels for new heuristics.  Said labels allow testers to share ideas with other testers.  Example: the ‘Just In Time Heuristic’ reminds us to add test detail as late as possible, because things will change.  Example: the ‘Jenga Heuristic’ reminds us that if we remove too many dependencies from a test, it will easily fall down; instead, try removing one dependency at a time to determine the breaking point.

Test Report – Something a team member or manager may ask a tester for.  The team member is asking for a summary of a tester’s findings thus far.  Skilled testers will have a mnemonic like MCOASTER or MORE BATS to enable a quick and thorough response.

Context Driven Testing – An approach to software testing that values context.  Example: when joining a new project, Context Driven testers will ask the team what level of documentation is required, as opposed to just writing a test plan because that is what they have always done.  IMO, Context Driven testers are the innovators when it comes to software testing.  They are the folks challenging us to think differently and adjust our approaches as the IT industry changes.  See Context Driven Testing.

Bug – Something that bugs someone who matters.

Issue – It may result in a bug.  We don’t have enough information to determine that yet.

Escape – A bug found in production.  A bug that has “escaped” the test environment.  Counting “escapes” may be more valuable than counting “bugs”.

Follow-on Bug – A bug resulting from a different bug.  For example: “We don’t need to log a bug report for BugA because it will go away when BugB gets fixed.”  I first heard it used by Michael Hunter (I think).

Safety Language – Skilled testers use it to tell an honest, accurate story of their testing and preserve uncertainty.  Examples: “This appears to meet the requirements to some degree”, “I may be wrong”.  See my post.

Test Idea – Fewer than 140 characters.  Exact steps are not necessary; the essence of a test should be captured.  Each test idea should be unique within its set.  The purpose is to plan a test session without spending too much time on details that may change.  Test Ideas replace test cases on my team.

Test Case Fragment – see “Test Idea”.  I think they are the same thing.

AUT – Application Under Test.  The software testers are paid to test.  See my post and read the comments to see why I like AUT better than competing terms.

Showstopper – An annoying label, usually used to define the priority of bugs.  It is typically overused and results in making everything equally important.  See my post.

Velocity, Magnitude, Story Points – Misunderstood measurements of work on agile development teams.  Misunderstood because Agile consultants do such a poor job of explaining them.  So just use these terms however you want and you will be no worse off than most Agile teams.

Session-Based-Test-Management (SBTM) – A structured approach to Exploratory Testing that helps testers be more accountable.  It involves dividing up test work into time-based charters (i.e., missions), documenting your test session live, and reviewing your findings with a team member.  The Bach brothers came up with this, I think.  Best free SBTM tool, IMO, is Rapid Reporter.

Come on, testers, let’s make up our minds and all agree on one term to refer to the software we are testing.  The variety in use is ridiculous.

I’ve heard the following used by industry experts:

  • PUT (Product Under Test)
  • SUT (System Under Test)
  • AUT (Application Under Test)
  • Product, Software, Application, etc.

Today I declare “SUT” the best term for this purpose! 

Here’s my reasoning: “PUT” could be mistaken for a word, not an acronym.  “AUT” can’t easily be pronounced aloud.  “SUT” could be translated to “Software Under Test” or “System Under Test”, but each honors the intent.  The software we are paid to test is a “Product”…but so are Quick Test Pro, Visual Studio, and SQL Server.

“What’s the big deal with this term?” you ask.  Without said term, we speak ambiguously to our team members because we operate and find bugs in all classes of software:

  • the software we are paid to test
  • the software we write to test the software we are paid to test (automation)
  • the software we write our automation with (e.g., Selenium, Ruby)
  • the software we launch the software we are paid to test from (e.g., Windows 7, iOS)

If we agree to be specific, let’s also agree to use the same term.  Please join me and start using “SUT”.

When bugs escape to production, does your team adjust?

We started using the following model on one of my projects.  It appears to work fairly well.  Every 60 days we meet and review the list of “escapes” (i.e., bugs found in production).  For each escape, we ask the following questions:

  1. Could we do something to catch bugs of this nature?
  2. Is it worth the extra effort?
  3. If so, who will be responsible for said effort?

The answer to #1 is typically “yes”.  Creative people are good at imagining the ultimate test, and it’s especially easy when you already know the bug.  There are some exceptions, though.  Some escapes can only be caught in production (e.g., a portion of our project is developed in production and has no test environment).

The answer to #2 is split between “yes” and “no”.  We may say “yes” if the bug has escaped more than once, significantly impacts users, or when the extra effort is manageable.  We may say “no” when a mechanism is in place to alert our team of the prod error; we can patch some of these escapes before they affect users, with less effort than required to catch them in non-prod environments.

The answer to #3 falls to Testers, Programmers, or BAs, and sometimes to some combination of all three.
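
To keep the 60-day review honest, we could capture each escape as a small structured record instead of relying on memory.  A minimal sketch in Python (the field names and the sample entry are hypothetical):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Escape:
    """One bug found in production, queued for the 60-day review."""
    summary: str
    found_on: str                               # date it surfaced in prod
    could_we_catch_it: Optional[bool] = None    # question 1
    worth_the_effort: Optional[bool] = None     # question 2
    owner: Optional[str] = None                 # question 3
    notes: str = ""

# Hypothetical example entry.
escapes = [
    Escape(
        summary="Nightly import silently skipped rows with empty dates",
        found_on="(date found)",
        could_we_catch_it=True,
        worth_the_effort=False,
        notes="Prod alert already pages us; patching beats pre-prod coverage here.",
    ),
]
```

Nothing fancy; the value is that questions 1 through 3 get an explicit answer for every escape instead of a shrug.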

So…when bugs escape to production, does my team adjust?  Sometimes.

We had a seemingly easy feature to test: users should be able to rearrange columns on a grid.  My test approach was to just start rearranging columns at random.

My colleague’s test approach was different.  She gave herself a nonsensical user scenario to complete.  Her scenario was to rearrange all the columns to appear in alphabetical order (by column header label) from left to right.   Pretty stupid, I thought to myself.  Will users ever do that? No.  And it seems like a repetitive waste of time.

Since I had flat-lined with my own approach, I tried her nonsensical user scenario myself…figured I’d see how stupid it was.  As I worked through the scenario, it started opening test case doors:

  • I’m getting good at this rearranging column thing, maybe I can go faster…wait a minute, what just happened?
  • I’ve done this step so many times, maybe I can pay more attention to other attributes like the mouse cursor…oh, that’s interesting.
  • There’s no confusion about what order I’ve placed the columns in, now I can easily check that they remained in that order.
  • I’m done with letter “E”.  I think I saw a column starting with a letter “F” off the screen on the far right.  I’m going to have to use the horizontal scroll bar to get over there.  What happens when I drag my “F” column from the right to the left and then off the screen?

Now I get it!  The value in her nonsensical user scenario was to surface test cases she may not have otherwise discovered.  And she did.  She found problems placing a column halfway between the left-most and right-most columns.

A nonsensical user scenario gives us a task to go perform on the system under test.  Having this task may open more doors than mere random testing.
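
A side benefit she stumbled on (the third bullet above) is that an alphabetical target order is trivial for a machine to confirm afterwards.  Here’s a hedged sketch in Python with Selenium, assuming a hypothetical grid page whose header cells share a “grid-header” CSS class; the URL and selector are invented, and the exploratory dragging itself stays human.  The script only checks that the order survived a reload.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Hypothetical URL and selector -- adjust for the real SUT.
GRID_URL = "http://sut.example.com/grid"
HEADER_SELECTOR = ".grid-header"

def column_headers(driver):
    """Read the grid's column header labels, left to right."""
    cells = driver.find_elements(By.CSS_SELECTOR, HEADER_SELECTOR)
    return [cell.text.strip() for cell in cells]

def headers_stayed_alphabetical():
    driver = webdriver.Firefox()
    try:
        driver.get(GRID_URL)
        # Assume the columns were already dragged into alphabetical order;
        # reload and confirm the arrangement was persisted.
        driver.refresh()
        headers = column_headers(driver)
        return headers == sorted(headers)
    finally:
        driver.quit()

if __name__ == "__main__":
    print("PASS" if headers_stayed_alphabetical() else "FAIL")
```

The interesting discoveries (the cursor behavior, dragging a column off the visible edge) came from a human working the scenario; the check above just keeps the confirmation part cheap.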


