A tester asked me an interesting question this morning:

“How can I find old test documentation for a completed feature so I can re-use those tests on a similar new feature?”

The answer is easy.  But that’s not what this post is about. 

It seems to me a skilled tester can usually come up with better tests…today, from scratch.  Test documentation gets stale fast.  Here are some reasons:

  • A skilled tester knows more about testing today than they did last month.
  • A skilled tester knows more about the product-under-test today than they did last month.
  • The product-under-test is different today than it was last month.  It might have new code, refactored code, more users, more data, a different reputation, a different platform, a different time of the year, etc.
  • The available time to perform tests might be different.
  • The test environment might be different.
  • The product coder might be different.
  • The stakeholders might be different.
  • The automated regression check suite may be different.

If we agree with the above, we’ll probably get better testing when we tailor it to today’s context.  It’s also way more fun to design new tests and probably quicker (unless we are talking about automation, which I am not).

So digging up old test documentation to determine which tests to run today may be the wrong reason to dig it up.  A good reason is to answer questions about the testing that was performed last month.

While reading Paul Bloom’s The Baby In The Well article in The New Yorker, I noted the Willie Horton effect’s parallel to software testing:

In 1987, Willie Horton, a convicted murderer who had been released on furlough from the Northeastern Correctional Center, in Massachusetts, raped a woman after beating and tying up her fiancĂ©. The furlough program came to be seen as a humiliating mistake on the part of Governor Michael Dukakis, and was used against him by his opponents during his run for President, the following year. Yet the program may have reduced the likelihood of such incidents. In fact, a 1987 report found that the recidivism rate in Massachusetts dropped in the eleven years after the program was introduced, and that convicts who were furloughed before being released were less likely to go on to commit a crime than those who were not. The trouble is that you can’t point to individuals who weren’t raped, assaulted, or killed as a result of the program, just as you can’t point to a specific person whose life was spared because of vaccination.

How well was a given application tested?  Users don’t know what problems the testers saved them from.  The quality may be celebrated to some extent, but one production bug will get all the press.

If you find an escape (i.e., a bug in something marked “Done”), you may want to develop an automated check for it.  In a meeting today, there was a discussion about when the automated check should be developed.  Someone asked, “Should we put a task on the product backlog?”  IMO:

The automated check should be developed when the bug fix is developed.  It should be part of the “Done” criteria for the bug.

Apply the above heuristically.  If your bug gets deferred to a future Sprint, defer the automated check to that future Sprint.  If your bug gets fixed in the current Sprint, develop your automated check in the current Sprint.
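To make that concrete, here is a minimal pytest-style sketch of a regression check written alongside a bug fix.  The bug ID, the calculate_discount() function, and the values are all hypothetical, for illustration only:

```python
# A hypothetical sketch: the regression check developed alongside the bug fix,
# as part of the bug's "Done" criteria. BUG-1234, calculate_discount(), and
# the values below are illustrative only.

def calculate_discount(subtotal: float, coupon: str) -> float:
    """Stand-in for the production code the bug fix touched."""
    if coupon == "SAVE10":
        return round(subtotal * 0.10, 2)  # the fix: round to whole cents
    return 0.0

def test_bug_1234_discount_rounds_to_cents():
    # Escape: a subtotal of 33.33 produced a discount of 3.3330000000000002.
    # This check pins the fixed behavior so the escape cannot quietly return.
    assert calculate_discount(33.33, "SAVE10") == 3.33
```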

If a tree falls in the forest and nobody hears it, does it make a sound?  If you have automated checks and nobody knows it, does it make an impact?

To me, the value of any given suite of automated checks depends on its usage...


[Figure: a spectrum of automated check usage]

FeatureA will be ready to test soon.  You may want to think about how you will test FeatureA.  Let’s call this activity “Test Planning”.  In Test Planning, you are not actually interacting with the product-under-test.  You are thinking about how you might do it.  Your Test Planning might include, but is not limited to, the following:

  • Make a list of test ideas.  A Test Idea is the smallest amount of information that can capture the essence of a test.
  • Grok FeatureA:  Analyze the requirements document.  Talk to available people.
  • Interact with the product-under-test before it includes FeatureA.
  • Prepare the test environment data and configurations you will use to test.
  • Note any specific test data you will use.
  • Determine what testing you will need help with (e.g., testing someone else should do).
  • Determine what not to test.
  • Share your test plan with anyone who might care.  At least share the test ideas (first bullet) with the product programmers while they code.
  • If using automation, design the check(s) and stub them out (see the sketch after this list).
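Here is a minimal sketch of that last bullet: checks designed and stubbed out during Test Planning, before FeatureA is testable.  The test names are hypothetical test ideas, not real requirements:

```python
# A minimal sketch of stubbed-out checks for FeatureA, written during Test
# Planning before the feature exists. The names below are hypothetical.
import pytest

@pytest.mark.skip(reason="stub from Test Planning - FeatureA not built yet")
def test_featurea_rejects_empty_input():
    ...

@pytest.mark.skip(reason="stub from Test Planning - FeatureA not built yet")
def test_featurea_preserves_existing_records():
    ...
```

Each stub captures one test idea in its name; the bodies get filled in once FeatureA arrives.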

All the above are Test Planning activities.  About four of the above resulted in something you wrote down.  If you wrote them in one place, you have an artifact.  The artifact can be thought of as a Test Plan.  As you begin testing (interacting with the product-under-test), I think you can use the Test Plan in one of two ways:

  1. Morph it into “Test Notes” (or “Test Results”).
  2. Refer to it then throw it away.

Either way, we don’t need the Test Plan after the testing, just as we don’t need the other Test Planning artifacts above after the testing.  Plans are more useful before the thing they plan.

Execution is more valuable than a plan.  A goal of a skilled tester is to report on what was learned during testing.  The Test Notes are an excellent way to do this.  Attach the Test Notes to your User Story.  Test Planning is throwaway.

My data warehouse team is adopting automated checking.  Along the way, we are discovering some doubters.  Doubters are a good problem.  They challenge us to make sure automation is appropriate.  In an upcoming meeting, we will try to answer the question in this blog post title.

My short answer:  Yes.

My long answer:  See below.

The following are data warehouse (or database) specific:

  • More suited to machines – Machines are better than humans at examining lots of data quickly. 
  • Not mentally stimulating for humans – (This is the other side of the above reason.)  Manual DB testers are hard to find.  Testers tend to like front-ends, so they gravitate toward app dev teams.  DB testers need technical skills (e.g., DB dev skills), and people who have them prefer to do DB dev work.
  • Straightforward, repeatable automation patterns – For each new dimension table, we normally want the same types of automated checks.  This makes automated checks easier to design and faster to code.  The entire DW automation suite contains fewer design patterns than the average application (see the sketch after this list).
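As an illustration of that repeatable pattern (not our actual suite), here is a minimal pytest sketch that parameterizes the same standard checks over every dimension table.  The table names, key columns, and the sqlite stand-in are assumptions:

```python
# A hypothetical sketch of the repeatable dimension-table pattern: the same
# standard checks run against every dimension table. Table names, key
# columns, and the in-memory sqlite setup are illustrative only.
import sqlite3
import pytest

DIMENSION_TABLES = [
    ("dim_customer", "customer_key"),
    ("dim_product", "product_key"),
]

@pytest.fixture
def db_conn():
    # Stand-in for a real warehouse connection.
    conn = sqlite3.connect(":memory:")
    for table, key in DIMENSION_TABLES:
        conn.execute(f"CREATE TABLE {table} ({key} INTEGER, name TEXT)")
        conn.execute(f"INSERT INTO {table} VALUES (1, 'example')")
    yield conn
    conn.close()

@pytest.mark.parametrize("table,key", DIMENSION_TABLES)
def test_surrogate_key_is_unique(db_conn, table, key):
    # A duplicated surrogate key silently doubles facts in joins.
    dupes = db_conn.execute(
        f"SELECT COUNT(*) FROM (SELECT {key} FROM {table} "
        f"GROUP BY {key} HAVING COUNT(*) > 1)"
    ).fetchone()[0]
    assert dupes == 0

@pytest.mark.parametrize("table,key", DIMENSION_TABLES)
def test_table_is_not_empty(db_conn, table, key):
    rows = db_conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    assert rows > 0
```

Adding the next dimension table to the suite is one line in DIMENSION_TABLES, which is what makes the pattern fast to extend.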

The following are not limited to data warehouse (or database):

  • Time to market – Automated checks help you go faster.  Randy Shoup says it well at 9:55 in this talk.  Writing quick and dirty software leads to technical debt, which leads to no time to do it right (“technical debt vicious cycle”).  Writing automated checks as you write software leads to a solid foundation, which leads to confidence, which leads to faster and better (“virtuous cycle of quality”)…Randy’s words.
  • Regression checking - In general, machines are better than humans at indicating something changed. 
  • Get the most from your human testing - Free the humans to focus on deep testing of new features, not shallow testing of old features.
  • In case the business ever changes their mind – If you ever have to revisit code to make changes or refactor, automated checks will help you do it more quickly.  If you think the business will never change their mind, then maybe automation is not as important.
  • Automated checks help document current functionality.
  • Easier to fix problems – Automated checks triggered by Continuous Integration find problems right after code is checked in.  These problems are usually easier to fix while fresh in a developer’s mind.
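As a sketch of the regression-checking idea above, here is a minimal golden-file check: a machine compares today’s output against a stored baseline and fails on any drift.  summarize_orders() and the baseline path are hypothetical:

```python
# Minimal sketch of a golden-file regression check: compare current output to
# a stored baseline so a machine flags any change. summarize_orders() and the
# baseline path are hypothetical.
import json
from pathlib import Path

BASELINE = Path("baselines/order_summary.json")

def summarize_orders():
    """Stand-in for the warehouse output under regression protection."""
    return {"total_orders": 42, "total_revenue": 1234.56}

def test_order_summary_matches_baseline():
    actual = summarize_orders()
    if not BASELINE.exists():
        # First run: record the current behavior as the baseline.
        BASELINE.parent.mkdir(parents=True, exist_ok=True)
        BASELINE.write_text(json.dumps(actual, indent=2))
    assert actual == json.loads(BASELINE.read_text())  # any drift fails in CI
```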

I read A Context-Driven Approach to Automation in Testing at the gym this morning.  I expected the authors to hate on automation but they didn’t.  Bravo.

They contrasted the (popular) perception that automation is cheap and easy because you don’t have to pay the computer, with the (not so popular) perception that automation requires a skilled human to design, code, maintain, and interpret the results of the automation.  That human also wants a paycheck.


