After watching Elisabeth Hendrickson’s CAST 2012 Keynote (I think), I briefly fell in love with her version of the “checking vs. testing” terminology.  She says “checking vs. exploring” instead. 

I love the simplicity.  I imagine when used in public, most people can follow: “exploring” is a testing activity that can only be performed by humans, “checking” is a testing activity that is best performed by machines.  And the beauty of said terms is…they’re both testing!!!  Yes, automation engineers, all the cool stuff you build can still be called testing.

The thing I’ve always found awkward about the Michael Bolton/James Bach “checking vs. testing” terminology is accepting that tests or testing can NOT be automated.  Hendrickson’s version seems devoid of said awkwardness.  She just says “exploring” can NOT be automated…well sure, much easier to swallow.

The problem, I thought, was that James and Michael’s definition of testing was too narrow.  Surely it could be expanded to include machine checks as testing.  Thus, I set out to find common “Testing” definitions that would support my theory.  And much to my surprise, I could not.  All the definitions I read (e.g., Merriam-Webster’s) described testing as an open-ended investigation…in other words, something that can NOT be automated.

Finally, I have to admit, Hendrickson’s term “exploring” can be ambiguous.  It might get confused with Exploratory Testing, which is a specific, structured approach, as opposed to Ad Hoc testing, which is unstructured.  Hmmm…Elisabeth, if you’re out there, I’m happy to listen to your definitions; perhaps you will change my mind.

So it seems, just when I thought I could finally wiggle away from their painful terminology, I am now squarely back in the James and Michael camp when it comes to “checking vs. testing”.

…Dang!

Per Elisabeth Hendrickson, I’m one of the 80% of test managers looking for testers with programming skills.  And as I sift through tester resumes, attempting to fill two technical positions, I see a problem; testers with programming skills are few and far between!

About 90% of the resumes I’ve seen lately are for testers specialized in manual (sapient) testing of web-based products.  And since most of these resumes are sprinkled with statements like “knowledge of QTP”, I assume most of these testers are doing all their testing via the UI.

And then it hit me…

Maybe the reason so many testers are specialized in manual testing via the UI is because there are so many UI bugs!

This is no scientific analysis by any means.  Just a quick thought about the natural order of things.  But here’s my attempt to answer the question of why there aren’t more testers with programming skills out there.

It may be because they’re too busy finding bugs in the UI layer of their products.

  1. Spend time reporting problems that already exist in production and that users have not asked to be fixed.
  2. Demand all your bugs get fixed, despite the priorities of others.
  3. Keep your test results to yourself until you’re finished testing.
  4. Never consider using test tools.
  5. Attempt to conduct all testing yourself, without asking non-testers for help.
  6. Spend increasingly more time on regression tests each sprint.
  7. Don’t clean up your test environments.
  8. Keep testing the same way you’ve always tested.  Don’t improve your skills.
  9. If you need more time to test it, ask to have it pulled from the sprint; you can test it during the next sprint.
  10. Don’t start testing until your programmer tells you “okay, it’s ready for testing”.

If you made two lists for a given software feature (or user story):

  1. all the plausible user scenarios you could think of
  2. all the implausible user scenarios you could think of

…which list would be longer?

I’m going to say the latter.  The user launches the product, holds down all the keys on the keyboard for four months, removes all the fonts from their OS, then attempts to save a value at the exact same time as one million other users.  One can determine implausible user scenarios without obtaining domain knowledge.

Plausible scenarios should be easier to predict, by definition.  It may be that only one out of 100 users strays from the “happy path”, in which case our product may have just experienced an implausible scenario.

What does this have to do with testing?  As time becomes dearer, I continue to refine my test approach.  It seems to me, the best tests to start with are still confirmatory (some call these “happy path”) tests.  There are fewer of them, which makes it more natural to know when to start executing the tests for the scenarios less likely to occur.

[Chart: test plausibility (Y axis) vs. test execution order (X axis)]

The chart above is my attempt to illustrate the test approach model I have in my head.  The Y axis is how plausible the test is (e.g., it is 100% likely that users will do this, it is 50% likely that users will do this).  The X axis represents the test order (e.g., 1st test executed, 2nd test executed, etc.).  The number of tests executed is relative.

Basically, I start with the most plausible tests, then shift my focus to the stuff that will rarely happen.  These rare scenarios at the bottom of the chart above can continue forever as you move toward 0% plausibility, so I generally use the “Times Up” stopping heuristic.  One can better tackle testing challenges with this model if one makes an effort to determine how users normally use the product.
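As a rough sketch, the model above could be expressed as: sort scenarios by estimated plausibility, execute from the top, and stop when the time box expires.  The scenario names, percentages, and time budget below are all invented for illustration:

```python
import time

# Hypothetical scenarios, each with an estimated plausibility
# (how likely a real user is to hit this path).
scenarios = [
    ("save a record with valid data", 1.00),
    ("save, then immediately edit the record", 0.50),
    ("save with an empty optional field", 0.25),
    ("save while the session is about to expire", 0.05),
    ("save with every Unicode control character", 0.01),
]

def run_session(scenarios, time_budget_seconds):
    """Execute the most plausible tests first; quit when time is up
    (the "Times Up" stopping heuristic)."""
    deadline = time.monotonic() + time_budget_seconds
    executed = []
    # Most plausible first, drifting toward the rare stuff.
    for name, plausibility in sorted(scenarios, key=lambda s: s[1], reverse=True):
        if time.monotonic() >= deadline:
            break  # Times up -- the long tail toward 0% continues forever.
        executed.append(name)  # a real session would run the test here
    return executed

order = run_session(scenarios, time_budget_seconds=5)
```

The point of the sketch is only the ordering and the stopping rule; estimating the plausibility numbers is where the effort to learn how users normally use the product pays off.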

I often hear contradictory voices in my head saying, “don’t start with confirmatory tests, the bugs are off the beaten path”.  Okay, but are they really?  If our definition of a bug is “something that bugs someone who matters”, then the problems I find at the bottom of the above chart’s line may matter less than those found at the top.  Someone who matters may not venture to the bottom.

For more on my thoughts (and contrary thoughts) on this position see We Test To Find Out If Software *Can* Work.

As soon as you hear about a production bug in your product, the first thing you may want to do is volunteer to log it.

Why?

  • Multiple people may attempt to log the bug, which wastes time.  Declare your offer to log it.
  • You’re a tester.  You can write a better bug report than others.
  • It shows a willingness to jump in and assist as early as possible.
  • It assigns the new bug an identifier, which aids conversation (e.g., “We think Bug1029 was created by the fix for Bug1028”).
  • Now the team has a place to gather and document information.
  • Now you are intimately involved in the bug report.  You should be able to grok the bug.

Shouldn’t I wait until I determine firm repro steps?

  • No.  Bug reports can be useful without repro steps.  The benefits above do not depend on repro steps.
  • No.  If you need time to determine repro steps, just declare that in the bug report’s description (e.g., “repro steps not yet known, investigation under way”) and add them later.

But what if the programmer, who noticed the bug, understands it better than me?  Wouldn’t they be in a better position to log the bug?

  • Maybe.  But you’re going to have to understand it sooner or later.  How else can you test it?
  • Wouldn’t you rather have your programmer’s time be spent fixing the code instead of writing a bug report?

When I turned 13 years old, my Dad asked, “What do you want to be when you grow up?”  I already knew the answer.  “A software tester,” I said!

Yeah, right. 

In fact, even in college I wasn’t sure what I wanted to be.  I had enrolled in a new major called “Communication System Management” and was studying to be the guy responsible for company telephone and computer networks.  However, my internship put me to sleep.  All analytics and no people got boring fast.  The job interviews during my senior year were just as boring, despite getting flown around the country on several occasions.

So when a buddy of mine found me a job teaching software, which I had done part-time at Ohio University’s computer lab, I packed my stereo and clothes into my ‘85 Jetta and headed south, from Ohio to Atlanta.  It was good money back then.  People were getting personal computers on their desks and they needed to learn how to use things like…email.  I went on to teach VBScript and AutoCAD, and eventually taught proprietary telephone-office-update software for Lucent Technologies.

As the new versions of the Lucent software rolled out, I trained the users, which put me in a unique position.  I could see firsthand which features the users liked and which they hated.  I was among the first to observe the software’s performance under load and capture the concurrency issues that occurred.

This was in the late ‘90s.  The programmers were doing the “testing” themselves.  But they realized I was getting good at providing feedback before they put their software in front of the users.  To better integrate me into the development team, the programmers asked me to write a piece of working software.  I wrote the team’s personal-time-off (vacation request) software in classic ASP and was officially accepted as part of the development team.  My main responsibility…was quality.

Thus, a software tester was born.  And I’ve been loving it ever since.

How did you become a tester?  What’s your story?

ATDD Sans Automated Tests.  Why Not?

Every time I hear about Acceptance Test Driven Development (ATDD), it’s always implied that the acceptance tests are automated.  What’s up with that?  After all, it’s not called Automated Acceptance Test Driven Development (AATDD).  It seems to me that ATDD without automated tests might be a better option for some teams.

This didn’t occur to me until I had the following conversation with the Agile consultant leading the ATDD discussion lunch table at STAReast 2013.  After a discussion about ATDD tools and several other dependencies of automating acceptance tests, our conversation went something like this:

Agile Consultant: ATDD has several advantages.  While writing the acceptance tests as a team, we better understand the Story.  Then, we’ll run the tests before the product code exists and we’ll expect the tests to fail.  Then we’ll write the product code to make the tests pass.  And one of the main advantages of ATDD is, once the automated acceptance tests pass, the team knows they are “done” with that Story.

Me: Sounds challenging.  Are you saying there must be an automated check written first, for every piece of product code we need?

Agile Consultant:  Pretty much.

Me: Doesn’t that restrict the complexity and creativity of our product?  I mean, what if we come up with something not feasible to test via a machine?  Besides, aren’t there normally some tests better executed by humans, even for simple products?

Agile Consultant:  Yes, of course.  I guess some manual tests could be required, along with automated tests, as part of your “done” definition.

Me: What if all our tests are better executed by a human because our product doesn’t lend itself to automation?  Can we still claim to do ATDD and enjoy its benefits?

Agile Consultant:  …um…I guess so…  (displaying a somewhat disappointed face, as if something does not compute, but maybe she was just thinking I was an annoying nutcase)

And that got me thinking: doesn’t this save us a lot of headaches, pain, and time?  We wouldn’t have to distill our requirements into rigid test scripts with specific data (AKA “automatic checks”).  We wouldn’t have to ask our programmers to write extra test code hooks for us.  We wouldn’t have to maintain a bunch of machine tests that don’t adapt as quickly as our human brains do.
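For contrast, here is a minimal sketch of the kind of automated acceptance check ATDD usually implies: written first, expected to fail until the product code exists.  The Story, function name, and data are all invented for illustration:

```python
# Hypothetical Story: "A cart totals its line items."
# In ATDD this check is written before the product code exists, so the
# first run fails; the team then writes code until it passes.

def total(line_items):
    # Stand-in for the product code that would eventually be written;
    # each line item is a (unit_price, quantity) pair.
    return sum(price * qty for price, qty in line_items)

def test_cart_totals_line_items():
    # Rigid, specific data -- exactly the distillation of the
    # requirement that a human tester could interpret more flexibly.
    assert total([(2.50, 2), (1.00, 3)]) == 8.00

test_cart_totals_line_items()
```

Note how the check pins the Story to one concrete data set; a human running the same acceptance test could vary the items, quantities, and edge cases on the fly.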

 

Let’s call this Human Acceptance Test Driven Development (HATDD).  I’ve stated some of the advantages above.  The only significant disadvantage that stands out is that you don’t get a bunch of automated regression checks.  But it seems to me, ATDD is more about new Feature testing than it is about regression testing anyway.

So why aren’t there more (or any) Agile consultants running around offering HATDD?


