I would much rather test than create test documents.  Adam White once told me, “If you’re not operating the product, you’re not testing”.

It’s soooooo easy to skip all documentation and dive right into the testing.  It normally results in productive testing and nobody misses the documents.  Until…three years later, when the programmer makes a little change to a module that hasn’t been tested since.  The team says the change is high risk and asks you which tests you executed three years ago and how long they took.

Fair questions.  I think we, as testers, should be able to answer them.  Even the most minimal test documentation (e.g., test fragments written in Notepad) should be able to answer those questions.

If we can’t answer relatively quickly, we may want to consider recording better test documentation.

Warning: This is mostly a narcissistic post that will add little value to the testing community.

I’ve been pretty depressed about my proposal not getting picked for Let’s Test 2014.  Each of my proposals has been picked for STPCon and STAR over the past three years; I guess I was getting cocky.  I put all my eggs in one basket and only proposed to Let’s Test.  My wife and I were planning to make a vacation out of it…our first trip to Scandinavia together.

Despite my rejection, my VP graciously offered to send me as an attendee, but I wallowed in my own self-pity and turned her down.  In fact, I decided not to attend any test conferences in 2014.  Pretty bitter, huh?

I know I could have pulled off a kick-ass talk with the fairly original and edgy topic I submitted.  I dropped names.  I got referrals from the right people.  My topic fit the conference theme perfectly, IMO.  So why didn’t I make the cut?

The Let’s Test program chairs have not responded to my request for “what I could have done differently to get picked”.  Lee Copeland, the STAR program chair, was always helpful in that respect.  But I don’t blame the Let’s Test program chairs.  Apparently program chairs have an exhausting job and they get requests for feedback from hundreds of rejected speakers.

Fortunately, my mentor and friend, Michael Bolton, read my proposal and gave me some good, honest feedback on why I didn’t get picked.  He summarized his feedback into three points, which I’ll paraphrase:

  1. A successful pitch to Let’s Test involves positioning your talk right in the strike zone of an experience report.  You seemed to leave out the teensy, weensy little detail that you’re an N-year test manager at Turner, and that you’re telling a story about that here.
  2. Apropos of that, tell us about the story that you’re going to tell.  You’ve got a bunch of points listed out, but they seem disjointed and the through line isn’t clear to me.  For example, what does the second point have to do with the first?  The fourth with the third?
  3. Drop the dopey idea of “learning objectives”, which is far less important at Let’s Test than it may be at other conferences.

Bolton also directed me to his tips on writing a killer conference proposal, which make my How To Speak At a Testing Conference post look amateur at best.

So there it is.  One of my big testing-related failure stories.  Wish me luck next year when I give it another go, for Let’s Test 2015…man, that seems a long way off.

Here’s another failure story, per the post where I complained about people not telling enough test failure stories.

Years ago, after learning about Keyword-Driven Automation, I wrote an automation framework called OKRA (Object Keyword-Driven Repository for Automation).  @Wiggly came up with the name.  Each automated check was written as a separate Excel worksheet, using dynamic dropdowns to select from the available Action and Object keywords.  The driver was written in VBScript via QTP.  It worked, for a little while, however:

  • One Automator (me) could not keep up with 16 programmers.  The checks quickly became too old to matter.  FAIL!
  • An Automator with little formal programming training, writing half-assed VBScript code, could not get help from a team of C#-focused programmers.  FAIL!
  • The product under test was a .Net Winforms app full of important drag-n-drop functionality, sitting on top of constantly changing, time-sensitive, data.  Testability was never considered.  FAIL!
  • OKRA was completely UI-based automation.  FAIL!
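For readers who haven’t seen the pattern, a keyword-driven framework boils down to a table of Action/Object rows dispatched to handler code.  Here’s a minimal Ruby sketch of the idea (the keywords and handlers are invented for illustration; OKRA itself was VBScript driving QTP, reading the rows from Excel worksheets):

```ruby
# Minimal keyword-driven driver sketch.
# Each "row" mimics a spreadsheet line: an Action keyword plus arguments.
KEYWORDS = {
  "OpenApp"    => ->(args) { "opened #{args[:name]}" },
  "Click"      => ->(args) { "clicked #{args[:object]}" },
  "VerifyText" => ->(args) { args[:expected] == args[:actual] ? "pass" : "fail" }
}

# The driver walks the rows and dispatches each to its keyword handler.
def run_check(rows)
  rows.map do |row|
    handler = KEYWORDS.fetch(row[:action]) { raise "Unknown keyword: #{row[:action]}" }
    handler.call(row[:args])
  end
end

results = run_check([
  { action: "OpenApp",    args: { name: "OrderEntry" } },
  { action: "Click",      args: { object: "SubmitButton" } },
  { action: "VerifyText", args: { expected: "Saved", actual: "Saved" } }
])
```

The appeal is that non-programmers only touch the rows; the catch (as the FAILs above show) is that somebody still has to maintain the handlers.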

Later, a product programmer took an interest in developing his own automation framework. It would allow manual testers to write automated checks by building visual workflows.  This was a Microsoft technology called MS Workflow or something like that.  The programmer worked in his spare time over the course of about a year.  It eventually faded into oblivion and was never introduced to testers.  FAIL!

Finally, I hired a real automator, with solid programming skills, and attempted to give it another try.  This time we picked Microsoft’s recently launched CodedUI framework and wrote the tests in C# so the product programmers could collaborate.  I stood in front of my SVP and project team and declared,

“This automation will shave 2 days off our regression test effort each iteration!”


  • The automator was often responsible for writing automated checks for a product they barely understood.  FAIL!
  • Despite the fact that CodedUI was marketed by Microsoft as being the best automation framework for .Net Winform apps, it failed to quickly identify most UI objects, especially for 3rd-party controls.  FAIL!
  • Although, at first, I pushed for significant amounts of automation below the presentation layer, the automator focused more energy on UI automation.  I eventually gave in too.  The tests were slow at best and human testers could not afford to wait.  FAIL!  Note: this was not the automator’s failure; it was my poor direction.

At this point, I’ve given up all efforts to automate this beast of an application.

Can you relate?


Have you ever been to a restaurant with a kitchen window?  Well, sometimes it may be best not to show the customers what the chicken looks like until it is served.

A tester on my team has something similar to a kitchen window for his automated checks; the results are available to the project team.

Here’s the rub: 

His new automated check scenario batches are likely to result in…say, a 10% failure rate (e.g., 17 failed checks out of roughly 170).  These failures are typically bugs in the automated checks, not the product under test.  Note: this project only has one environment at this point.

When a curious product owner looks through the kitchen window and sees 17 failures, it can be scary!  Are these product bugs?  Are these temporary failures?

Here’s how we solved this little problem:

  • Most of the time, we close the curtains.  The tester writes new automated checks in a sandbox, debugs them, then merges them to a public list.
  • When the curtains are open, we are careful to explain, “this chicken is not yet ready to eat”.  We added an “Ignore” attribute to the checks so they can be filtered from sight.
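The filtering half of that fix is simple enough to sketch.  Here it is in a few lines of Ruby (the check names and the flag shape are invented; in our case the mechanism was an “Ignore” attribute on the checks themselves):

```ruby
# Sketch of the "Ignore" idea: each check carries a flag, and the
# results the project team sees exclude anything still being debugged.
Check = Struct.new(:name, :status, :ignore)

checks = [
  Check.new("login works",       "pass", false),
  Check.new("new report totals", "fail", true),  # still in the sandbox
  Check.new("export to csv",     "pass", false)
]

# Close the curtain on ignored checks before publishing results.
public_results = checks.reject(&:ignore)
```

The product owner now sees only checks we’re prepared to stand behind; the half-cooked chicken stays in the kitchen.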

BDD/ATDD is all the rage these days.  The cynic in me took a cheap shot at it here.  But the optimist in me really REALLY thinks it sounds cool.  So I set off to try it….and failed twice.

First Fail:

I’m not involved in many greenfield projects so I attempted to convince my fellow colleagues to try BDD with their greenfield project.  I started with the usual emails, chock full of persuasive BDD links to videos and white papers.  Weeks went by with no response.  Next, we scheduled a meeting so I could pitch the idea to said project team.  To prepare, I read Markus Gartner’s “ATDD By Example” book, took my tester buddy, Alex Kell, out to lunch for an ATDD Q & A, and read a bunch of blog posts.

I opened my big meeting by saying, “You guys have an opportunity to do something extraordinary, something that has not been done in this company.  You can be leaders.”  (It played out nicely in my head beforehand.)  I asked the project team to try BDD, proposed it as a 4-to-6-month pilot, attempted to explain the value it would bring to the team, and suggested roles and responsibilities to start with.

Throughout the meeting I encountered reserved reluctance.  At its low point, the discussion morphed into whether or not the team wanted to bother writing any unit tests (regardless of BDD).  At its high point, the team agreed to do their own research and try BDD on their prototype product.  The team’s tester walked away with my “ATDD By Example” book and I walked away with my fingers crossed.

Weeks later, I was matter-of-factly told by someone loosely connected to said project team, “Oh, they decided not to try BDD because the team is too new and the project is too important”.  It’s that second part that always makes me shake my head.

Second Fail:

By golly I’m going to try it myself!

One of my project teams just started a small web-based spin-off product, a feedback form.  I don’t normally have the luxury of testing web products, and it seemed simple enough, so I set out to try BDD on my own.  I chose SpecFlow and spent several hours setting up all the extensions and NuGet packages I needed for BDD.  I got the sample Gherkin test written and executing, and then my test manager job took over, flinging me all kinds of higher-priority work.  Three weeks later, the feedback form product is approaching code complete and I realize it just passed me by.
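For context, a SpecFlow check starts from a Gherkin feature file something like the one below.  This is a hypothetical reconstruction for a feedback form, not the actual sample test I wrote:

```gherkin
Feature: Feedback form
  Scenario: Visitor submits feedback
    Given I am on the feedback page
    When I enter "Great product" into the comments field
    And I click the Submit button
    Then I should see the message "Thank you for your feedback"
```

Each Given/When/Then line then gets bound to a C# step definition, which is where those “several hours” of setup start paying off…if the project doesn’t pass you by first.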


…are not always the full truth.  Is that hurting our craft? 

Last week, I attended the first Software Testing Club Atlanta Meetup.  It was organized by Claire Moss and graciously hosted by VersionOne.  The format was Lean Coffee, which was perfect for this meeting.


Photo by Claire Moss

I’m not going to blog about the discussion topics themselves.  Instead, I would like to blog about a familiar Testing Story pattern I noticed:

During the first 2 hours, it seemed to me, we were telling each other the testing stories we wanted to believe, the stories we wanted each other to believe.  We had to make first impressions and establish our personal expertise, I guess.  But during the 3rd hour, we started to tell more candid stories, about our testing struggles and dysfunctions.  I started hearing things like, “we know what we should be doing, we just can’t pull it off”.  People who, at first impression, seemed to have it all together, seemed a little less intimidating now.

When we attend conference talks, read blog posts, and socialize professionally, I think we are in a bubble of exaggerated success.  The same thing happens on Facebook, right?  And people fall into a trap: The more one uses Facebook, the more miserable one feels.  I’m probably guilty of spreading exaggerated success on this blog.  I’m sure it’s easier, certainly safer, to leave out the embarrassing bits.

That being said, I am going to post some of my recent testing failure stories on this blog in the near future.  See you soon.

My data warehouse project team is configuring one of our QA environments to be a dynamic read-only copy of production.  I’m salivating as I try to wrap my head around the testing possibilities.

We are taking about 10 transactional databases from one of our QA environments, and replacing them with 10 databases replicated from their production counterparts.  This means, when any of our users perform a transaction in production, said data change will be reflected in our QA environment instantly.

Expected Advantages:

  • Excellent Soak Testing – We’ll be able to deploy a pre-production build of our product to our Prod-replicated-QA-environment and see how it handles actual production data updates.  This is huge because we have been unable to find some bugs until our product builds experience real live usage.
  • Use real live user scenarios to drive tests – We have a suite of automated checks that invoke fake updates in our transactional databases, then expect data warehouse updates within certain time spans.  The checks use fake updates.  Until now.  With the Prod-replicated-QA-environment, we are attempting to programmatically detect real live data updates via logging, and measure those against expected results.
  • Comparing reports – A new flavor of automated checks is now possible.  With the Prod-replicated-QA-environment, we are attempting to use production report results as a golden master to compare to QA report results sitting on the pre-production QA build data warehouse.  Since the data warehouse data to support the reports should be the same, we can expect the report results to match.
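The golden-master comparison can be sketched in a few lines: treat the production report’s rows as the oracle and diff the QA rows against them.  The data and method names below are invented for illustration:

```ruby
# Golden-master sketch: the production report is the oracle; the same
# report on the pre-production QA build should match it row for row.
def compare_reports(golden, candidate)
  missing = golden - candidate    # rows production has that QA lost
  extra   = candidate - golden    # rows QA produced that production didn't
  { match: missing.empty? && extra.empty?, missing: missing, extra: extra }
end

prod_rows = [["East", 120], ["West", 95]]
qa_rows   = [["East", 120], ["West", 95]]
diff = compare_reports(prod_rows, qa_rows)
```

Since both environments sit on the same replicated warehouse data, any `missing` or `extra` row points at the pre-production build, not the data.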

Expected Challenges:

  • The Prod-replicated-QA-environment will be read-only.  This means instead of creating fake user actions whenever we want, we will need to wait until they occur.  What if some don’t occur…within the soak test window?
  • No more data comparing? - Comparing transactional data to data warehouse data has always been a bread-and-butter automated check we’ve performed.  These checks verify data integrity and data loading.  Comparing a real live quickly changing source to a slowly updating target will be difficult at best.

I can’t imagine testing without multiple computers at my disposal.  You may want to hold on to your old, out of warranty computers if given the choice.  Five quick reasons:

  • When Computer#1 hits an impediment such as an unrecoverable error, Computer#2 can start testing immediately as Computer#1 reboots.
  • I can use both computers to simulate interesting multi-user tests.  Example: what if two users attempt to acquire locks at roughly the same time?
  • I can kick off long-running processes, staggered on 3 separate boxes, so as not to sit idle waiting.
  • Different OS’s, browser versions, frameworks, and other software running in the background can be informally tested to narrow down variables.
  • Computer#1 can support the administrative work of testing (e.g., documenting tests, bugs, emailing), while Computer#2 can stay clean and focus on operating the product under test. 

What is the relationship between these two objects?



How about these two?


This, I’m afraid, is how testers (myself included) often see software modules…like black boxes. Their relationships are hidden from us.  We know the programmer just changed something related to the seeds inside the orange, so we ask ourselves, “How could changing the seeds inside the orange affect the toaster?”.  Hmmmm.  “Well, it sure seems like it couldn’t”.  Then, after a deployment, we’re shocked to discover the toaster starts burning all the toast.

Why didn’t the programmer warn us?  Well, just because the programmer understands the innards of the orange, doesn’t mean they understand the innards of the toaster.  In fact, based on my experiences, if there is enough work to do on the orange, the orange programmer will happily take it over learning to code for the toaster.

So here we are, left with nothing better to do than regression test the toaster, the jointer, and the flashlight, every time the orange changes.  No wonder we spend so much time regression testing.

In conclusion, maybe the more we learn about the architecture behind our software system, the fewer regression tests we’ll need to execute.  Instead, we’ll better focus our testing and make the invisible relationships visible for the rest of the team…before development even begins.

I’m trying to fill two technical tester positions.  It’s exhausting.  All the resumes are starting to look the same.  They all tout:

  • Extensive knowledge of the SDLC.  Who cares?  I’ve been testing for 15 years and I’ve never encountered a situation where extensive knowledge of the SDLC has come in handy…I’m not even sure what it is.  Who is motivating this?  Are there that many Test Managers out there saying, “what we really need is a tester who knows the SDLC”?
  • Understanding of test automation tools like QTP?  “Understanding of”?
  • Ability to map test cases to requirements in Quality Center.  It’s been 6 years since I’ve seen Quality Center but I don’t recall it being that difficult a task.
  • Performed different types of tests like Functional, Regression, Smoke, UAT, White Box, Black Box, Grey Box, and End-to-end.  Darn, I was really looking for someone who could write “integration tests”.  Oh well.

One resume said:

  • Extensive QA experience via hands-on testing?  Is there a way to gain experience without being hands-on?

Another said:

  • Fixed tested bugs and coordinated with developers in release of bug fixes during every Sprint Run.  Hmmm.  Perhaps you should start by testing your resume verbiage.

During an interview I asked:

Me: According to your resume, you worked on a team using Test Driven Development.  Was it effective?
Candidate:  Oh yes.  At the end of Sprints, if the developers had time, they would write some unit tests for Stories we were about to release to production.

During another, I asked the following easy question:

Me: What are some attributes of a good bug report?
Candidate: Documenting the bug is the most important attribute.

Finally, after an interview with me, for a programming position, the candidate remarked, “That’s odd, I’ve never seen a man in a QA role.”  It reminds me of a little post I made years ago that almost lost me some friends.


BTW - If you live in the Atlanta area, have excellent DB and SQL skills, and are capable of testing something without a UI, please drop me a note.  I may have an awesome job waiting for you.

After watching Elisabeth Hendrickson’s CAST 2012 Keynote (I think), I briefly fell in love with her version of the “checking vs. testing” terminology.  She says “checking vs. exploring” instead. 

I love the simplicity.  I imagine when used in public, most people can follow; “exploring” is a testing activity that can only be performed by humans, “checking” is a testing activity that is best performed by machines.  And the beauty of said terms is…they’re both testing!!!  Yes, automation engineers, all the cool stuff you build can still be called testing.

The thing I’ve always found awkward about the Michael Bolton/James Bach “checking vs. testing” terminology is accepting that tests or testing can NOT be automated.  Hendrickson’s version seems void of said awkwardness.  She just says “exploring” can NOT be automated…well sure, much easier to swallow.

The problem, I thought, was James and Michael’s testing definition was too narrow. Surely it could be expanded to include machine checks as testing.  Thus, I set out to find common “Testing” definitions that would support my theory.  And much to my surprise, I could not.  All the definitions (e.g., Merriam-Webster) I read, described testing as an open-ended investigation…in other words, something that can NOT be automated.

Finally, I have to admit, Hendrickson’s term, “exploring” can be ambiguous.  It might get confused with Exploratory Testing, which is a specific structured approach, as opposed to Ad Hoc testing, which is unstructured.  Hmmm…Elisabeth, if you’re out there, I’m happy to listen to your definitions, perhaps you will change my mind.

So it seems, just when I thought I could finally wiggle away from their painful terminology, I am now squarely back in the James and Michael camp when it comes to “checking vs. testing”.


Per Elisabeth Hendrickson, I’m one of the 80% of test managers looking for testers with programming skills.  And as I sift through tester resumes, attempting to fill two technical positions, I see a problem; testers with programming skills are few and far between!

About 90% of the resumes I’ve seen lately are for testers specialized in manual (sapient) testing of web-based products.  And since most of these resumes are sprinkled with statements like “knowledge of QTP”, I assume most of these testers are doing all their testing via the UI.

And then it hit me…

Maybe the reason so many testers are specialized in manual testing via the UI is because there are so many UI bugs!

This is no scientific analysis by any means.  Just a quick thought about the natural order of things.  But here’s my attempt to answer the question of why there aren’t more testers with programming skills out there.

It may be because they’re too busy finding bugs in the UI layer of their products.

  1. Spend time reporting problems that already exist in production, that users have not asked to fix.
  2. Demand all your bugs get fixed, despite the priorities of others.
  3. Keep your test results to yourself until you’re finished testing.
  4. Never consider using test tools.
  5. Attempt to conduct all testing yourself, without asking non-testers for help.
  6. Spend increasingly more time on regression tests each sprint.
  7. Don’t clean up your test environments.
  8. Keep testing the same way you’ve always tested.  Don’t improve your skills.
  9. If you need more time to test it, ask to have it pulled from the sprint; you can test it during the next sprint.
  10. Don’t start testing until your programmer tells you “okay, it’s ready for testing”.

If you made two lists for a given software feature (or user story):

  1. all the plausible user scenarios you could think of
  2. all the implausible user scenarios you could think of

…which list would be longer?

I’m going to say the latter.  The user launches the product, holds down all the keys on the keyboard for four months, removes all the fonts from their OS, then attempts to save a value at the exact same time as one million other users.  One can determine implausible user scenarios without obtaining domain knowledge.

Plausible scenarios should be easier to predict, by definition.  It may be that only one out of 100 users strays from the “happy path”, in which case our product may have just experienced an implausible scenario.

What does this have to do with testing?  As time becomes dearer, I continue to refine my test approach.  It seems to me, the best tests to start with are still confirmatory (some call these “happy path”) tests.  There are fewer of them, which makes it more natural to know when to start executing the tests for the scenarios less likely to occur.


The chart above is my attempt to illustrate the test approach model I have in my head.  The Y axis is how plausible the test is (e.g., it is 100% likely that users will do this, it is 50% likely that users will do this).  The X axis represents the test order (e.g., 1st test executed, 2nd test executed, etc.).  The number of tests executed is relative.

Basically, I start with the most plausible tests, then shift my focus to the stuff that will rarely happen.  These rare scenarios at the bottom of the chart above can continue forever as you move toward 0% plausibility, so I generally use the “Times Up” stopping heuristic.  One can better tackle testing challenges with this model if one makes an effort to determine how users normally use the product.
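The model in the chart amounts to little more than a sort plus a time budget.  A sketch, with invented plausibility numbers and durations:

```ruby
# Order tests by plausibility, run until the "Times Up" budget is spent.
tests = [
  { name: "save a record",        plausibility: 1.00, minutes: 10 },
  { name: "edit then save",       plausibility: 0.50, minutes: 10 },
  { name: "save during failover", plausibility: 0.05, minutes: 30 }
]

budget = 25  # minutes available before time's up
executed = []
tests.sort_by { |t| -t[:plausibility] }.each do |t|
  break if budget < t[:minutes]  # Times Up stopping heuristic
  budget -= t[:minutes]
  executed << t[:name]
end
```

The rare failover scenario falls off the end of the budget, which is exactly the trade the model makes: whatever goes untested is, by construction, the stuff users are least likely to hit.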

I often hear contradictory voices in my head saying, “don’t start with confirmatory tests, the bugs are off the beaten path”.  Okay, but are they really?  If our definition of a bug is “something that bugs someone who matters”, then the problems I find on the bottom of the above chart’s line, may matter less than those found on the top.  Someone who matters, may not venture to the bottom. 

For more on my thoughts (and contrary thoughts) on this position see We Test To Find Out If Software *Can* Work.

As soon as you hear about a production bug in your product, the first thing you may want to do is volunteer to log it.


  • Multiple people may attempt to log the bug, which wastes time.  Declare your offer to log it.
  • You’re a tester.  You can write a better bug report than others.
  • It shows a willingness to jump in and assist as early as possible.
  • It assigns the new bug an identifier, which aids conversation (e.g., “We think Bug1029 was created by the fix for Bug1028”).
  • Now the team has a place to gather and document information.
  • Now you are intimately involved in the bug report.  You should be able to grok the bug.

Shouldn’t I wait until I determine firm repro steps?

  • No. Bug reports can be useful without repro steps.  The benefits, above, do not depend on repro steps.
  • No.  If you need time to determine repro steps, just declare that in the bug report’s description (e.g., “repro steps not yet known, investigation under way”) and add them later.

But what if the programmer, who noticed the bug, understands it better than me?  Wouldn’t they be in a better position to log the bug?

  • Maybe.  But you’re going to have to understand it sooner or later.  How else can you test it?
  • Wouldn’t you rather have your programmer’s time be spent fixing the code instead of writing a bug report?

When I turned 13 years old, my Dad said, “What do you want to be when you grow up?”.  I already knew the answer.  “A software tester” I said!

Yeah, right. 

In fact, even in college I wasn’t sure what I wanted to be.  I had enrolled in a new major called “Communication System Management” and was studying to be the guy responsible for company telephone and computer networks.  However, my internship put me to sleep.  All analytics and no people got boring fast.  The job interviews during my senior year were just as boring, despite getting flown around the country on several occasions.

So when a buddy of mine found me a job teaching software, which I had done part-time at Ohio University’s computer lab, I packed my stereo and clothes into my ‘85 Jetta and headed south, from Ohio to Atlanta.  It was good money back then.  People were getting personal computers on their desks and they needed to learn how to use things like…email.  I went on to teach VBScript and AutoCAD and eventually taught proprietary telephone-office-update software for Lucent Technologies.

As the new versions of the Lucent software rolled out, I trained the users, which put me in a unique position.  I could see firsthand which features the users liked and which they hated.  I was among the first to observe the software’s performance under load and capture the concurrency issues that occurred.

This was in the late 90’s.  The programmers were doing the “testing” themselves.  But they realized I was getting good at providing feedback before they put their software in front of the users.  To better integrate me into the development team, the programmers asked me to write a piece of working software.  I wrote the team’s personal-time-off (vacation request) software in classic ASP and was officially accepted as part of the development team.  My main responsibility…was quality.

Thus, a software tester was born.  And I’ve been loving it ever since.

How did you become a tester?  What’s your story?

ATDD Sans Automated Tests.  Why Not?

Every time I hear about Acceptance Test Driven Development (ATDD), it’s always implied that the acceptance tests are automated.  What’s up with that?  After all, it’s not called Automated Acceptance Test Driven Development (AATDD).  It seems to me, that ATDD without automated tests, might be a better option for some teams.

This didn’t occur to me until I had the following conversation with the Agile consultant leading the ATDD discussion lunch table at STAReast 2013.  After a discussion about ATDD tools and several other dependencies for automating acceptance tests, our conversation went something like this:

Agile Consultant: ATDD has several advantages.  While writing the acceptance tests as a team, we better understand the Story.  Then, we’ll run the tests before the product code exists and we’ll expect the tests to fail.  Then we’ll write the product code to make the tests pass.  And one of the main advantages of ATDD is, once the automated acceptance tests pass, the team knows they are “done” with that Story.

Me: Sounds challenging.  Are you saying there must be an automated check written first, for every piece of product code we need?

Agile Consultant:  Pretty much.

Me: Doesn’t that restrict the complexity and creativity of our product?  I mean, what if we come up with something not feasible to test via a machine.  Besides, aren’t there normally some tests better executed by humans, even for simple products?

Agile Consultant:  Yes, of course.  I guess some manual tests could be required, along with automated tests, as part of your “done” definition.

Me: What if all our tests are better executed by a human because our product doesn’t lend itself to automation?  Can we still claim to do ATDD and enjoy its benefits?

Agile Consultant:  …um…I guess so…  (displaying a somewhat disappointed face, as if something does not compute, but maybe she was just thinking I was an annoying nutcase)

And that got me thinking: doesn’t this save us a lot of headaches, pain, and time, because we wouldn’t have to distill our requirements into rigid test scripts with specific data (AKA “automated checks”)?  We wouldn’t have to ask our programmers to write extra test-code hooks for us.  We wouldn’t have to maintain a bunch of machine tests that don’t adapt as quickly as our human brains do.


Let’s call this Human Acceptance Test Driven Development (HATDD).  I’ve stated some of the advantages above.  The only significant disadvantage that stands out is that you don’t get a bunch of automated regression checks.  But it seems to me, ATDD is more about new Feature testing than it is about regression testing anyway.

So why aren’t there more (or any) Agile consultants running around offering HATDD?

Well…yes.  I would.

The most prolific bug finder on my team is struggling with this question.  The less the team decides to fix her bugs, the less interested she grows in reporting them.  Can you relate?

There is little satisfaction reporting bugs that nobody wants to hear about or fix.  In fact, it can be quite frustrating.  Nevertheless, when our stakeholders choose not to fix certain classes of bugs, they are sending us a message about what is important to them right now.  And as my friend and mentor, Michael Bolton likes to say:

If they decide not to fix my bug, it means one of two things:

  • Either I’m not explaining the bug well enough for them to understand its impact,
  • or it’s not important enough for them to fix.

So as long as you’re practicing good bug advocacy, it must be the second bullet above.  And IMO, the customer is always right.

Nevertheless, we are testers.  It is our job to report bugs despite adversity.  If we report 10 for every 1 that gets fixed, so be it.  We should not take this personally.  However, we may want to:

  • Adjust our testing as we learn more about what our stakeholders really care about.
  • Determine a non-traditional method of informing our team/stakeholders of our bugs.
    • Individual bug reports are expensive because they slowly suck everyone’s time as they flow through or sit in the bug repository.  We wouldn’t want to knowingly start filling our bug report repository with bugs that won’t be fixed.
    • One approach would be a verbal debrief with the team/stakeholders after testing sessions.  Your testing notes should have enough information to explain the bugs.
    • Another approach could be a “super bug report”: one bug report that lists several bugs.  Any deemed important can get fixed or spun off into separate bug reports if you like.

It’s a cliché, I know.  But it really gave me pause when I heard Jeff “Cheezy” Morgan say it during his excellent STAReast track session, “Android Mobile Testing: Right Before Your Eyes”.  He said something like, “instead of looking for bugs, why not focus on preventing them?”

Cheezy demonstrated Acceptance Test Driven Development (ATDD) by giving a live demo, writing Ruby tests via Cucumber, for product code that didn’t exist.  The tests failed until David Shah, Cheezy’s programmer, wrote the product code to make them pass. 

(Actually, the tests never passed, which they later blamed on incompatible Ruby versions…ouch.  But I’ll give these two guys the benefit of the doubt.)
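Cheezy’s red-then-green rhythm can be shown in miniature: the acceptance check exists (and fails) before the product code does, then the product code is written to make it pass.  The `total_price` example below is mine, not from his demo:

```ruby
# ATDD rhythm in miniature: the check is written first and fails,
# then the product code is written to make it pass.
def acceptance_check
  total_price(2, 5.0) == 10.0
rescue NameError
  false  # the product code doesn't exist yet
end

red = acceptance_check     # fails: total_price is not defined

def total_price(quantity, unit_price)  # product code, written second
  quantity * unit_price
end

green = acceptance_check   # now it passes
```

In a real Cucumber setup the check would live in a step definition and the product code in the app, but the sequence (failing check, then code) is the whole point.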

Now back to my blog post title.  I find this mindshift appealing for several reasons, some of which Cheezy pointed out and some of which he did not:

  • Per Cheezy’s rough estimate, 8 out of 10 bugs involve the UI.  There is tremendous benefit in the programmer knowing about these UI bugs while the programmer is initially writing the UI.  Thus, why not have our testers begin performing exploratory testing before the Story is code complete?
  • Programmers are often incentivized to get something code complete so the testers can have it (and so the programmers can work on the next thing).  What if we could convince programmers it’s not code complete until it’s tested?
  • Maybe the best time to review a Story is when the team is actually about to start working on it; not at the beginning of a Sprint.  And what do we mean when we say the team is actually about to start working on it?
    • First we (Tester, Programmer, Business Analyst) write a bunch of acceptance tests.
    • Then, we start writing code as we start executing those tests.
    • Yes, this is ATDD, but I don’t think automation is as important as the consultants say.  More on that in a future post.
  • Logging bugs is soooooo time consuming and can lead to dysfunction.  The bug reports have to be managed and routed appropriately.  People can’t help but count them and use them as measurements for something…success or failure.  If we are doing bug prevention, we never need to create bug reports.

Okay, I’m starting to bore myself, so I’ll stop.  Next time I want to explore Manual ATDD.

  • Measuring your Automation might be easy.  Using those measurements is not.  Examples:
    • # of times a test ran
    • how long tests take to run
    • how much human effort was involved to execute and analyze results
    • how much human effort was involved to automate the test
    • number of automated tests
  • EMTE (Equivalent Manual Test Effort) – What effort it would have taken humans to manually execute the same test being executed by a machine.  Example: If it would take a human 2 hours, the EMTE is 2 hours.
    • How can this measure be useful? It is an easy way to show management the benefits of automation (in a way managers can easily understand).
    • How can this measure be abused?  If we inflate EMTE by re-running automated tests just for the sake of increasing EMTE, we are misleading.  Sure, we can run our automated tests every day, but unless the build is changing every day, we are not adding much value.
    • How else can this measure be abused?  If you hide the fact that humans are capable of noticing and capturing much more than machines.
    • How else can this measure be abused?  If your automated tests cannot be executed by humans and if your human tests cannot be executed by a machine.
  • ROI (Return On Investment) – Dorothy asked the students what ROI they had achieved with the automation they created.  All 6 students who answered got it wrong; they explained various benefits of their automation, but none were expressed as ROI.  ROI should be a number, hopefully a positive number.
    • ROI = (benefit − cost) / cost.  For example, if automation cost $10,000 of effort and saved $15,000 of tester time, ROI = (15,000 − 10,000) / 10,000 = 50%.
    • The trick is to convert tester time and effort to money.
    • ROI does not measure things like “faster execution”, “quicker time to market”, “test coverage”
    • How can this measure be useful?  Managers may think there is no benefit to automation until you tell them there is.  ROI may be the only measure they want to hear.
    • How is this measure not useful?  ROI may not be important.  It may not measure your success.  “Automation is an enabler for success, not a cost reduction tool” – Yoram Mizrachi.  Your company probably hires lawyers without calculating their ROI.
  • She did the usual tour of poor-to-better automation approaches (e.g., capture/playback to an advanced keyword-driven framework).  I’m bored by this so I have a gap in my notes.
  • Testware architecture – consider separating your automation code from your tool, so you are not tied to the tool.
  • Use pre and post processing to automate test setup, not just the tests.  Everything should be automated except selecting which tests to run and analyzing the results.
  • If you expect a test to fail, use the execution status “Expected Fail”, not “Fail”.
  • Comparisons (i.e., asserts, verifications) can be “specific” or “sensitive”.
    • Specific Comparison – an automated test only checks one thing.
    • Sensitive Comparison – an automated test checks several things.
    • I wrote “awesome” in my notes next to this: If your sensitive comparisons overlap, 4 tests might fail instead of 3 passing and 1 failing.  IMO, this is one of the most interesting decisions an automator must make.  I think it really separates the amateurs from the experts.  Nicely explained, Dorothy!
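Dorothy’s overlap point can be sketched in a few lines.  This is a hypothetical illustration (the field names and values are invented): with specific comparisons, one wrong field produces 3 passing checks and 1 failing check; with overlapping sensitive comparisons, the same single bug fails all 4 checks.

```python
# Hypothetical example: one wrong field ("total") in an otherwise correct result.
# All field names and values are invented for illustration.
actual   = {"name": "Alice", "items": 3, "currency": "USD", "total": 99}
expected = {"name": "Alice", "items": 3, "currency": "USD", "total": 30}

# Specific comparisons: each of the 4 checks asserts exactly one field,
# so the result is 3 passing and 1 failing.
specific = {f: actual[f] == expected[f] for f in expected}

# Sensitive, overlapping comparisons: each check asserts its own field
# PLUS the shared "total" field, so all 4 checks fail for one bug.
overlapping = {
    f: actual[f] == expected[f] and actual["total"] == expected["total"]
    for f in expected
}

print(sum(not ok for ok in specific.values()))     # 1 failure
print(sum(not ok for ok in overlapping.values()))  # 4 failures
```

Four red results for one bug make the failure look bigger than it is, and the automator has to untangle which checks failed for their own reasons.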

If you want to have test automation
And don't care about trials and tribulation
Just believe all the hype
Get a tool of each type
But be warned, you'll have serious frustration!

(a limerick by Dorothy Graham)

I attended Dorothy Graham’s STARCanada tutorial, “Managing Successful Test Automation”.  Here are some highlights from my notes:

  • “Test execution automation” was the tutorial’s concern.  I like this clarification; it sets the topic apart from “exploratory test automation” or “computer-assisted exploratory testing”.
  • Only 19% of people using automation tools (in Australia) are getting “good benefits”…yikes.
  • Testing and Automating should be two different tasks, performed by different people.
    • A common problem with testers who try to be automators:  Should I automate or just manually test?  Deadline pressures make people push automation into the future.
    • Automators – People with programming skills responsible for automating tests.  The automated tests should be able to be executed by non-technical people.
    • Testers – People responsible for writing tests, deciding which tests to automate, and executing automated tests.  “Some testers would rather break things than make things”.
    • Dorothy mentioned “checking” but did not use the term herself during the tutorial.
    • Automation should be like a butler for the testers.  It should take care of the tedious and monotonous, so the testers can do what they do best.
  • A “pilot” is a great way to get started with automation.
    • Calling something a “pilot” forces reflection.
    • Set easily achievable automation goals and reflect after 3 months.  If goals were not met, try again with easier goals.
  • Bad Test Automation Objectives – And Why:
    • Reduce the number of bugs found by users – Exploratory testing is much more effective at finding bugs.
    • Run tests faster – Automation will probably run tests slower if you include the time it takes to write, maintain, and interpret the results.  The only testing activity automation might speed up is “test execution”.
    • Improve our testing – The testing needs to be improved before automation even begins.  If not, you will have poor automation.  If you want to improve your testing, try just looking at your testing.
    • Reduce the cost and time for test design – Automation will increase it.
    • Run regression tests overnight and on weekends – If your automated tests suck, this goal will do you no good.  You will learn very little about your product overnight and on weekends.
    • Automate all tests – Why not just automate the ones you want to automate?
    • Find bugs quicker – It’s not the automation that finds the bugs, it’s the tests.  Tests do not have to be automated, they can also be run manually.
  • The thing I really like about Dorothy’s examples above is that she helps us separate the testing activity from the automation activity.  It helps us avoid common mistakes, such as forgetting to focus on the tests first.
  • Good Test Automation Objectives:
    • Free testers from repetitive test execution to spend more time on test design and exploratory testing – Yes!  Say no more!
    • Provide better repeatability of regression tests – Machines are good checkers.  These checks may tell you if something unexpected has changed.
    • Provide test coverage for tests not feasible for humans to execute – Without automation, we couldn’t get this information.
    • Build an automation framework that is easy to maintain and easy to add new tests to.
    • Run the most useful tests, using under-used computer resources, when possible – This is a better objective than running tests on weekends.
    • Automate the most useful and valuable tests, as identified by the testers – much better than “automate all tests”.

Last week, at STARCanada, I met several enthusiastic testers who might make great testing conference speakers.  We need you.  Life is too short for crappy conference talks.

I’m no pro by any means.  But I have been a track speaker at STARWest,  STARCanada, STPCon, and will be speaking at STAREast in 2 weeks. 

Ready to give it a go?  Here is my advice on procuring your first speaking slot:

  1. Get some public speaking experience.  They are probably not going to pick you without speaking experience.  If you need experience, try speaking to a group of testers at your own company or at an IT group that meets in your city, volunteer for an emerging topic talk, or sign up for a lightning talk at a conference that offers those, like CAST.
  2. Come up with a killer topic.  See what speakers are currently talking about and talk about something fresh.  Make sure your topic can appeal to a wider audience.  Experience reports seem appealing.
  3. Referrals – meet some speakers or industry leaders with some clout and ask them to review your talk.  If they like it, maybe they would consider putting in a good word for you.
  4. Pick one or more conferences and search for their speaker submission deadlines and forms (e.g., Speaking At SQE Conferences).  If you’ve attended conferences, you are probably already on their mailing list and may be receiving said requests.  I’m guessing the 2014 SQE conference speaker submission will open in a few months.
  5. Submit the speaker submission form.  Make sure you have an interesting sounding title.  You’ll be asked for a summary of your talk including take-aways and maybe how you intend to give it.  This is a good place to offer something creative about the way you will deliver your topic (e.g., you made a short video, you will do a hands-on group exercise).
  6. Wait.  Eventually you’ll receive a call or email.  Sound competent.  Know your topic and be prepared to answer tough questions about it.
  7. If you get rejected.  Politely ask what you could do differently to have a better chance of getting picked in the future.

It is not easy to get picked.  I was rejected several times and eventually got a nice referral from Lynn McKee, an experienced speaker with a great reputation; that helped.  One of my friends and colleagues, who is far more capable than I am, IMO, has yet to get picked up as a speaker.  So I don’t know what secret sauce they are looking for.

Good luck!


BTW - Speaking at conferences has both advantages and disadvantages to consider.

Advantages:

  • The opportunity to build your reputation as an expert of sorts in the testing community.
  • It helps you refine your ideas and possibly spread knowledge.
  • Free registration fees.  This makes it more likely your company will pay your hotel/travel costs and let you attend.

Disadvantages:

  • Public speaking is scary as hell for most of us.  The weeks leading up to a conference can be stressful.
  • Putting together good talks and practicing takes lots of time.  I took days off work to prepare.

Don’t you just hate it when your Business Analysts (or others) beat you to it and point out bugs before you have a chance to?

It feels so unfair!  They can send an email that says, “the columns aren’t in the right order, please fix it” and the programmers snap to attention like good little soldiers.  Whereas, you saw the same problem but you are investigating further and confirming your findings with multiple oracles.

Well, this is not a bug race.  There is no “my bug”.  If someone else on your team is reporting problems, this helps you.  And it certainly helps the team.  You may want to observe the types of things these non-testers report and adjust your testing to target other areas.

But try to convert your frustration to admiration.  Tell them “nice catch” and “thanks for the help”.  Encourage more of the same.

I’m not asking if they *can* run unattended.  I’m asking if they do run unattended…consistently…without ever failing to start, hanging, or requiring any human intervention whatsoever…EVER.

Automators, be careful.  If you tell too many stories about unattended check-suite runs, the non-automators just might start believing you.  And guess what will happen if they start running your checks?  You know that sound when Pac-Man dies?  That’s what they’ll think of your automated checks.

I remember hearing a QA Director attempt to encourage “test automation” by telling fantastical stories of his tester past:

“We used to kick off our automated tests at 2PM and then go home for the day.  The next day, we would just look at the execution results and be done.”

Years later, I’ve learned to be cynical about said stories.  In fact, I have yet to see an automated test suite (including my own) that consistently runs without ever requiring the slightest intervention from humans, who unknowingly may:

  • Prep the test environment “just right” before clicking “Run”.
  • Restart the suite when it hangs and hope the anomaly goes away.
  • Re-run the failed checks because they normally pass on the next attempt.
  • Realize the suite works better when kicked off in smaller chunks.
  • Recognize that sweet spot, between server maintenance windows, where the checks have a history of happily running without hardware interruptions.

IMO, it’s not a problem if the automator has to periodically do one or more of the above.  It’s only a problem if we, as automators, spread untruths about the real effort behind our automated checks.

If it’s possible to determine which Feature/Story/Requirement introduced a new bug, it’s probably valuable to use a bug report’s “Link” attribute to link the bug report to said parent Feature/Story/Requirement.  Here are some reasons this is valuable:

  • It tells the programmers roughly where the bug was created.
  • It may help to decouple “Escapes” from “New Feature Bugs”.  Escapes are usually more difficult to trace back to specific Features.  Perhaps your team does not count linked bugs as part of WIP but unlinked Escapes are counted as part of WIP.
  • It tells the team the bug is a dependency to its linked requirement’s deployment (e.g., the bug will follow the Feature into other environments if it is not fixed).
  • If you can link to the Feature from the bug report, the Feature may provide more context for the bug report.

Another way I like to use the bug report “Link” attribute is to associate bugs with each other.  When BugA gets fixed and introduces BugB, linking the two together allows us to use briefer language in the bug report, like “this bug was created by the linked bug’s fix”.  Generally, the link itself makes it easier to view the linked bug report than merely referencing the Bug Report ID.

Two other bug report attributes I find useless are “Version” and “Iteration”.  We no longer bother to populate these.

I used to think these were important attributes because the team could use them to answer questions like:

  • How many bugs did we find in Iteration 16?
  • We think Bug1001 is fixed in version 1.2.7.  What version were you testing when you found Bug1001? Oh, you were testing version 1.2.6, that explains it.

Nowadays, I realize counting bugs found in test is not a helpful measure; especially since we’ve focused more testing in the Dev environment and often fix bugs without logging bug reports.  In addition, many of my project teams have switched to Kanban, so “Iteration” is a seldom used term.

Regarding the second bullet above, I came to realize that most bug report templates have “Created Date”, an auto-populated attribute.  I also learned every version of the software under test has an auto-populated build deployment history.  If we cross-reference a bug report’s created date with our build deployment history, we can always identify the version or iteration of the code the bug was found in.  I would rather fall back on existing information (in the rare cases we need it) than capture extra information every time (that normally gets ignored).
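That cross-reference is a simple lookup.  Here is a minimal sketch, assuming a deployment history of (date, version) pairs per environment (the dates and version numbers below are invented):

```python
from datetime import date

# Hypothetical deployment history for one environment:
# (deploy date, version), sorted oldest to newest.  All values invented.
qa_deployments = [
    (date(2013, 3, 1), "1.2.5"),
    (date(2013, 4, 10), "1.2.6"),
    (date(2013, 5, 2), "1.2.7"),
]

def version_on(created, deployments):
    """Return the version that was deployed on or before the bug's created date."""
    live = [version for deployed, version in deployments if deployed <= created]
    return live[-1] if live else "unknown"

# A bug report created on 2013-04-20 must have been found against build 1.2.6.
print(version_on(date(2013, 4, 20), qa_deployments))  # 1.2.6
```

In other words, the “Version” attribute is derivable on demand from data the tools already capture, which is the argument for not asking testers to populate it by hand.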

In practice, questions like the second bullet above never get asked.  As long as one populates the Environment bug report attribute, confusion rarely occurs.

Warning: this post has almost nothing to do with testing and it barely has anything to do with software development.  Managers should read it however.

Last night, at the Atlanta Scrum Users Group, I saw Peter Saddington’s talk, “The New Role of Management for High-Performance Teams”.  Peter has three master’s degrees and claims to be Atlanta’s only Certified Scrum Trainer.

Here are some highlights from my notes:

  • Managers should see themselves as “managers of inspiration”. Don’t manage issues.  Instead, manage inspiration.  Help people love what they do first, then you don’t need to manage them.
  • Everyone can improve their job performance by taking time to reflect.  Few bother to, because they think they are too busy.
  • Stop creating processes.  Instead, change the rules as you go.  The problem with process is that some people will thrive under it and others will die.  There are no “best practices”; (Context-driven testers have been saying this for years).
  • The most important question you can ask your directs is “Are you having fun?”.  Happier employees are more productive.
    • Play and fun at work have been declining for 30 years (in the US).
    • Burn-out rate has been increasing for 30 years (in the US).
  • Myth – Agile teams should be self-organizing.  Fact: marriages are about the only true self-organizing teams that exist, and only about 50% are successful (in the US).  Instead of hoping your teams self-organize their way to success, get to know your people and put them on teams that make sense for them.  Try re-interviewing everyone.
  • If you learn 3 things about a co-worker’s personal life, trust increases by 60%.  “How did Becky do at her soccer game yesterday?”
  • Motivate your teams with these three things:
    • Autonomy – People should not have to give it up when they go to work.
    • Mastery – Ability to grow one’s craft.  Help people make this happen.  Put people in places where they can improve their work.
    • Purpose – People do their best work when they know why they are doing it.
  • Any manager who asks their directs to work on multiple projects at once should be fired.  Study after study shows that multi-tasking and switching contexts burns people out and causes them to work poorly.

Peter did a fun group exercise to drive home that last point.  He had some of us stand in a circle and take turns saying the alphabet or counting by multiples of 3 or 5.  He began forcing us to switch patterns on the fly, as we worked.  Afterwards, we all hated him and his stupid exercise.  …He was representing a manager.

My bug report template includes a Severity attribute but my teams don’t bother populating it.  The severity choices on my template are:


We leave it on the default.

Try as you may to objectively define Severity, it is still arguably subjective.  “Loss of Functionality w/Work Around”…well, if we are creative enough, we can always come up with a workaround; let’s use the legacy process.  “Data Corruption”…well, if we run a DB script to fix the corruption, is this bug still severe?

From my experiences, it has been better for humans to read the bug report description, understand the bug, then make any decisions that would have otherwise been made based on a tester’s severity assessment. 

As an example, if the bug report description does not indicate that the system crashes, and it does, it is likely a poorly written bug description.  One shouldn’t need a Severity to pigeonhole it into.

My advice?  Save the tester some time.  Don’t ask them to populate Severity.  Benefit from the discussion it may force later.

IMO, Priority should not be populated by testers.  My teams use a customized version of Microsoft Team Foundation Server’s bug work item template.  For whatever reason, Priority is a required attribute upon logging bugs.  It defaults to “Medium” and I never change it.

From my experiences, testers often overstate bug priority, wanting to believe the bugs they found are more important to fix than other work that could be done.  Some testers see themselves as the saviors of the user-in-distress.  I see myself as the information gatherer for my development team and stakeholders.  I don’t understand the business needs as well as my stakeholders, thus I remove myself from making claims about bug priority.

  • Priority is a stakeholder question and it’s always relative to what else is available to work on.  A High priority bug may be less important than a new Feature.
  • From my experiences, Priority does not lead decisions.  It follows.
    • Tester: “Per our process, we will only patch production for High priority bugs”.
    • Stakeholder: “Well, obviously we need to patch production today”. 
    • Tester: “But said bug is only a Medium priority”.
    • Stakeholder: “Then change it to a High”.
  • IMO, Priority is all but useless.  The more High priority active bugs one has, the more diminished its label becomes.  A better label is “Order”, as in let’s rank everything we can work on, from most important to least important, where each item has a unique ranking order.

Reader Srinivas Kadiyala asked if I could share my bug report template.  A high-level view of a bug report template might be boring, so instead let’s examine bug report attributes individually.  Here’s Part 1.


Environment can be more or less useful depending on how one populates it.  Some testers equate Environment to the environment the bug was found in.  IMO, Environment is more useful when it equates to the lowest active version of our software the bug exists in.

If a skilled tester finds a bug in the QA environment, they will probably next determine if it exists in the production environment.  If it does, IMO the bug report should have Environment = Production (even though the bug was originally found in QA).

Now we have helped answer/trigger important questions for the team:

  • Yikes!  Should we patch production ASAP?
  • Are we going to break something in production if we deploy the QA build without fixing this bug? 
  • How many bugs have escaped into production? We can query on this attribute to provide one of our most important measurements.

In response to my Don’t Forget To Look Before You Log post, bugs4tester asked how to prevent duplicate bug reports in large project teams that have accumulated >300 bug reports.

That's a pretty good question.  Here are some things that come to mind:

  • Consider not logging some bugs.  One of my project teams does all new feature testing in our development environment.  The bugs get fixed as they are found, so bug reports are not necessary.  Exceptions include:
    • Bugs we decide not to fix.
    • Bugs found in our QA environment.  We are SOX compliant and the auditors like seeing bug reports to prove the code change is necessary.
    • “Escapes” – Bugs found in production.
  • Readers Ken and Camal Cakar suggested it may be better to err on the side of logging duplicate bug reports, than taking the time to go dupe hunting or worse, mistakenly assuming the bug is already logged.  I agree.  Maybe we can use a 120-Second-Dupe-BugReport-Search heuristic; “If I can’t determine whether or not this bug is logged within 120 seconds, I will log it.”
  • Yes, it takes time to "look before you log", but you may gain that time back.  If, every so often, you find that a bug report already exists, you are saving the time it would have taken you to log the bug.  You are also saving the time it would have taken other team members encountering the bug report to sort through their confusion (e.g., “Hey, didn’t we already fix this bug?”).  IMO, dupes cause people to stare at each one, trying to determine the differences, long after the context has faded.
  • "Look before you log" time can be reduced with bug report repository organization.  Examples include:
    • Can you assign bug reports to modules or other categories?
    • Can the team agree on a standard naming scheme? For example: always list the name of the screen or report in the bug report title.
    • Does your bug repository provide a keyword search that can only search bug report Title or Description?   If not, can you access the bug repository DB to write your own?
    • Can you use keyboard shortcuts or assign hotkeys to dupe bug report searches? 
  • Sometimes you don’t have to “look before you log”.  When testing new functionality, I think most testers know when they have discovered a bug that could not have existed with prior code.  On the other hand, some testers can recognize recurring bugs that have been around for years; in these cases the tester may already know it is logged.
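For the “write your own keyword search” idea above, a minimal sketch might look like the following.  The table and column names are invented for illustration; a real bug tracker’s schema will differ:

```python
import sqlite3

# Invented schema: a bug tracker table with title and description columns.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bugs (id INTEGER, title TEXT, description TEXT)")
conn.executemany(
    "INSERT INTO bugs VALUES (?, ?, ?)",
    [
        (10223, "Orders report: columns out of order", "Found in production."),
        (10400, "Login page crashes on empty password", "QA environment."),
    ],
)

def find_dupes(keyword):
    """Keyword search restricted to the bug report Title and Description only."""
    rows = conn.execute(
        "SELECT id, title FROM bugs "
        "WHERE title LIKE ? OR description LIKE ?",
        (f"%{keyword}%", f"%{keyword}%"),
    )
    return rows.fetchall()

# A quick dupe check before logging a new "columns" bug:
print(find_dupes("columns"))
```

A search this narrow (Title/Description only, not comments or history) is what keeps the 120-Second-Dupe-BugReport-Search heuristic feasible.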

Thanks for the fun question.  I hope one of my suggestions helps.

Shortly after logging a bug this morning, my business analyst kindly asked, “Is this the same as Bug10223, that I logged in October of 2011?”.

…It was!

Ordinarily, prior to logging production bugs that have been around for a while, I run a bug report query to search for open bug reports by keywords.  It takes about 10 seconds to determine if the bug I’m about to log has already been logged.  This morning I got lazy (and cocky) and just logged it.

It probably took 20 minutes of my time and my business analyst’s time to create a bug report that already existed, communicate about the confusion, and reject my duplicate bug report.

Look before you Log!

This sucks.  I’ve been testing all day and I haven't found a single problem. 

No, wait…

This is good, right? Clean software is the goal.  Alright, cool, we rock!  Looks like we’re deploying to prod tomorrow morning…just one more test…Dammit! I just found a problem!  I hate finding problems at the final hour.  This sucks.

No, wait…

This is good, right?  Better to have caught it in QA today than in prod tomorrow.  That’s what they pay me for.  Hey, here’s another bug!  And another!  I rock.  I just found the mother lode of bugs.  This is awesome!!!

No, wait…

This is bad, right?  We’re either going to have to work late or delay tomorrow’s prod release.  I totally should have caught these problems earlier, it would have been so much cheaper.  I suck. 

What’s that?  The product owners are rejecting my bugs?  Really?  How humiliating.  I hate when my bugs get rejected!

No, wait…

This is good, right? It’s great that my bugs got rejected.  Less churn.  Now I don’t have to retest everything.

No, wait…I want to retest everything.

No, wait…maybe I don’t.

