Invigorated by the comments in my last post, I’ll revisit the topic.

I don’t think we can increase our tester reputations by sticking to the credo:

“Raise every bug, no matter how trivial”

Notice, I’m using the language “raise” instead of “log”.  This is an effort to include teams that have matured to the point of replacing bug reports with conversations.  I used the term “share” in my previous post but I like “raise” better.  I think Michael Bolton uses it.

Here are a few problems with said credo:

  1. Identifying bugs is so complex that one cannot commit to raising them all.  As we test, our brains are making countless evaluations: “That screen seems slow today, that control might be better a hair to the right, why isn’t there a flag in the DB to persist that data?”  We are constantly deciding which observations are worth spending time on.  The counterargument to my previous post seems to be: just raise everything and let the stakeholders decide.  I argue that everything is too much.  Instead, the more experience and skill a tester gains, the better she will know what to raise.  And yes, she should be raising a lot, documenting bugs/issues as quickly as she can.  I still think, with skill, she can skip the trivial ones.
  2. Raising trivial bugs hurts your reputation as a tester.  I facilitate bug triage meetings with product owners.  Trivial bugs are often mocked before being rejected: “Ha! Does this need to be fixed because it’s bugging the tester or the user?  Reject it!  Why would anyone log that?”  Important bugs get the opposite reaction.  Sorry.  That’s the way it is.
  3. Time is finite.  If I’m testing something where bugs are rare, I’ll be more inclined to raise trivial bugs.  If I’m testing something where bugs are common, I’ll be more inclined to spend my time on (what I think) are the most important bugs.

“It’s not the tester’s job to decide what is important.”  Yes, in general I agree.  But I’m not dogmatic about this.  Maybe if I share some examples of trivial bugs (IMO), it will help:

  • Your product has an administrative screen that can only be used by a handful of tech support people.  They use it once a year.  As a tester, you notice the admin screen does not scroll with your scroll wheel.  Instead, one must use the scroll bar.  Trivial bug.
  • Your product includes a screen with two radio buttons.  You notice that if you toggle between the radio buttons 10 times and then try to close the screen less than a second later, a system error gets logged behind the scenes. Trivial bug.
  • Your product includes 100 different reports users can generate.  These have been in production for 5 years without user complaints.  You notice some of these reports include a horizontal line above the footer while others do not.  Trivial bug.
  • The stakeholders have given your development team 1 million dollars to build a new module.  They have expressed their expectations that all energy be spent on the new module and they do not want you working on any bugs in the legacy module unless they report the bug themselves and specifically request its fix.  You find a bug in the legacy module and can’t help but raise it…

You laugh, but the drive to raise bugs is stronger than you may think.  I would like to think there is more to our jobs than “Raise every bug, no matter how trivial”.

(Edit on 10/1/2014) Although too long, a better title would have been “You May Not Want To Tell Anyone About That Trivial Bug”.  Thanks, dear readers, for your comments.

It’s a bug, no doubt.  Yes, you are a super tester for finding it.  Pat yourself on the back.

Now come down off that pedestal and think about this.  By any stretch of the imagination, could that bug ever threaten the value of the product-under-test?  Could it threaten the value of your testing?  No?  Then swallow your pride and keep it to yourself. 

My thinking used to be: “I’ll just log it as low priority so we at least know it exists”.  As a manager, when testers came to me with trivial bugs, I used to give the easy answer, “Sure, they probably won’t fix it but log it anyway”.

Now I see things differently.  If a trivial bug gets logged, often…

  • a programmer sees the bug report and fixes it
  • a programmer sees the bug report and wonders why the tester is not testing more important things
  • a team member stumbles upon the bug report and has to spend 4 minutes reading it and understanding it before assigning some other attribute to it (like “deferred” or “rejected”)
  • a team member argues that it’s not worth fixing
  • a tester has spent 15 minutes documenting a trivial bug.

It seems to me, reporting trivial bugs tends to waste everybody’s time.  Time that may be better spent adding value to your product.  If you don’t buy that argument, how about this one:  Tester credibility is built on finding good bugs, not trivial ones.

About five years ago, my tester friend, Alex Kell, blew my mind by cockily declaring, “Why would you ever log a bug?  Just send the Story back.”

Okay.

My dev team uses a Kanban board that includes “In Testing” and “In Development” columns.  Sometimes bug reports are created against Stories.  But other times Stories are just sent left; for example, a Story “In Testing” may have its status changed to “In Development”, like Alex Kell’s maneuver above.  This is normally done using the Dead Horse When-To-Stop-A-Test Heuristic.  We could also send an “In Development” Story left if we decide the business rules need to be firmed up before coding can continue.

So how does one know when to log a bug report vs. send it left?

I proposed the following heuristic to my team today:

If the Acceptance Test Criteria (listed on the Story card) are violated, send it left.  It seems to me, logging a bug report for something already stated in the Story (e.g., Feature, Work Item, Spec) is mostly a waste of time.

Thoughts?

While reading Duncan Nisbet’s TDD For Testers article, I stumbled on a neat term he used, “follow-on journey”.

For me, the follow-on journey is a test idea trigger for something I otherwise would have just called regression testing.  I guess “follow-on journey” falls under the umbrella of regression testing, but it’s more specific and helps me quickly consider the next best tests I might execute.

Here is a generic example:

Your e-commerce product-under-test has a new interface that allows users to enter sales items into inventory by scanning their barcodes.  Detailed specs give us lots of business logic that must execute to populate each sales item when its barcode is scanned.  After testing the new sales item input process, we should consider testing the follow-on journey: what happens if we order sales items ingested via the new barcode scanner?
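To make the idea concrete, here is a minimal sketch in Python (pytest-style) against an imaginary in-memory store.  Every name below is invented for illustration; it is not taken from a real product or framework.

    # Hypothetical in-memory stand-in for the product, just to make the sketch runnable.
    class FakeStore:
        def __init__(self):
            self.inventory = {}

        def scan_barcode(self, barcode):
            # New behavior under test: scanning a barcode creates an inventory item.
            self.inventory[barcode] = {"sku": barcode, "orderable": True}
            return barcode

        def place_order(self, sku, quantity):
            # Existing behavior the newly scanned item flows into.
            item = self.inventory[sku]
            return "accepted" if item["orderable"] and quantity > 0 else "rejected"

    def test_new_barcode_intake():
        # The new feature itself: the scanned item lands in inventory.
        store = FakeStore()
        sku = store.scan_barcode("0012345678905")
        assert sku in store.inventory

    def test_follow_on_journey_order_scanned_item():
        # Follow-on journey: can the rest of the product use what the scanner created?
        store = FakeStore()
        sku = store.scan_barcode("0012345678905")
        assert store.place_order(sku, quantity=1) == "accepted"

The second check is the follow-on journey: it doesn’t stop at the new intake path, it rides the affected object into the older ordering path.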

I used said term while discussing test planning with another tester earlier today.  The mental image of an affected object’s potential journeys helped us leap to some cool tests.

The efficiency I’m about to describe didn’t occur to me until recently.  I was doing an exploratory test session and documenting my tests via Rapid Reporter.  My normal process had always been to document the test I was about to execute…

TEST: Edit element with unlinked parent

…execute the test.  Then write “PASS” or “FAIL” after it like this…

TEST: Edit element with unlinked parent – PASS

But it occurred to me that if a test appears to fail, I tag said failure as a “Bug”, “Issue”, “Question”, or “Next Time”.  As long as I do that consistently, there is no need to add “PASS” or “FAIL” to the documented tests.  While debriefing about my tests post session, the assumption will be that the test passed unless indicated otherwise.
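For example, a session-note fragment might look like this (the details are invented):

TEST: Edit element with unlinked parent
TEST: Delete element with unlinked parent
BUG: Delete leaves an orphaned parent node behind

During the debrief, the first test is assumed to have passed; only the tagged line needs discussion.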

Even though it felt like going to work without pants, after a few more sessions it turned out that not resolving each test to “PASS” or “FAIL” reduced administrative time and caused no ambiguity during test reviews.  Cool!

Wait. It gets better.

On further analysis, resolving all my tests to “PASS” or “FAIL” may have been keeping me from actual testing.  It was influencing me to frame everything as a check.  Real testing does not have to result in “PASS” or “FAIL”.  If I didn’t know what was supposed to happen after editing an element with an unlinked parent (as in the above example), well then it didn’t really “PASS” or “FAIL”, right?  However, I may have learned something important nevertheless, which made the test worth doing…I’m rambling.

The bottom line is, maybe you don’t need to indicate “PASS” or “FAIL”.  Try it.

Which question is more important for testers to ask: “Is there a problem here?” or “Will it meet our needs?”

Let’s say you are an is-there-a-problem-here tester: 

  • This calculator app works flawlessly as far as I can tell.  We’ve tested everything we can think of that might not work and everything we can think of that might work.  There appear to be no bugs.  Is there a problem here?  No.
  • This mileage tracker app crashes under a load of 1000 users.  Is there a problem here?  Yes.

But might the is-there-a-problem-here question get us into trouble sometimes?

  • This calculator app works flawlessly…but we actually needed a contact list app.
  • This mileage tracker app crashes under a load of 1000 users but only 1 user will use it.

Or perhaps the is-there-a-problem-here question only fails us when we use too narrow an interpretation:

  • Not meeting our needs is a problem.  Is there a problem here?  Yes.  We developed the wrong product, a big problem.
  • A product that crashes under a load of 1000 users may actually not be a problem if we only need to support 1 user.  Is there a problem here?  No.

Both are excellent questions.  For me, the will-it-meet-our-needs question is easier to apply and I have a slight bias towards it.  I’ll use them both for balance.

Note: The “Will it meet our needs?” question came to me from a nice Pete Walen article.  The “Is there a problem here?” came to me via Michael Bolton.

I often hear people describe their automated test approach by naming the tool, framework, harness, technology, test runner, or structure/format.  I’ve described mine the same way.  It’s safe.  It’s simple.  It’s established.  “We use Cucumber”.

Lately, I’ve seen things differently.

Instead of trying to pigeonhole each automated check into a tightly controlled format for an entire project, why not design automated checks for each Story, based on the best fit for that Story?

I think this notion comes from my context-driven test schooling.  Here’s an example:

On my current project, we said “let’s write BDD-style automated checks”.  We found it awkward to pigeonhole many of our checks into Given, When, Then.  After eventually dropping the mandate for BDD-style, I discovered the not-as-natural-language style to be easier to read, more flexible, and quicker to author…for some Stories.  Some Stories are good candidates for data-driven checks authored via Excel.  Some might require manual testing with a mocked product…computer-assisted exploratory testing…another use of automation.  Other Stories might test better using non-deterministic automated diffs.
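To illustrate the contrast, here is roughly what the same check might look like in both styles.  The feature, step names, and helper function are made up for this sketch; they are not our real checks.

    # BDD-style phrasing, as it might appear in a feature file:
    #   Given an expense report over the approval limit
    #   When the report is submitted
    #   Then it is routed to a manager
    #
    # The same check in a plainer, not-as-natural-language style (pytest):

    def route_expense_report(amount, approval_limit=10000):
        # Stand-in for the product logic; invented for this sketch.
        return "manager-approval" if amount > approval_limit else "auto-approved"

    def test_expense_report_over_limit_routes_to_manager():
        assert route_expense_report(10001) == "manager-approval"

    def test_expense_report_at_limit_is_auto_approved():
        assert route_expense_report(10000) == "auto-approved"

A data-driven Story, by contrast, might fit better as a parametrized check fed from a spreadsheet; the point is to pick the format per Story rather than per project.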

Sandboxing all your automated checks into FitNesse might make test execution easier.  But it might stifle test innovation.


