About five years ago, my tester friend, Alex Kell, blew my mind by cockily declaring, “Why would you ever log a bug? Just send the Story back.”
My dev team uses a Kanban board that includes “In Testing” and “In Development” columns. Sometimes bug reports are created against Stories. But other times Stories are just sent left. For example, a Story “In Testing” may have its status changed back to “In Development”, like Alex Kell’s maneuver above. This is normally done using the Dead Horse When-To-Stop-A-Test Heuristic. We could also send an “In Development” Story left if we decide the business rules need to be firmed up before coding can continue.
So how does one know when to log a bug report vs. send it left?
I proposed the following heuristic to my team today:
If the Acceptance Test Criteria (listed on the Story card) are violated, send it left. It seems to me that logging a bug report for something already stated in the Story (e.g., Feature, Work Item, Spec) is mostly a waste of time.
While reading Duncan Nisbet’s TDD For Testers article, I stumbled on a neat term he used, “follow-on journey”.
For me, the follow-on journey is a test idea trigger for something I otherwise would have just called regression testing. I guess “follow-on journey” falls under the umbrella of regression testing, but it’s more specific and helps me quickly consider the next best tests I might execute.
Here is a generic example:
Your e-commerce product-under-test has a new interface that allows users to enter sales items into inventory by scanning their barcodes. Detailed specs describe the business logic that must execute to populate each sales item when its barcode is scanned. After testing the new sales item input process, we should consider testing the follow-on journey: what happens when we order sales items ingested via the new barcode scanner?
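To make the idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the `Inventory` and `OrderService` classes are invented stand-ins for the product-under-test); it just shows how a follow-on journey check extends the check of the new feature itself:

```python
# Hypothetical, minimal in-memory model of the product-under-test,
# invented purely to illustrate the "follow-on journey" test idea.

class Inventory:
    def __init__(self):
        self._items = {}

    def scan_barcode(self, barcode, name, price):
        # The new feature under test: ingest a sales item by barcode.
        self._items[barcode] = {"name": name, "price": price}

    def get(self, barcode):
        return self._items.get(barcode)


class OrderService:
    def __init__(self, inventory):
        self._inventory = inventory

    def order(self, barcode, quantity):
        # The next step in the item's life after ingestion.
        item = self._inventory.get(barcode)
        if item is None:
            raise KeyError(f"unknown barcode: {barcode}")
        return {"name": item["name"], "total": item["price"] * quantity}


# Test 1: the new feature itself -- scanning populates the item.
inventory = Inventory()
inventory.scan_barcode("0012345", "Coffee Mug", 7.50)
assert inventory.get("0012345")["name"] == "Coffee Mug"

# Test 2: the follow-on journey -- does the scanned item behave
# correctly in the *next* step of its journey (being ordered)?
order = OrderService(inventory).order("0012345", quantity=2)
assert order["total"] == 15.0
```

The second check is the follow-on journey: instead of stopping once the scan works, it follows the affected object into the next place it travels.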
I used said term to communicate test planning with another tester earlier today. The mental image of an affected object’s potential journeys helped us leap to some cool tests.
This efficiency didn’t occur to me until recently. I was doing an exploratory test session and documenting my tests via Rapid Reporter. My normal process had always been to document the test I was about to execute…
TEST: Edit element with unlinked parent
…execute the test. Then write “PASS” or “FAIL” after it like this…
TEST: Edit element with unlinked parent – PASS
But it occurred to me that if a test appears to fail, I tag said failure as a “Bug”, “Issue”, “Question”, or “Next Time”. As long as I do that consistently, there is no need to add “PASS” or “FAIL” to the documented tests. While debriefing about my tests post session, the assumption will be that the test passed unless indicated otherwise.
Even though it felt like going to work without pants, after a few more sessions it turned out that not resolving each test to “PASS” or “FAIL” reduced administrative time and caused no ambiguity during test reviews. Cool!
Wait. It gets better.
On further analysis, resolving all my tests to “PASS” or “FAIL” may have been preventing me from doing actual testing. It was influencing me to frame everything as a check. Real testing does not have to result in “PASS” or “FAIL”. If I didn’t know what was supposed to happen after editing an element with an unlinked parent (as in the above example), then it didn’t really “PASS” or “FAIL”, right? However, I may have learned something important nevertheless, which made the test worth doing…I’m rambling.
The bottom line is, maybe you don’t need to indicate “PASS” or “FAIL”. Try it.
Which question is more important for testers to ask: “Is there a problem here?” or “Will it meet our needs?”
Let’s say you are an is-there-a-problem-here tester:
- This calculator app works flawlessly as far as I can tell. We’ve tested everything we can think of that might not work and everything we can think of that might work. There appear to be no bugs. Is there a problem here? No.
- This mileage tracker app crashes under a load of 1000 users. Is there a problem here? Yes.
But might the is-there-a-problem-here question get us into trouble sometimes?
- This calculator app works flawlessly…but we actually needed a contact list app.
- This mileage tracker app crashes under a load of 1000 users but only 1 user will use it.
Or perhaps the is-there-a-problem-here question only fails us when we use too narrow an interpretation:
- Not meeting our needs is a problem. Is there a problem here? Yes. We developed the wrong product, a big problem.
- A product that crashes under a load of 1000 users may actually not be a problem if we only need to support 1 user. Is there a problem here? No.
Both are excellent questions. For me, the will-it-meet-our-needs question is easier to apply, and I have a slight bias towards it. I’ll use them both for balance.
Note: The “Will it meet our needs?” question came to me from a nice Pete Walen article. The “Is there a problem here?” came to me via Michael Bolton.
I often hear people describe their automated test approach by naming the tool, framework, harness, technology, test runner, or structure/format. I’ve described mine the same way. It’s safe. It’s simple. It’s established. “We use Cucumber”.
Lately, I’ve seen things differently.
Instead of trying to pigeonhole every automated check into a tightly controlled format for an entire project, why not design the automated checks for each Story based on their best fit for that Story?
I think this notion comes from my context-driven test schooling. Here’s an example:
On my current project, we said, “let’s write BDD-style automated checks”. We found it awkward to pigeonhole many of our checks into Given, When, Then. After eventually dropping the mandate for BDD style, I discovered the not-as-natural-language style to be easier to read, more flexible, and quicker to author…for some Stories. Some Stories are good candidates for data-driven checks authored via Excel. Some might require manual testing with a mocked product…computer-assisted exploratory testing…another use of automation. Other Stories might test better using non-deterministic automated diffs.
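To show the contrast, here is a hedged sketch of the same check written both ways. The product function `apply_discount` is invented for illustration; the point is only the difference in ceremony between the two styles:

```python
# Hypothetical product code, defined here only so the checks can run.
def apply_discount(price, percent):
    return round(price * (1 - percent / 100), 2)


# BDD style: the Given/When/Then scaffolding made explicit.
def test_discount_bdd_style():
    # Given a cart item priced at 100.00
    price = 100.00
    # When a 15% discount is applied
    discounted = apply_discount(price, 15)
    # Then the price is 85.00
    assert discounted == 85.00


# Plain ("not-as-natural-language") style: the same check, less ceremony.
def test_discount_plain_style():
    assert apply_discount(100.00, 15) == 85.00


test_discount_bdd_style()
test_discount_plain_style()
```

For a simple arithmetic rule like this one, the plain style reads just as clearly; for a Story whose value is a user-visible behavior narrative, the Given/When/Then framing may earn its keep.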
Sandboxing all your automated checks into FitNesse might make test execution easier. But it might stifle test innovation.
…may not be a good way to start testing.
I heard a programmer use this metaphor to describe the testing habits of a tester he had worked with.
As a tester, taking all test input variables to their extremes may be an effective way to find bugs. However, it may not be an effective way to report bugs. Skilled testers will repeat the same test until they isolate the minimum variable(s) that cause the bug. Or, using this metaphor, they may repeat the same test with all levels on the mixing board pulled down except the one they are interested in observing.
Once the culprit is identified, the skilled tester will repeat the test changing only that isolated variable, and accurately predict a pass or fail result.
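The mixing-board isolation loop can be sketched in a few lines of Python. The `place_order` function and its inputs are hypothetical, rigged so that only one “level” triggers the failure:

```python
# Hypothetical function under test: it misbehaves only when quantity
# hits an extreme, regardless of the other inputs. (Names invented.)
def place_order(quantity, price, discount):
    if quantity > 999:                     # the hidden bug trigger
        return None                        # simulated failure
    return round(quantity * price * (1 - discount), 2)


# "Pull every level on the mixing board down": safe baseline inputs.
BASELINE = {"quantity": 1, "price": 1.0, "discount": 0.0}
# The "turned to the max" setting for each level.
EXTREMES = {"quantity": 1000, "price": 9_999_999.99, "discount": 1.0}

# Raise one level at a time and repeat the same test, isolating the
# minimum variable that reproduces the bug.
isolated = []
for name in BASELINE:
    inputs = dict(BASELINE, **{name: EXTREMES[name]})
    if place_order(**inputs) is None:      # the test failed
        isolated.append(name)

print("bug reproduces when raising only:", isolated)
```

Starting with every input at its extreme would have found the failure too, but the one-level-at-a-time repeat is what turns “something broke” into a reportable, predictable bug.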
Dear Test Automators,
The next time you discuss automation results, please consider qualifying the context of the word “bug”.
If automation fails, it means one of two things:
- There is a bug in the product-under-test.
- There is a bug in the automation.
The former is waaaaaay more important than the latter. Maybe not to you, but certainly to your audience.
Instead of saying,
“This automated check failed”,
try saying,
“This automated check failed because of a bug in the product-under-test”.
Instead of saying,
“I’m working on a bug”,
try saying,
“I’m working on a bug in the automation”.
Your world is arguably more complex than that of testers who don’t use automation. You must test twice as many programs (the automation and the product-under-test). Please consider being precise when you communicate.