Acceptance Criteria. When User Stories have Acceptance Criteria (or Acceptance Tests), they can help us plan our exploratory and automated testing. But they can only go so far.
Four distinct Acceptance Criteria do not dictate four distinct test cases, automated or manual.
Here are three flavors of Acceptance Criteria abuse I’ve seen:
- Skilled testers use Acceptance Criteria as a warm-up, a means of getting better test ideas for deeper and wider coverage. The better test ideas are what need to be captured (by the tester) in the test documentation...not the Acceptance Criteria. The Acceptance Criteria is already captured, right? Don’t recapture it (see below). More importantly, try not to stop testing just because the Acceptance Criteria passes. Now that you’ve interacted with the product-under-test, what else can you think of?
- The worst kind of testing is when testers copy Acceptance Criteria from User Stories, paste them into a test case management tool, and resolve each to Pass/Fail. Why copy them at all? If you must resolve them to Pass/Fail, why not just write “Pass” or “Fail” next to the Acceptance Criteria in the User Story? Otherwise you have two sources: someone will revise the Acceptance Criteria in the User Story, and the copy in your test case management tool will go stale.
- You don’t need to visually indicate that each of your distinct Acceptance Criteria has Passed or Failed. Your Agile team probably has a definition of “Done” that includes all Acceptance Criteria passing, so if the User Story is marked Done, it means all the Acceptance Criteria passed. We will never open a completed User Story and ask, “Which Acceptance Criteria passed or failed?”
Don’t Bother Indicating “Pass” or “Fail”
Posted by Eric Jacobson at Tuesday, August 05, 2014
This efficiency didn’t occur to me until recently. I was doing an exploratory test session and documenting my tests via Rapid Reporter. My normal process had always been to document the test I was about to execute…
TEST: Edit element with unlinked parent
…execute the test. Then write “PASS” or “FAIL” after it like this…
TEST: Edit element with unlinked parent – PASS
But it occurred to me that if a test appears to fail, I tag said failure as a “Bug”, “Issue”, “Question”, or “Next Time”. As long as I do that consistently, there is no need to add “PASS” or “FAIL” to the documented tests. While debriefing about my tests post session, the assumption will be that the test passed unless indicated otherwise.
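For what it’s worth, here’s a rough sketch of how that “assume pass unless tagged otherwise” idea could be applied mechanically at debrief time. It assumes session notes are plain “TAG: text” lines like the examples above (Rapid Reporter’s real output format may differ), and the little script is something I invented for illustration:

```python
# Sketch: summarize session notes, assuming a test passed unless a
# failure-ish tag (BUG, ISSUE, QUESTION, NEXT TIME) follows it.
FAILURE_TAGS = {"BUG", "ISSUE", "QUESTION", "NEXT TIME"}

def summarize(note_lines):
    tests = []  # (test description, follow-up concerns)
    for line in note_lines:
        tag, _, text = line.partition(":")
        tag, text = tag.strip().upper(), text.strip()
        if tag == "TEST":
            tests.append((text, []))
        elif tag in FAILURE_TAGS and tests:
            tests[-1][1].append(f"{tag}: {text}")
    for description, concerns in tests:
        print(f"{description} -> {'; '.join(concerns) or 'assumed PASS'}")

summarize([
    "TEST: Edit element with unlinked parent",
    "TEST: Edit element with linked parent",
    "BUG: Parent link was silently dropped",
])
```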
Even though it felt like going to work without pants, after a few more sessions it turned out that not resolving tests to “PASS” or “FAIL” reduces administrative time and causes no ambiguity during test reviews. Cool!
Wait. It gets better.
On further analysis, resolving all my tests to “PASS” or “FAIL” may have prevented me from actual testing. It was influencing me to frame everything as a check. Real testing does not have to result in “PASS” or “FAIL”. If I didn’t know what was supposed to happen after editing an element with an unlinked parent (as in the above example), well then it didn’t really “PASS” or “FAIL”, right? However, I may have learned something important nevertheless, which made the test worth doing…I’m rambling.
The bottom line is, maybe you don’t need to indicate “PASS” or “FAIL”. Try it.
Is there a name for a test that checks how gracefully our product copes with rare failures and outages? If not, I’m going to call it a “fire drill test”.
- A fire drill test would typically not be automated because it will probably only be used once.
- A fire drill test informs product design, so it may be worth executing early.
- A fire drill test might be a good candidate to delegate to a project team programmer.
Fire drill test examples:
- Our product ingests files from an ftp site daily. What if the files are not available for three days? Can our product catch up gracefully?
- Our product outputs a file to a shared directory. What if someone removes write permission to the shared directory for our product?
- Our product uses a nightly job to process data. If the nightly job fails due to off-hour server maintenance, how will we know? How will we recover?
- Our product displays data from an external web service. What happens if the web service is down?
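To make that last example concrete, here’s a minimal sketch of how I might simulate “the web service is down” rather than waiting for a real outage. The product function, URL, and fallback value are all hypothetical, and it assumes Python’s requests library:

```python
# Sketch: force "the web service is down" for a hypothetical product function.
# Every name and URL here is made up; assumes the requests library is installed.
from unittest import mock

import requests

def fetch_widget_status():
    """Hypothetical product code that calls an external web service."""
    try:
        response = requests.get("https://example.com/api/widget-status", timeout=5)
        response.raise_for_status()
        return response.json()["status"]
    except requests.exceptions.RequestException:
        return "unavailable"  # the graceful fallback we hope exists

def test_widget_status_when_service_is_down():
    # Simulate the outage instead of waiting for one.
    with mock.patch("requests.get",
                    side_effect=requests.exceptions.ConnectionError):
        assert fetch_widget_status() == "unavailable"
```

Run it once, note what you learn, and throw it away if you like; the point is the drill, not the artifact.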
Too often, we testers have so much functional testing to do that we overlook the non-functional testing or save it for the end. If we give these non-functional tests a catchy name like “Fire Drill Test”, maybe it will help us remember them during test brainstorming.
When Poor Test Documentation Hurts
Posted by Eric Jacobson at Thursday, December 19, 2013
I would much rather test than create test documents. Adam White once told me, “If you’re not operating the product, you’re not testing”.
It’s soooooo easy to skip all documentation and dive right in to the testing. It normally results in productive testing, and nobody misses the documents. Until…three years later, when the programmer makes a little change to a module that hasn’t been tested since. The team says the change is high risk and asks you which tests you executed three years ago and how long they took.
Fair questions. I think we, as testers, should be able to answer. Even the most minimal test documentation (e.g., test fragments written in notepad) should be able to answer those questions.
If we can’t answer relatively quickly, we may want to consider recording better test documentation.
If you made two lists for a given software feature (or user story):
- all the plausible user scenarios you could think of
- all the implausible user scenarios you could think of
…which list would be longer?
I’m going to say the latter. The user launches the product, holds down all the keys on the keyboard for four months, removes all the fonts from their OS, then attempts to save a value at the exact same time as one million other users. One can determine implausible user scenarios without obtaining domain knowledge.
Plausible scenarios should be easier to predict, by definition. It may be that only one out of 100 users strays from the “happy path”, in which case our product may have just experienced an implausible scenario.
What does this have to do with testing? As time becomes dearer, I continue to refine my test approach. It seems to me, the best tests to start with are still confirmatory (some call these “happy path”) tests. There are fewer of them, which makes it more natural to know when to start executing the tests for the scenarios less likely to occur.
The chart above is my attempt to illustrate the test approach model I have in my head. The Y axis is how plausible the test is (e.g., it is 100% likely that users will do this, it is 50% likely that users will do this). The X axis represents the test order (e.g., 1st test executed, 2nd test executed, etc.). The number of tests executed is relative.
Basically, I start with the most plausible tests, then shift my focus to the stuff that will rarely happen. These rare scenarios at the bottom of the chart above can continue forever as you move toward 0% plausibility, so I generally use the “Times Up” stopping heuristic. One can better tackle testing challenges with this model if one makes an effort to determine how users normally use the product.
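If it helps, here’s a tiny sketch of the model as it runs in my head: sort by estimated plausibility, work down the list, and stop when time is up. The test ideas and plausibility numbers are invented for illustration:

```python
# Sketch: run the most plausible test ideas first, stop when the
# time box expires ("Times Up" stopping heuristic).
import time

test_ideas = [  # (description, estimated plausibility)
    ("save a record the normal way", 1.00),
    ("save with one optional field blank", 0.50),
    ("save while a second user edits the same record", 0.10),
    ("save after the session has silently expired", 0.02),
]

def run_session(ideas, minutes):
    deadline = time.monotonic() + minutes * 60
    for description, plausibility in sorted(ideas, key=lambda i: -i[1]):
        if time.monotonic() >= deadline:
            print("Times up.")
            break
        print(f"[{plausibility:.0%} plausible] testing: {description}")
        # ...execute the test idea here...

run_session(test_ideas, minutes=90)
```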
I often hear contradictory voices in my head saying, “don’t start with confirmatory tests, the bugs are off the beaten path”. Okay, but are they really? If our definition of a bug is “something that bugs someone who matters”, then the problems I find at the bottom of the above chart’s line may matter less than those found at the top. Someone who matters may not venture to the bottom.
For more on my thoughts (and contrary thoughts) on this position see We Test To Find Out If Software *Can* Work.
We had a seemingly easy feature to test: users should be able to rearrange columns on a grid. My test approach was to just start rearranging columns at random.
My colleague’s test approach was different. She gave herself a nonsensical user scenario to complete. Her scenario was to rearrange all the columns to appear in alphabetical order (by column header label) from left to right. Pretty stupid, I thought to myself. Will users ever do that? No. And it seems like a repetitive waste of time.
Since I had flat-lined with my own approach, I tried her nonsensical user scenario myself…figured I’d see how stupid it was. As I progressed through the completion of the nonsensical user scenario, it started opening test case doors:
- I’m getting good at this rearranging column thing, maybe I can go faster…wait a minute, what just happened?
- I’ve done this step so many times, maybe I can pay more attention to other attributes like the mouse cursor…oh, that’s interesting.
- There’s no confusion about what order I’ve placed the columns in, now I can easily check that they remained in that order.
- I’m done with letter “E”. I think I saw a column starting with a letter “F” off the screen on the far right. I’m going to have to use the horizontal scroll bar to get over there. What happens when I drag my “F” column from the right to the left and then off the screen?
Now I get it! The value in her nonsensical user scenario was to discover test cases she may not have otherwise discovered. And she did. She found problems placing a column halfway between the left-most and right-most columns.
A nonsensical user scenario gives us a task to go perform on the system under test. Having this task may open more doors than mere random testing.
Peace Of Mind Without Detailed Test Cases
Posted by Eric Jacobson at Monday, May 21, 2012
In reference to my When Do We Need Detailed Test Cases? post, Roshni Prince asked:
“when we run multiple tests in our head… [without using detailed test cases] …how can we be really sure that we tested everything on the product by the end of the test cycle?”
Nice question, Roshni. I have two answers. The first takes your question literally.
- …We can’t. We’ll never test everything by the end of the test cycle. Heck, we’ll never test everything in an open-ended test cycle. But who cares? That’s not our goal.
- Now I’ll answer what I think you are really asking, which is “without detailed test cases, how can we be sure of our test coverage?”. We can’t be sure, but IMO, we can get close enough using one or more of the following approaches:
- Write “test ideas” (AKA test case fragments). These should be less than the size of a Tweet. These are faster than detailed test cases to write/read/execute and more flexible.
- Use Code Coverage software to visually analyze test coverage.
- Build a test matrix using Excel or another table (a rough sketch of this appears after this list).
- Use a mind map to write test ideas. Attach it to your specs for an artifact.
- Use a Session Based Test Management tool like Rapid Reporter to record test notes as you test.
- Use a natural method of documenting test coverage. By “natural” we mean something that will not add extra administrative work. Regulatory compliance expert and tester Griffin Jones has used audio and/or video recordings of test sessions to pass rigorous audits. He burns these to DVD and has rock solid coverage information without the need for detailed test cases. Another approach is to use keystroke capture software.
- Finally, my favorite when circumstances allow: just remember! That’s right, just use your brain to remember what you tested. Brains rock! Brains are so underrated by our profession. This approach may help you shine when people are more interested in getting test results quickly and you only need to answer questions about what you tested in the immediate future…like today! IMO, the more you enjoy your work as a tester, the more you practice testing, and the more you describe your tests to others, the better you’ll recall test coverage from your brain. And brains record way more than any detailed test cases could ever hope to.
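As a small example of the test matrix idea above, something this simple (the features, test ideas, and marks are all made up) can be enough of a coverage record, and the CSV opens right up in Excel:

```python
# Sketch: a bare-bones coverage matrix written as CSV.
# Features, test ideas, and coverage marks are invented for illustration.
import csv

test_ideas = ["happy path", "bad input", "permissions", "concurrency"]
coverage = {
    "New order":    {"happy path": "x", "bad input": "x"},
    "Edit order":   {"happy path": "x", "permissions": "x"},
    "Cancel order": {},
}

with open("coverage_matrix.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["feature"] + test_ideas)
    for feature, marks in coverage.items():
        writer.writerow([feature] + [marks.get(idea, "") for idea in test_ideas])
```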
In my Don’t Give Test Cases To N00bs post I tried to make the argument against writing test cases as a means of coaching new testers. At the risk of sounding like a test case hater, I would like to suggest three contexts that may benefit from detailed test cases.
These contexts do not include the case of a mandate (e.g., the stakeholder requires detailed test cases and you have no choice).
- Automated Check Design: Whether a sapient tester is designing an automated check for an automation engineer or an automation engineer is designing the automated check herself, detailed test cases may be a good idea. Writing detailed test cases will force tough decisions to be made prior to coding the check. Decisions like: How will I know if this check passes? How will I ensure this check’s dependent data exists? What state can I expect the product-under-test to be in before the check’s first action? (I sketch what those decisions might look like in code below.)
- Complex Business Process Flows: If your product-under-test supports multiple ways of accomplishing each step in its business process flows, you may want to spec out each test to keep track of test coverage. Example: Your product’s process to buy a new widget requires 3 steps. Each of the 3 steps has 10 options. Test1 may be: perform Step1 with Option4, perform Step2 with Option1, then perform Step3 with Option10.
- Bug Report Repro Steps: Give those programmers the exact footprints to follow, or else they’ll reply, “works on my box”.
Those are the three contexts I write detailed test cases for. What about you?
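Here’s a rough sketch of what I mean by the first context. The little in-memory “product” and all of its names are stand-ins I invented for illustration; the point is that the check’s dependent data, its starting state, and its pass condition all get decided before (or while) the check is coded:

```python
# Sketch: the tough design decisions behind an automated check, made explicit.
# The "product" is a tiny in-memory stand-in; every name here is invented.
import pytest

class FakeWidgetStore:
    """Stand-in for the product-under-test."""
    def __init__(self):
        self.widgets = {}

    def create(self, name):
        widget_id = len(self.widgets) + 1
        self.widgets[widget_id] = name
        return widget_id

    def rename(self, widget_id, name):
        self.widgets[widget_id] = name

    def get_name(self, widget_id):
        return self.widgets[widget_id]

@pytest.fixture
def store():
    return FakeWidgetStore()

@pytest.fixture
def known_widget(store):
    # Decision: how will I ensure this check's dependent data exists?
    return store.create("check-fixture-widget")

def test_renaming_a_widget_persists(store, known_widget):
    # Decision: what state can I expect before the check's first action?
    assert store.get_name(known_widget) == "check-fixture-widget"

    store.rename(known_widget, "renamed-widget")

    # Decision: how will I know if this check passes?
    assert store.get_name(known_widget) == "renamed-widget"
```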
In response to my What I Love About Kanban As A Tester #1 post, Anonymous stated:
“The whole purpose of documenting test cases…[is]…to be able to run [them] by testers who don’t have required knowledge of the functionality.”
Yeah, that’s what most of my prior test managers told me, too…
“if a new tester has to take over your testing responsibilities, they’ll need test cases”
I wouldn’t be surprised if a secret QA manager handbook went out to all QA managers, stating the above as the paramount purpose of test cases. It was only recently that I came to understand how wrong all those managers were.
Before I go on, let me clarify what I mean by “test cases”. When I say “test cases”, I’m talking about something with steps, like this:
- Drag ItemA from the catalog screen to the new order screen.
- Change the item quantity to “3” on the new order screen.
- Click the “Submit Order” button.
Here’s where I go on:
- When test cases sit around, they get stale. Everything changes…except your test cases. Giving these to n00bs is likely to result in false fails (and maybe even rejected bug reports).
- When test cases are blindly followed, we miss the house burning down right next to the house that just passed our inspection.
- When test cases are followed, we are only doing confirmatory testing. Even negative (AKA “unhappy”) paths are confirmatory testing. If that’s all we can do, we are one step closer to shutting down our careers as testers.
- Testing is waaaay more than following steps. To channel Bolton, a test is something that goes on in your brain. Testing is more than answering the question, “pass or fail?”. Testing is sometimes answering the question, “Is there a problem here?”.
- If our project mandates that testers follow test cases, for Pete’s sake, let the n00bs write their own test cases. It may force them to learn the domain.
- Along with test cases comes administrative work. Perhaps time is better spent testing.
- If the goal is valuable testing from the n00b, wouldn’t that best be achieved by the lead tester coaching the n00b? And if that lead tester didn’t have to write test cases for a hypothetical n00b, wouldn’t that lead tester have more time to coach the hypothetical n00b, should she appear? Here’s a secret: she never will appear. You will have a stack of test cases that nobody cares about; not even your manager.
In my next post I’ll tell you when test cases might be a good idea.