Test Planning Is Throwaway, Testing Is Forever
Posted by Eric Jacobson at Monday, February 29, 2016
FeatureA will be ready to test soon. You may want to think about how you will test FeatureA. Let’s call this activity “Test Planning”. In Test Planning, you are not actually interacting with the product-under-test. You are thinking about how you might do it. Your Test Planning might include, but is not limited to, the following:
- Make a list of test ideas you can think of. A Test Idea is the smallest amount of information that can capture the essence of a test.
- Grok FeatureA: Analyze the requirements document. Talk to available people.
- Interact with the product-under-test before it includes FeatureA.
- Prepare the test environment data and configurations you will use to test.
- Note any specific test data you will use.
- Determine what testing you will need help with (e.g., testing someone else should do).
- Determine what not to test.
- Share your test plan with anyone who might care. At least share the test ideas (first bullet) with the product programmers while they code.
- If using automation, design the check(s). Stub them out.
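As an aside on that last bullet, here is what stubbed-out checks might look like. This is a minimal sketch assuming pytest; the check names and FeatureA's behaviors are invented placeholders:

```python
import pytest

# Test ideas for FeatureA, captured as stubs during Test Planning.
# The names hold the essence of each test; the bodies come later,
# once FeatureA is actually available to interact with.

@pytest.mark.skip(reason="stub from test planning; FeatureA not testable yet")
def test_feature_a_accepts_valid_input():
    ...

@pytest.mark.skip(reason="stub from test planning; FeatureA not testable yet")
def test_feature_a_rejects_duplicate_records():
    ...

@pytest.mark.skip(reason="stub from test planning; FeatureA not testable yet")
def test_feature_a_survives_a_restart():
    ...
```

When FeatureA arrives, the skip markers come off and the bodies get filled in.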
All the above are Test Planning activities. About four of the above resulted in something you wrote down. If you wrote them in one place, you have an artifact. The artifact can be thought of as a Test Plan. As you begin testing (interacting with the product-under-test), I think you can use the Test Plan one of two ways:
- Morph it into “Test Notes” (or “Test Results”).
- Refer to it then throw it away.
Either way, we don’t need the Test Plan after the testing, just as we don’t need the output of the other Test Planning activities above. Plans are more useful before the thing they plan.
Execution is more valuable than a plan. A goal of a skilled tester is to report on what was learned during testing. The Test Notes are an excellent way to do this. Attach the Test Notes to your User Story. Test Planning is throwaway.
The Value of Merely Imagining a Test – Part 1
Posted by Eric Jacobson at Thursday, November 12, 2015
An important bug escaped into production this week. The root cause analysis took us to the usual place: “If we had more test time, we would have caught it.”
I’ve been down this road so many times, I’m beginning to see things differently. No, even with more test time we probably would not have caught it. Said bug would have only been caught via a rigorous end-to-end test that would have arguably been several times more expensive than this showstopper production bug will be to fix.
Our reasonable end-to-end tests include so many fakes (to simulate production) that their net just isn’t big enough.
However, I suspect a mental end-to-end walkthrough, without fakes, may have caught the bug. And possibly, attention to the “follow-through” may have been sufficient. The “follow-through” is a term I first heard Microsoft’s famous tester, Michael Hunter, use. The “follow-through” is what might happen next, per the end state of some test you just performed.
Let’s unpack that. Pick any test; let’s say you test a feature that allows a user to add a product to an online store. You test the hell out of it until you reach a stopping point. What’s the follow-on test? The follow-on test is to see what can happen to that product once it has been added to the online store. You can buy it, you can delete it, you can let it get stale, you can discount it, etc. I’m thinking nearly every test has several follow-on tests.
About five years ago, my tester friend, Alex Kell, blew my mind by cockily declaring, “Why would you ever log a bug? Just send the Story back.”
Okay.
My dev team uses a Kanban board that includes “In Testing” and “In Development” columns. Sometimes bug reports are created against Stories. But other times Stories are just sent left; for example, a Story “In Testing” may have its status changed to “In Development”, like Alex Kell’s maneuver above. This is normally done using the Dead Horse When-To-Stop-A-Test Heuristic. We could also send an “In Development” story left if we decide the business rules need to be firmed up before coding can continue.
So how does one know when to log a bug report vs. send it left?
I proposed the following heuristic to my team today:
If the Acceptance Test Criteria (listed on the Story card) are violated, send it left. It seems to me, logging a bug report for something already stated in the Story (e.g., Feature, Work Item, Spec) is mostly a waste of time.
Thoughts?
Test This #8 - The Follow-On Journey
Posted by Eric Jacobson at Thursday, August 21, 2014
While reading Duncan Nisbet’s TDD For Testers article, I stumbled on a neat term he used, “follow-on journey”.
For me, the follow-on journey is a test idea trigger for something I otherwise would have just called regression testing. I guess “Follow-on journey” would fall under the umbrella of regression testing but it’s more specific and helps me quickly consider the next best tests I might execute.
Here is a generic example:
Your e-commerce product-under-test has a new interface that allows users to enter sales items into inventory by scanning their barcode. Detailed specs provide us with lots of business logic that must take place to populate each sales item upon scanning its barcode. After testing the new sales item input process, we should consider testing the follow-on journey; what happens if we order sales items ingested via the new barcode scanner?
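To make that concrete in code, here is a rough sketch of a follow-on journey check, assuming Python’s unittest; the Store class is a made-up stand-in for the real product’s API:

```python
import unittest

class Store:
    """Made-up stand-in for the e-commerce product's API."""
    def __init__(self):
        self.inventory = {}

    def ingest_by_barcode(self, barcode):
        # Imagine the spec'd business logic populating the item here.
        self.inventory[barcode] = {"barcode": barcode, "orderable": True}

    def order(self, barcode):
        return self.inventory[barcode]["orderable"]

class FollowOnJourneyTest(unittest.TestCase):
    def test_order_an_item_ingested_via_barcode(self):
        store = Store()
        # The feature-under-test: ingest a sales item by scanning it.
        store.ingest_by_barcode("0123456789012")
        # The follow-on journey: what can happen to that item next?
        # Here: can it actually be ordered?
        self.assertTrue(store.order("0123456789012"))

if __name__ == "__main__":
    unittest.main()
```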
I used said term while discussing test planning with another tester earlier today. The mental image of an affected object’s potential journeys helped us leap to some cool tests.
“Bug In The Test” vs. “Bug In The Product”
Posted by Eric Jacobson at Wednesday, July 09, 2014
Dear Test Automators,
The next time you discuss automation results, please consider qualifying the context of the word “bug”.
If automation fails, it means one of two things:
- There is a bug in the product-under-test.
- There is a bug in the automation.
The former is waaaaaay more important than the latter. Maybe not to you, but certainly for your audience.
Instead of saying,
“This automated check failed”,
consider saying,
“This automated check failed because of a bug in the product-under-test”.
Instead of saying,
“I’m working on a bug”,
consider saying,
“I’m working on a bug in the automation”.
Your world is arguably more complex than that of testers who don’t use automation. You must test twice as many programs (the automation and the product-under-test). Please consider being precise when you communicate.
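One way to bake this distinction into the automation itself: Python’s unittest already separates FAILURES (an assertion did not hold, so a candidate bug in the product-under-test) from ERRORS (the harness itself blew up, so likely a bug in the automation). A minimal sketch; fetch_order_total is a hypothetical stand-in for a call into the product:

```python
import unittest

def fetch_order_total(order_id):
    """Hypothetical client call into the product-under-test."""
    return 90.00  # stand-in value; imagine a real API call here

class ProductChecks(unittest.TestCase):
    def test_total_includes_discount(self):
        # Anything that blows up before the assertion (dead environment,
        # bad test data, a typo in the harness) is reported by unittest
        # as an ERROR: most likely a bug in the automation.
        total = fetch_order_total(order_id=42)

        # The assertion encodes the decision rule about the product.
        # When it does not hold, unittest reports a FAILURE: a candidate
        # bug in the product-under-test, worth a human look.
        self.assertEqual(total, 90.00)

if __name__ == "__main__":
    unittest.main()
```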
Is there a name for a test that simulates a rare emergency scenario? If not, I’m going to call it a “fire drill test”.
- A fire drill test would typically not be automated because it will probably only be used once.
- A fire drill test informs product design so it may be worth executing early.
- A fire drill test might be a good candidate to delegate to a project team programmer.
Fire drill test examples:
- Our product ingests files from an ftp site daily. What if the files are not available for three days? Can our product catch up gracefully?
- Our product outputs a file to a shared directory. What if someone removes write permission to the shared directory for our product?
- Our product uses a nightly job to process data. If the nightly job fails due to off-hour server maintenance, how will we know? How will we recover?
- Our product displays data from an external web service. What happens if the web service is down?
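As a sketch of how the last example might look as a one-off check, assuming Python’s unittest.mock (dashboard_status is an invented stand-in for real product code):

```python
import unittest
from unittest import mock
import urllib.request
import urllib.error

def dashboard_status():
    """Invented stand-in for product code that calls an external web service."""
    try:
        with urllib.request.urlopen("http://example.com/rates") as response:
            return response.read().decode()
    except urllib.error.URLError:
        return "Rates temporarily unavailable"

class FireDrillTest(unittest.TestCase):
    def test_product_survives_downed_web_service(self):
        # Simulate the fire: the external web service is down.
        with mock.patch("urllib.request.urlopen",
                        side_effect=urllib.error.URLError("service down")):
            # The product should degrade gracefully, not crash.
            self.assertEqual(dashboard_status(), "Rates temporarily unavailable")

if __name__ == "__main__":
    unittest.main()
```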
Too often, we testers have so much functional testing to do, we overlook the non-functional testing or save it for the end. If we give these non-functional tests a catchy name like “Fire Drill Test”, maybe it will help us remember them during test brainstorming.
“Exploring” vs. Checking Almost Did It For Me
Posted by Eric Jacobson at Thursday, August 29, 2013
After watching Elisabeth Hendrickson’s CAST 2012 Keynote (I think), I briefly fell in love with her version of the “checking vs. testing” terminology. She says “checking vs. exploring” instead.
I love the simplicity. I imagine when used in public, most people can follow: “exploring” is a testing activity that can only be performed by humans, “checking” is a testing activity that is best performed by machines. And the beauty of said terms is…they’re both testing!!! Yes, automation engineers, all the cool stuff you build can still be called testing.
The thing I’ve always found awkward about the Michael Bolton/James Bach “checking vs. testing” terminology is accepting that tests or testing can NOT be automated. Hendrickson’s version seems void of said awkwardness. She just says “exploring” can NOT be automated…well sure, much easier to swallow.
The problem, I thought, was that James and Michael’s testing definition was too narrow. Surely it could be expanded to include machine checks as testing. Thus, I set out to find common “Testing” definitions that would support my theory. And much to my surprise, I could not. All the definitions I read (e.g., Merriam-Webster’s) described testing as an open-ended investigation…in other words, something that can NOT be automated.
Finally, I have to admit, Hendrickson’s term “exploring” can be ambiguous. It might get confused with Exploratory Testing, which is a specific structured approach, as opposed to Ad Hoc testing, which is unstructured. Hmmm…Elisabeth, if you’re out there, I’m happy to listen to your definitions; perhaps you will change my mind.
So it seems, just when I thought I could finally wiggle away from their painful terminology, I am now squarely back in the James and Michael camp when it comes to “checking vs. testing”.
…Dang!
Last week we had an awesome Tester Lightning Talk session here at my company. Topics included:
- Mind Maps
- Cross-Browser Test Emulation
- How to Bribe Your Developers
- Performance Testing Defined
- Managing Multiple Agile Projects
- Integration Testing Sans Personal Pronouns
- Turning VSTS Test Results Files Into Test Reports
- Getting Back to Work After Leave
- Black Swans And Why Testers Should Care
The “Performance Testing Defined” talk inspired me to put my own twist on it and blog. Here goes…
These performance testing terms are often misused and interchanged. I will paraphrase from my lightning talk notes:
Baseline Testing – Fewer users than we expect in prod. This is like when manual testers perform a user scenario and use a stopwatch to time it. It could also be an automated load test where we are using fewer than the expected number of users to generate load.
Load Testing – The number of users we expect in prod. Real-world scenario. Realistic.
Stress Testing – More users than we expect in prod. An obscene number of users. Used to determine the breaking point. After said test, the tester will be able to say, “With more than 2000 users, the system starts to drag. With 5000 users, the system crashes.”
Stability Testing – Run the test continuously over a period of time (e.g., 24 hours, 1 week) to see if something happens. For example, you may find a memory leak.
Spike Testing – Think TicketMaster. What happens to your system when it suddenly jumps from 100 simultaneous users to 5000 simultaneous users for a short period of time?
There. Now you can talk like a performance tester and help your team discuss their needs.
As far as building these tests, at the most basic level, you really only need one check (AKA automated test). Said check should simulate something user-like, if possible. In the non-web-based world (where I live), this check may be one or more service calls. You probably do not want an automated check at the UI level; you would need an army of clients to generate load. After all, your UI will only have a load of 1 user, right? What you’re concerned with is how the servers handle the load. So your check need only be concerned with the performance before the payload gets handed back to the client.
The check is probably the most challenging part of Performance testing. Once you have your check, the economies of scale begin. You can use that same check as the guts for most of your performance testing. The main variables in each are user load and duration.
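Here is a toy illustration of that one-check-many-profiles idea, using plain Python threads. A real load tool would handle this better, and every number below is invented:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def user_like_check():
    """The one check: something user-like, e.g. a service call.
    A stand-in here; in real life it would hit the SUT's service layer."""
    time.sleep(0.01)  # pretend round trip
    return True

def run(users, seconds):
    """The same check every time; only user load and duration vary."""
    deadline = time.monotonic() + seconds
    passed = 0
    with ThreadPoolExecutor(max_workers=users) as pool:
        while time.monotonic() < deadline:
            passed += sum(pool.map(lambda _: user_like_check(), range(users)))
    return passed

PROFILES = {
    "baseline":  dict(users=10,   seconds=60),     # fewer users than prod
    "load":      dict(users=500,  seconds=60),     # expected prod load
    "stress":    dict(users=5000, seconds=60),     # hunt the breaking point
    "stability": dict(users=500,  seconds=86400),  # keep it running for a day
    "spike":     dict(users=5000, seconds=120),    # sudden short burst
}

if __name__ == "__main__":
    print(run(**PROFILES["baseline"]))
```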
Warning: I’m certainly an amateur when it comes to performance testing. Please chime in with your corrections and suggestions.
One of my tester colleagues and I had an engaging discussion the other day.
If a test failure is not caused by a problem in the system-under-test, should the tester bother to say the test failed?
My position is: No.
If a test fails but there is no problem with the system-under-test, it seems to me it’s a bad test. Fix the test or ignore the results. Explaining that a test failure is nothing to be concerned with gives the project team a net gain of nothing. (Note: if the failure has been published, my position changes; the failure should be explained.)
The context of our discussion was the test automation space. I think test automaters, for some reason, feel compelled to announce automated check failures in one breath, and in the next, explain why these failures should not matter. “Two automated checks failed…but it’s because the data was not as expected, so I’m not concerned” or “ten automated checks are still failing but it’s because something in the system-under-test changed and the automated checks broke…so I’m not concerned”.
My guess is, project teams and stakeholders don’t care if tests passed or failed. They care about what those passes and failures reveal about the system-under-test. See the difference?
Did the investigation of the failed test reveal anything interesting about the system-under-test? If so, share what it revealed. The fact that the investigation was triggered by a bad test is not interesting.
If we’re not careful, Test Automation can warp our behavior. IMO, a good way of understanding how to behave in the test automation space is to pretend your automated checks are sapient (AKA “manual”) tests. If a sapient tester gets different results than they expected, but later realizes their expectations were wrong, they don’t bother to explain their recent revelation to the project team. A sapient tester would not say, “I thought I found a problem, but then I realized I didn’t.” Does that help anyone?
Critical Thinking For Testers with Michael Bolton
Posted by Eric Jacobson at Wednesday, August 15, 2012
After RST class (see my Four Day With Michael Bolton post), Bolton did a short critical thinking for testers workshop. If you get an opportunity to attend one of these at a conference or other place, it’s time well spent. The exercises were great, but I won’t blog about them because I don’t want to give them away. Here is what I found in my notes…
- There are two types of thinking:
- System 1 Thinking – You use it all the time to make quick answers. It works fine as long as things are not complex.
- System 2 Thinking – This thinking is lazy; you have to wake it up.
- If you want to be excellent at testing, you need to use System 2 Thinking. Testing is not a straightforward technical problem because we are creating stuff that is largely invisible.
- Don’t plan or execute tests until you obtain context about the test mission.
- Leaping to assumptions carries risk. Don’t build a network of assumptions.
- Avoid assumptions when:
- critical things depend on it
- when the assumption is unlikely to be true
- the assumption is dangerous when not declared
- Huh? Really? So? (James Bach’s critical thinking heuristic)
- Huh? – Do I really understand?
- Really? – How do I know what you say is true?
- So? – Is that the only solution?
- “Rule of Three” – If you haven't thought of at least three plausible explanations, you’re not thinking critically enough.
- Verbal Heuristics: Words to help you think critically and/or dig up hidden assumptions.
- Mary Had a Little Lamb Heuristic – emphasize each word in that phrase and see where it takes you.
- Change “the” to “a” Heuristic:
- “the killer bug” vs. “a killer bug”
- “the deadline” vs. “a deadline”
- “Unless” Heuristic: I’m done testing unless…you have other ideas
- “Except” Heuristic: Every test must have expected results except those we have no idea what to expect from.
- “So Far” Heuristic: I’m not aware of any problems…so far.
- “Yet” Heuristic: Repeatable tests are fundamentally more valuable, yet they never seem to find bugs.
- “Compared to what?” Heuristic: Repeatable tests are fundamentally more valuable…compared to what?
- A tester’s job is to preserve uncertainty when everyone around us is certain.
- “Safety Language” is a precise way of speaking which differentiates between observation and inference. Safety Language is a strong trigger for critical thinking.
- “You may be right” is a great way to end an argument.
- “It seems to me” is a great way to begin an observation.
- Instead of “you should do this” try “you may want to do this”.
- Instead of “it works” try “it meets the requirements to some degree”
- All the verbal heuristics above can help us speak precisely.
We Test To Find Out If Software *Can* Work
Posted by Eric Jacobson at Monday, July 09, 2012
Yes, Michael Bolton is one of my biggest mentors. And you’ve read a lot of fanboy posts on this blog. But before I start spewing stuff from my RST notes, I want to post a disagreement I had with Michael Bolton (and RST). After a 15 minute discussion, he weakened my position. But I still disagree with this statement:
We don’t test to find out if something works. We test to find out if it doesn’t work.
Here is a reason I disagree: Knowing at least one way software can work may be more valuable than knowing a thousand ways it can NOT work.
Example: Your product needs to help users cross a river. Which is more valuable to your users?
- “hey users, if you step on these exact rocks, you have a good chance of successfully crossing the river”
- “hey users, here are a bunch of ways you can NOT cross the river: jump across, swim past the alligators, use the old rickety bridge, swing across on a vine, drain the river, dig a tunnel under it, etc.”
Users only need it to work one way. And if it solves a big enough problem, IMO, those users will walk across the rocks.
Sure, finding the problems is important too. Really important! But if someone puts a gun to my head and says I only get one test, it’s going to be a happy path test.
Bolton referred us to the following discussion between James Bach and Michael Kelly (http://michaeldkelly.com/media/ then click on “Is there a problem here?”). I thought it would change my mind, as most James Bach lessons do. It hasn’t…yet.
I might be wrong.
I figured it was time for a review of some modern testing terms. Feel free to challenge me if you don’t like my definitions, which are very conversational. I selected terms I find valuable and stayed away from terms I’m bored with (e.g., “Stress Testing”, “Smoke Testing”).
Afterwards, you can tell me what I’m missing. Maybe I’ll update the list. Here we go…
Tester – Never refer to yourself as QA. That’s old school. That’s a sign of an unskilled tester. By now, we know writing software is different than manufacturing cars. We know we don’t have the power to “assure” quality. If your title still has “QA” in it, convince your HR department to change it. Read this for more.
Sapient Tester – A brain-engaged tester. It is generally used to describe a skilled tester who focuses on human “testing” but uses machines for “checking”. See James Bach’s post.
Manual Tester – A brain-dead tester. Manual testers focus on “checking”.
Test (noun) – Something that can reveal new information. Something that takes place in one’s brain. Tests focus on exploration and learning. See Michael Bolton’s post.
Check – An observation, linked to a decision rule, resulting in a bit (e.g., Pass/Fail, True/False, Yes/No). Checks focus on confirmation. A check may be performed by a machine or a human. Repetition of the same check is best left to a machine, lest the tester become a “Manual Tester”, which is not cool. (A quick code sketch of this definition follows the glossary.) See Michael Bolton’s posts, start here.
Developer – It takes a tester, business analyst, and programmer to develop software; even if they’re just different hats on the same person. That means if you’re a tester, you’re also a developer.
Programmer – Person on the development team responsible for writing the product code. They write code that ships.
Prog – Short version of “Programmer”. See my post.
Test Automation Engineer – This is a Tester who specializes in writing automated checks. This is the best I have so far. But here are the problems I have with it. Test Automation Engineers are also programmers who write code. That means the term “Programmer” is ambiguous. A Test Automation Engineer has the word “Test” in their title when, arguably, a test can’t be automated.
Heuristic – A fallible method for solving a problem or making a decision. Like a rule of thumb. It’s fallible though, so use it with care. Why is this term in a tester dictionary? Skilled testers use heuristics to make quick decisions during testing. For example: a tester may use a stopping heuristic to know when to stop a test or which test to execute next. Testers have begun capturing the way they solve problems and creating catchy labels for new heuristics. Said labels allow testers to share ideas with other testers. Example: the “Just In Time Heuristic” reminds us to add test detail as late as possible, because things will change. Example: the “Jenga Heuristic” reminds us that if we remove too many dependencies from a test, it will easily fall down…instead, try removing one dependency at a time to determine the breaking point.
Test Report – Something a team member or manager may ask a tester for. The team member is asking for a summary of a tester’s findings thus far. Skilled testers will have a mnemonic like MCOASTER or MORE BATS to enable a quick and thorough response.
Context Driven Testing – an approach to software testing that values context. Example: when joining a new project, Context Driven testers will ask the team what level of documentation is required, as opposed to just writing a test plan because that is what they have always done. IMO, Context Driven testers are the innovators when it comes to software testing. They are the folks challenging us to think differently and adjust our approaches as the IT industry changes. See Context Driven Testing.
Bug – Something that bugs someone who matters.
Issue – It may result in a bug. We don’t have enough information to determine that yet.
Escape – A bug found in production. A bug that has “escaped” the test environment. Counting “escapes” may be more valuable than counting “bugs”.
Follow-on Bug – A bug resulting from a different bug. “we don’t need to log a bug report for BugA because it will go away when BugB gets fixed”. I first heard it used by Michael Hunter (I think).
Safety Language – Skilled testers use it to tell an honest accurate story of their testing and preserve uncertainty. Example: “This appears to meet the requirements to some degree”, “I may be wrong”. See my post.
Test Idea – Less than 140 characters. Exact steps are not necessary. The essence of a test should be captured. Each test idea should be unique within its set. The purpose is to plan a test session without spending too much time on details that may change. Test Ideas replace test cases on my team.
Test Case Fragment – see “Test Idea”. I think they are the same thing.
AUT – Application Under Test. The software testers are paid to test. See my post and read the comments to see why I like AUT better than competing terms.
Showstopper – An annoying label, usually used to define the priority of bugs. It is typically overused and results in making everything equally important. See my post.
Velocity, Magnitude, Story Points – Misunderstood measurements of work on agile development teams. Misunderstood because Agile consultants do such a poor job of explaining them. So just use these terms however you want and you will be no worse off than most Agile teams.
Session-Based-Test-Management (SBTM) – A structured approach to Exploratory Testing that helps testers be more accountable. It involves dividing up test work into time-based charters (i.e., missions), documenting your test session live, and reviewing your findings with a team member. The Bach brothers came up with this, I think. Best free SBTM tool, IMO, is Rapid Reporter.
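Returning to the “Check” entry above, that definition translates to code almost verbatim. A toy sketch (the observed status code is a made-up value):

```python
def check(observation, decision_rule):
    """An observation, linked to a decision rule, resulting in a bit."""
    return bool(decision_rule(observation))

# Example: observe a status code, apply the rule, get back a bit.
status_code = 200                                # the observation (made-up value)
result = check(status_code, lambda s: s == 200)  # the decision rule -> True
```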
Let’s Make Up Our Minds! PUT, SUT, or AUT?
Posted by Eric Jacobson at Wednesday, June 06, 2012
Come on testers, let’s make up our minds and all agree on one term to refer to the software we are testing. The variety in use is ridiculous.
I’ve heard the following used by industry experts:
- PUT (Product Under Test)
- SUT (System Under Test)
- AUT (Application Under Test)
- Product, Software, Application, etc.
Today I declare “SUT” the best term for this purpose!
Here’s my reasoning: “PUT” could be mistaken for a word, not an acronym. “AUT” can’t easily be pronounced aloud. “SUT” could be translated to “Software Under Test” or “System Under Test”, but either honors the intent. The software we are paid to test is a “Product”…but so are Quick Test Pro, Visual Studio, and SQL Server.
“What’s the big deal with this term?” you ask. Without said term, we speak ambiguously to our team members because we operate and find bugs in all classes of software:
- the software we are paid to test
- the software we write to test the software we are paid to test (automation)
- the software we write our automation with (e.g., Selenium, Ruby)
- the software we launch the software we are paid to test from (e.g., Windows 7, iOS)
If we agree to be specific, let’s also agree to use the same term. Please join me and start using “SUT”.
When bugs escape to production, does your team adjust?
We started using the following model on one of my projects. It appears to work fairly well. Every 60 days we meet and review the list of “escapes” (i.e., bugs found in production). For each escape, we ask the following questions:
- Could we do something to catch bugs of this nature?
- Is it worth the extra effort?
- If so, who will be responsible for said effort?
The answer to #1 is typically “yes”. Creative people are good at imagining ultimate testing. It’s especially easy when you already know the bug. There are some exceptions though. Some escapes can only be caught in production (e.g., a portion of our project is developed in production and has no test environment).
The answer to #2 is split between “yes” and “no”. We may say “yes” if the bug has escaped more than once, significantly impacts users, or when the extra effort is manageable. We may say “no” when a mechanism is in place to alert our team of the prod error; we can patch some of these escapes before they affect users, with less effort than required to catch them in non-prod environments.
The answer to #3 falls to Testers, Programmers, or BAs, and sometimes to several or all of them.
So…when bugs escape to production, does my team adjust? Sometimes.
Test Automation Scrum Meeting Ambiguity
Posted by Eric Jacobson at Thursday, April 12, 2012
For those of you writing automated checks and giving scrum reports, status reports, test reports, or some other form of communication to your team, please watch your language…and I'm not talking about swearing.
You may not want to say, “I found a bunch of issues”, because sometimes when you say that, what you really mean is, “I found a bunch of issues in my automated check code” or “I found a bunch of issues in our product code”. Please be specific. There is a big difference and we may be assuming the wrong thing.
If you often do checking by writing automated checks, you may not want to say, “I’m working on FeatureA”, because what you really mean is “I’m writing the automated checks for FeatureA and I haven't executed them or learned anything about how FeatureA works yet” or “I’m testing FeatureA with the help of automated checks and so far I have discovered the following…”
The goal of writing automated checks is to interrogate the system under test (SUT), right? The goal is not just to have a bunch of automated checks. See the difference?
Although your team may be interested in your progress creating the automated checks, they are probably more interested in what the automated checks have helped you discover about the SUT.
It’s the testing, stupid. That’s why we hired you instead of another programmer.
Yesterday, a tester on my team gave an excellent presentation for our company’s monthly tester community talk. She talked about what she had learned about effective tester conversation skills in her 15 years as a tester. Here are some of my favorite take-aways:
- When sending an email or starting a verbal conversation to raise an issue, explain what concrete things you have done. Prove that you have actually tried them. People appreciate knowing the effort you put into the test, and they sometimes spot problems.
- Replace pronouns with proper names. Even if the conversation thread’s focus is the Save button, don’t say, “when I click on it”, say “when I click on the Save button.”
- Before logging a bug, give your team the benefit of the doubt. Explain what you observe and see what they say. She said 50% of the time, things she initially suspects are bugs turn out not to be bugs. For example: the BA may have discussed it with the developer and not communicated it back to the team yet.
- Asking questions rocks. You can do it one-on-one or in team meetings. One advantage of doing it in meetings is to spark other people’s brains.
- It’s okay to say “I don’t understand” in meetings. But if, after asking three times, you appear to be the only one in the meeting not understanding, stop asking! Save it for a one-on-one discussion so you don’t develop a reputation of wasting people’s time.
- Don’t speak in generalities. Be precise. Example:
- Don’t say, “nothing works”.
- Instead, pick one or two important things that don’t work, “the invoice totals appear incorrect and the app does not seem to close without using the Task Manager”.
- Know your team. If certain programmers have a rock solid reputation and don’t like being challenged, take some extra time to make sure you are right. Don’t waste their time. It hurts your credibility.
She had some beautiful exercises she took us through to reinforce the above points and others. My favorite was taking an email from a tester, and going through it piece-by-piece to improve it.
CAST2011 was full of tester heavyweights. Each time I sat down in the main gathering area, I picked a table with people I didn’t know. One of those times I happened to sit down next to thetesteye.com blogger Henrik Emilsson. After enjoying his conversation, I attended his Crafting Our Own Models of Software Quality track session.
CRUSSPIC STMPL (pronounced Krusspic Stemple)…I had heard James Bach mention his quality criteria model mnemonic years ago. CRUSSPIC represents operational quality criteria (i.e., Capability, Reliability, Usability, Security, Scalability, Performance, Installability, Compatibility). STMPL represents development quality criteria (i.e., Supportability, Testability, Maintainability, Portability, Localizability).
Despite how appealing it is to taste the phrase CRUSSPIC STMPL as it exercises the mouth, I had always considered it too abstract to benefit my testing.
Henrik, on the other hand, did not agree. He began his presentation quoting statistician George Edward Pelham Box, who said “…all models are wrong, but some are useful”. Henrik believes we should all create models that are better for our context.
With that, Henrik and his tester colleagues took Bach’s CRUSSPIC STMPL, and over the course of about a year, modified it to their own context. Their current model, CRUCSPIC STMP, is posted here. They spent countless hours reworking what each criterion means to them.
They also swapped out some of the criteria for their own. Of note was swapping out one of the “S”s for a “C”: Charisma. When you think about some of your favorite software products, charisma probably plays an important role. Is it good-looking? Do you get hooked and have fun? Does the product have a compelling inception story (e.g., Facebook)? And to take CRUCSPIC STMP further, Henrik has worked in nested mnemonics. The Charisma quality item descriptors are SPACE HEADS (i.e., Satisfaction, Professionalism, Attractiveness, Curiosity, Entrancement, Hype, Expectancy, Attitude, Directness, Story).
Impressive. But how practical is it?
After Henrik’s presentation, I have to admit, I’m convinced it has enough value for its effort:
- Talking to customers - If quality is value to some person, a quality model can be used to help that person (customers/users) explain which quality criteria are most important to them. This, in turn, will guide the tester.
- Test idea triggers - Per Henrik, a great model inspires you to think for yourself.
- Evaluating test results – If Concurrency is a target quality criterion, did my test tell me anything about performing parallel tasks?
- Talking about testing – Reputation and integrity are important traits for skilled testers. When James Bach or Henrik Emilsson talk about testing, their intimate knowledge of their quality models gives them an air of sophistication that is hard to beat.
Yes, I’m inspired to build a quality criteria model. Thank you, Henrik!
From this post forward, I will attempt to use the term “Prog” to refer to programmers.
I read a lot of Michael Bolton and I agree, testers are developers too. So are the BAs. Testers, Programmers, and BAs all work together to develop the product. We all work on a software development team.
Before I understood the above, I used the word “Dev” as shorthand for “developer” (meaning programmer). Now everybody on my development team says “Dev” (to reference programmers). It has been a struggle to change my team culture and get everyone to call them “programmers”. I’m completely failing and almost ready to switch back to “Dev”, myself. I don’t much like the word “programmer”…too many syllables and letters.
Thus, I give it one last attempt. “Prog” is perfect! Please help me popularize this term. It’s clearly short for “programmer”, easy to spell, and fun to say. It also reminds me of “frog”, which is fitting because some progs are like frogs. They sit all day waiting for us to give them bugs.
We have a recurring conversation on both my project teams. Some Testers, Programmers, and BAs believe certain work items are “testable” while others are not. For example, some testers believe a service is not “testable” until its UI component is complete. I’m sure most readers of this blog would disagree.
A more extreme example of a work item, believed by some to not be “testable”, is a work item for Programmer_A to review Programmer_B’s code. However, there are several ways to test that, right?
- Ask Programmer_A if they reviewed Programmer_B’s code. Did they find problems? Did they make suggestions? Did Programmer_B follow coding standards?
- Attend the review session.
- Install tracking software on Programmer_A’s PC that programmatically determines if said code was opened and navigated appropriately for a human to review.
- Ask Programmer_B what feedback they received from Programmer_A.
IMO, everything is testable to some extent. But that doesn’t mean everything should be tested. These are two completely different things. I don’t test everything I can test. I test everything I should test.
I firmly believe a skilled tester should have the freedom to decide which things they will spend time testing and when. In some cases it may make more sense to wait and test the service indirectly via the UI. In some cases it may make sense to verify that a programmer code review has occurred. But said decision should be made by the tester based on their available time and queue of other things to test.
We don’t need no stinkin’ “testable” flag. Everything is testable. Trust the tester.
Testers, Let’s Get Our Bug Language Correct
Posted by Eric Jacobson at Thursday, June 16, 2011
Hey Testers, let’s start paying more attention to our bug language. If we start speaking properly, maybe the rest of the team will join in.
Bug vs. Bug Report:
We can start by noting the distinction between a bug and a bug report. When someone on the team says, “go write a bug for this”, what they really mean is “go write a bug report for this”. Right? They are NOT requesting that someone open the source code and actually write a logic error.
Bug vs. Bug Fix:
“Did you release the bug?” They are either asking “did you release the actual bug to some environment?” or “did you release the bug fix?”
Missing Context:
“Did you finish the bug?” I hear this frequently. It could mean “did you finish fixing the bug?” or it could mean “did you finish logging the bug report?” or it could mean “did you finish testing the bug fix?”
Bug State Ambiguity:
“I tested the bug”. Normally this means “I tested the bug fix.” However, sometimes it means “I reproduced the bug.”…as in “I tested to see if the bug still occurs”.
It only takes an instant to tack the word “fix” or “report” onto the word “bug”. Give it a try.
