The Power To Declare Something Is NOT A Bug
Posted by Eric Jacobson at Thursday, April 24, 2014
Many think testers have the power to declare something as a bug. This normally goes without saying. How about the inverse?
Should testers be given the power to declare something is NOT a bug?
Well…no, IMO. That sounds dangerous; what if the tester is wrong? I think many will agree with me. But Michael Bolton asked the above question in response to a commenter on this post, and it really gave me pause.
For me, it means maybe testers should not be given the power to run around declaring things bugs either. They should instead raise the possibility that something may be a problem. Then, I suppose, they could also raise the possibility that something may not be a problem.
The second thing Scott Barber said that stayed with me (here is the first) is this:
The more removed people are from IT workers, the greater their desire for metrics. To paraphrase Scott, “the managers on the floor, in the cube farms, agile spaces, or otherwise with their teams most of the time, don’t use a lot of metrics because they just feel what’s going on.”
It seems to me that those higher-up people, dealing with multiple projects, don’t have (as much) time to visit the cube farms, and they know summarized information is the quickest way to learn something. The problem is, too many of them think:
SUMMARIZED INFORMATION = ROLLED UP NUMBERS
It hadn’t occurred to me until Scott said it. That alone does not make metrics bad. But it helps me understand why I (as a test manager) don’t bother with them, yet spend a lot of time fending off requests for them from out-of-touch people (e.g., directors, other managers). Note: by “out-of-touch” I mean out of touch with the details of the workers’ work, not out of touch in general.
Scott reminds us the right way to find the right metric for your team is to start with the question:
What is it we’re trying to learn?
I love that. Maybe a metric is not the best way of learning. Maybe it is. If it is, perhaps coupling it with a story will help explain the true picture.
Thanks Scott!
Are Testers Too Busy to Leave the UI?
Posted by Eric Jacobson at Tuesday, August 27, 2013
Per Elisabeth Hendrickson, I’m one of the 80% of test managers looking for testers with programming skills. And as I sift through tester resumes, attempting to fill two technical positions, I see a problem: testers with programming skills are few and far between!
About 90% of the resumes I’ve seen lately are for testers specialized in manual (sapient) testing of web-based products. And since most of these resumes are sprinkled with statements like “knowledge of QTP”, I assume most of these testers are doing all their testing via the UI.
And then it hit me…
Maybe the reason so many testers are specialized in manual testing via the UI is because there are so many UI bugs!
This is no scientific analysis by any means. Just a quick thought about the natural order of things. But here’s my attempt to answer the question of why there aren’t more testers with programming skills out there.
It may be because they’re too busy finding bugs in the UI layer of their products.
It’s a cliché, I know. But it really gave me pause when I heard Jeff “Cheezy” Morgan say it during his excellent STAReast track session, “Android Mobile Testing: Right Before Your Eyes”. He said something like, “instead of looking for bugs, why not focus on preventing them?”.
Cheezy demonstrated Acceptance Test Driven Development (ATDD) by giving a live demo, writing Ruby tests via Cucumber, for product code that didn’t exist. The tests failed until David Shah, Cheezy’s programmer, wrote the product code to make them pass.
(Actually, the tests never passed, which they later blamed on incompatible Ruby versions…ouch. But I’ll give these two guys the benefit of the doubt. )
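To make the workflow concrete, here’s a minimal sketch of the kind of thing Cheezy was demonstrating. The feature, step definitions, and Order class are hypothetical (this is not his demo code), and the step syntax assumes a modern cucumber-ruby:

```gherkin
# features/new_order.feature -- written before any product code exists
Feature: New order
  Scenario: Add an item to a new order
    Given an empty order
    When I add 3 of item "A100" to the order
    Then the order should contain 1 line item
```

```ruby
# features/step_definitions/order_steps.rb
# These steps fail ("red") until the programmer writes the Order class.
Given('an empty order') do
  @order = Order.new            # Order does not exist yet; that is the point
end

When('I add {int} of item {string} to the order') do |qty, sku|
  @order.add_item(sku, qty)
end

Then('the order should contain {int} line item(s)') do |count|
  raise "expected #{count} line items" unless @order.line_items.size == count
end
```

Run `cucumber` before the Order class exists and every scenario fails; the programmer’s job is to make them pass.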
Now back to my blog post title. I find this mindshift appealing for several reasons, some of which Cheezy pointed out and some of which he did not:
- Per Cheezy’s rough estimate, 8 out of 10 bugs involve the UI. There is tremendous benefit to the programmer knowing about these UI bugs while the programmer is initially writing the UI. Thus, why not have our testers begin performing exploratory testing before the Story is code complete?
- Programmers are often incentivized to get something code complete so the testers can have it (and so the programmers can work on the next thing). What if we could convince programmers it’s not code complete until it’s tested?
- Maybe the best time to review a Story is when the team is actually about to start working on it; not at the beginning of a Sprint. And what do we mean when we say the team is actually about to start working on it?
- First we (Tester, Programmer, Business Analyst) write a bunch of acceptance tests.
- Then, we start writing code as we start executing those tests.
- Yes, this is ATDD, but I don’t think automation is as important as the consultants say. More on that in a future post.
- Logging bugs is soooooo time consuming and can lead to dysfunction. The bug reports have to be managed and routed appropriately. People can’t help but count them and use them as measurements for something…success or failure. If we are doing bug prevention, we never need to create bug reports.
Okay, I’m starting to bore myself, so I’ll stop. Next time I want to explore Manual ATDD.
Managing Successful Test Automation – Part 2
Posted by Eric Jacobson at Monday, April 29, 2013
- Measuring your Automation might be easy. Using those measurements is not. Examples:
- # of times a test ran
- how long tests take to run
- how much human effort was involved to execute and analyze results
- how much human effort was involved to automate the test
- number of automated tests
- EMTE (Equivalent Manual Test Effort) – What effort it would have taken humans to manually execute the same test being executed by a machine. Example: If it would take a human 2 hours, the EMTE is 2 hours.
- How can this measure be useful? It is an easy way to show management the benefits of automation (in a way managers can easily understand).
- How can this measure be abused? If we inflate EMTE by re-running automated tests just for the sake of increasing EMTE, we are misleading people. Sure, we can run our automated tests every day, but unless the build is changing every day, we are not adding much value.
- How else can this measure be abused? If you hide the fact that humans are capable of noticing and capturing much more than machines.
- How else can this measure be abused? If your automated tests cannot be executed by humans and your human tests cannot be executed by a machine.
- ROI (Return On Investment) – Dorothy asked the students what ROI they had achieved with the automation they created. All 6 students who answered got it wrong; they explained various benefits of their automation, but none were expressed as ROI. ROI should be a number, hopefully a positive one.
- ROI = (benefit - cost) / cost
- The trick is to convert tester time and effort to money. For example, if automation cost $20,000 to build and maintain, and saved $50,000 worth of tester time, ROI = (50,000 - 20,000) / 20,000 = 1.5.
- ROI does not measure things like “faster execution”, “quicker time to market”, “test coverage”
- How can this measure be useful? Managers may think there is no benefit to automation until you tell them there is. ROI may be the only measure they want to hear.
- How is this measure not useful? ROI may not be important. It may not measure your success. “Automation is an enabler for success, not a cost reduction tool” – Yoram Mizrachi. Your company probably hires lawyers without calculating their ROI.
- She did the usual tour of poor-to-better automation approaches (e.g., capture/playback up to an advanced keyword-driven framework). I’m bored by this so I have a gap in my notes.
- Testware architecture – consider separating your automation code from your tool, so you are not tied to the tool.
- Use pre and post processing to automate test setup, not just the tests. Everything should be automated except selecting which tests to run and analyzing the results.
- If you expect a test to fail, use the execution status “Expected Fail”, not “Fail”.
- Comparisons (i.e., asserts, verifications) can be “specific” or “sensitive”.
- Specific Comparison – an automated test only checks one thing.
- Sensitive Comparison – an automated test checks several things.
- I wrote “awesome” in my notes next to this: If your sensitive comparisons overlap, 4 tests might fail instead of 3 passing and 1 failing. IMO, this is one of the most interesting decisions an automator must make. I think it really separates the amateurs from the experts. Nicely explained, Dorothy!
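To see why, here’s a minimal sketch in Ruby (Minitest, hypothetical names; not from Dorothy’s course). Suppose a bug breaks the order total. The specific suite reports 3 passes and 1 failure, pointing straight at the problem; the sensitive suite, where every test re-verifies the whole order, reports 4 failures:

```ruby
require "minitest/autorun"

# Stand-in for the result of driving the SUT; hypothetical.
Order = Struct.new(:id, :status, :line_count, :total)

def place_test_order
  Order.new(42, "submitted", 3, 59.97) # imagine this drives the real product
end

# Specific comparisons: each test checks exactly one thing,
# so a bug in `total` yields 3 passes and 1 failure.
class SpecificComparisons < Minitest::Test
  def test_id
    assert_equal 42, place_test_order.id
  end

  def test_status
    assert_equal "submitted", place_test_order.status
  end

  def test_line_count
    assert_equal 3, place_test_order.line_count
  end

  def test_total
    assert_in_delta 59.97, place_test_order.total
  end
end

# Sensitive comparisons: every test re-verifies the whole order,
# so the same single bug fails all four tests.
class SensitiveComparisons < Minitest::Test
  def verify_whole_order(order)
    assert_equal 42, order.id
    assert_equal "submitted", order.status
    assert_equal 3, order.line_count
    assert_in_delta 59.97, order.total
  end

  def test_place_order
    verify_whole_order(place_test_order)
  end

  def test_place_order_again
    verify_whole_order(place_test_order)
  end

  def test_reload_order
    verify_whole_order(place_test_order)
  end

  def test_audit_order
    verify_whole_order(place_test_order)
  end
end
```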
Management For High Performance Agile Teams
Posted by Eric Jacobson at Thursday, February 28, 2013
Warning: this post has almost nothing to do with testing and barely anything to do with software development. Managers should read it, however.
Last night, at the Atlanta Scrum Users Group, I saw Peter Saddington’s talk, “The New Role of Management for High-Performance Teams”. Peter has three master’s degrees and claims to be Atlanta’s only Certified Scrum Trainer.
Here are some highlights from my notes:
- Managers should see themselves as “managers of inspiration”. Don’t manage issues. Instead, manage inspiration. Help people love what they do first, then you don’t need to manage them.
- Everyone can improve their job performance by taking time to reflect. Few bother to, because they think they are too busy.
- Stop creating processes. Instead, change the rules as you go. The problem with process is that some people will thrive under it and others will die. There are no “best practices”. (Context-driven testers have been saying this for years.)
- The most important question you can ask your directs is “Are you having fun?”. Happier employees are more productive.
- Play and fun at work have been declining for 30 years (in the US).
- Burn-out rate has been increasing for 30 years (in the US).
- Myth: Agile teams should be self-organizing. Fact: marriages are about the only true self-organizing teams that exist, and only about 50% of those are successful (in the US). Instead of hoping your teams self-organize their way to success, get to know your people and put them on teams that make sense for them. Try re-interviewing everyone.
- If you learn 3 things about a co-worker’s personal life, trust increases by 60%. “How did Becky do at her soccer game yesterday?”
- Motivate your teams with these three things:
- Autonomy – People should not have to give it up when they go to work.
- Mastery – Ability to grow one’s craft. Help people make this happen. Put people in places where they can improve their work.
- Purpose – People do their best work when they know why they are doing it.
- Any manager who asks their directs to work on multiple projects at once should be fired. Study after study shows that multi-tasking and context switching burn people out and cause them to work poorly.
Peter did a fun group exercise to drive home that last point. He had some of us stand in a circle and take turns saying the alphabet or counting by multiples of 3 or 5. He began forcing us to switch patterns on the fly, as we worked. Afterwards, we all hated him and his stupid exercise. …He was representing a manager.
Bite-Sized Test Wisdom From RST Class – Part 2
Posted by Eric Jacobson at Monday, July 23, 2012
See Part 1 for intro.
- There are two reasons why your bugs are not getting fixed:
- There is more important stuff going on.
- You are not explaining them well.
- Testers need to be good at articulating why we think something is a bug. One approach is PEW: state the Problem, provide an Example, explain Why it matters. For instance: Problem: unsaved work can be lost; Example: clicking Back during checkout silently empties the cart; Why: users lose orders and call support.
- “How many test cases?” is usually a silly question.
- There are two reasons why all tests do not get executed:
- The tester didn’t think of it.
- The tester thought of it but decided not to execute it. Hopefully it’s the latter. It may be worthwhile to brainstorm on tests.
- One way to communicate coverage to stakeholders is to use a mind map.
- If you get bored testing, you may be doing something wrong (e.g., you are doing repetitive tests, you are not finding anything interesting).
- Testing is about looking for a “problem”. A “problem” is an undesirable situation that is solvable.
- (I need to stop being so militant about this) Not all bugs need repro steps. Repro steps may be expensive.
- Consider referencing your oracle (the way of recognizing a problem you used to find the bug) in your bug report.
- When asked to perform significantly time consuming or complex testing, consider the Orange Juice Test: A client asked three different hotels if the hotels could supply said client with two thousand glasses of fresh squeezed orange juice tomorrow morning. Hotel #1 said “no”. Hotel #2 said “yes”. Hotel #3 said “yes, but here’s what it’s going to cost you”. The client didn’t really want orange juice. They picked Hotel #3.
- No test can tell us about the future.
- Nobody really knows what 100% test coverage means. Therefore, it may not make sense to describe test coverage as a percentage. Instead, try explaining it as the extent to which we have travelled over some agreed-upon map. And don’t talk about coverage without saying what kind of coverage you mean (e.g., functions, platforms, data, time, etc.).
- Asking how long a testing phase should be is like asking how long I have to look out the windshield as I drive to Seattle.
- Skilled testers are like crime scene investigators. Testers are not in control (the police are). Testers give the police the information they need. If there is another crime committed, you may not have time to investigate as much with the current crime scene.
- No test can prove a theory is correct. A test can only disprove it.
- (I still have a hard time with this one) Exploratory Testing (ET) is not an activity that one can do. It is not a technique. It is an approach. A test is exploratory if the ideas are coming from the tester in the here and now. ET can be automated. Scripts come from exploration.
- Exploratory behavior = Value seeking.
- Scripted behavior = Task seeking.
- Tests should not be concerned with the repeatability of computers. It’s important to induce variation.
- ET is a structured approach. One of the most important structures is the testing story. A skilled tester should be able to tell three stories:
- A story about the product (e.g., is the product any good?).
- A story about how you tested it (e.g., how do I know? Because I tested it by doing this…).
- A story about the value of the testing (e.g., here is why you should be pleased with my work…).
I finally pulled it off! My company brought Michael Bolton to teach a private 3-day Rapid Software Testing course and stick around for a 4th day of workshops and consulting. On the fourth day I had Michael meet with QA Managers to give his talk/discussion on “How to Get The Best Value From Testing”. Then he gave a talk for programmers, BAs, testers, and managers on “The Metrics Minefield”. Finally, he did a 2.5 hour workshop on “Critical Thinking for Testers”.
My brain and pen were going the whole four days; every other sentence he uttered held some bit of testing wisdom. I’ll post chunks of it in the near future. I attended the class 6 years earlier in Toronto and I was concerned it would have the same material but fortunately most of it had changed.
The conversations before and after class were a real treat too. After the first day, Claire Moss, Alex Kell, Michael Bolton, and I met at Fado for some Guinness, tester craic, and, much to my surprise, to listen to Michael play mandolin in an Irish traditional music session. He turned out to be a very good musician and (of course) gave us handles for telling a slip jig from a reel.
Several days later, I’m still haunted by Michael-Bolton-speak. I keep starting all my sentences with “it seems to me”. But best of all perhaps, is the lingering inspiration to read, challenge, and contribute thoughtful ideas to our testing craft. He got me charged up enough to love testing for at least another 6 years. Thanks, Michael!
Let’s Make Up Our Minds! PUT, SUT, or AUT?
Posted by Eric Jacobson at Wednesday, June 06, 2012
Come on testers, let’s make up our minds and all agree on one term to refer to the software we are testing. The variety in use is ridiculous.
I’ve heard the following used by industry experts:
- PUT (Product Under Test)
- SUT (System Under Test)
- AUT (Application Under Test)
- Product, Software, Application, etc.
Today I declare “SUT” the best term for this purpose!
Here’s my reasoning: “PUT” could be mistaken for a word, not an acronym. “AUT” can’t easily be pronounced aloud. “SUT” could be translated as “Software Under Test” or “System Under Test”, but each honors the intent. The software we are paid to test is a “Product”…but so are Quick Test Pro, Visual Studio, and SQL Server.
“What’s the big deal with this term?” you ask. Without said term, we speak ambiguously to our team members because we operate and find bugs in all classes of software:
- the software we are paid to test
- the software we write to test the software we are paid to test (automation)
- the software we write our automation with (e.g., Selenium, Ruby)
- the software we launch the software we are paid to test from (e.g., Windows 7, iOS)
If we agree to be specific, let’s also agree to use the same term. Please join me and start using “SUT”.
When bugs escape to production, does your team adjust?
We started using the following model on one of my projects. It appears to work fairly well. Every 60 days we meet and review the list of “escapes” (i.e., bugs found in production). For each escape, we ask the following questions:
- Could we do something to catch bugs of this nature?
- Is it worth the extra effort?
- If so, who will be responsible for said effort?
The answer to #1 is typically “yes”. Creative people are good at imagining ultimate testing. It’s especially easy when you already know the bug. There are some exceptions though. Some escapes can only be caught in production (e.g., a portion of our project is developed in production and has no test environment).
The answer to #2 is split between “yes” and “no”. We may say “yes” if the bug has escaped more than once, significantly impacts users, or when the extra effort is manageable. We may say “no” when a mechanism is in place to alert our team of the prod error; we can patch some of these escapes before they affect users, with less effort than required to catch them in non-prod environments.
- The answer to #3 falls to Testers, Programmers, BAs, and sometimes more than one of them.
So…when bugs escape to production, does my team adjust? Sometimes.
Peace Of Mind Without Detailed Test Cases
Posted by Eric Jacobson at Monday, May 21, 2012
In reference to my When Do We Need Detailed Test Cases? post, Roshni Prince asked:
“when we run multiple tests in our head… [without using detailed test cases] …how can we be really sure that we tested everything on the product by the end of the test cycle?”
Nice question, Roshni. I have two answers. The first takes your question literally.
- …We can’t. We’ll never test everything by the end of the test cycle. Heck, we’ll never test everything in an open-ended test cycle. But who cares? That’s not our goal.
- Now I’ll answer what I think you are really asking, which is “without detailed test cases, how can we be sure of our test coverage?”. We can’t be sure, but IMO, we can get close enough using one or more of the following approaches:
- Write “test ideas” (AKA test case fragments). These should be less than the size of a Tweet. These are faster than detailed test cases to write/read/execute and more flexible.
- Use Code Coverage software to visually analyze test coverage.
- Build a test matrix using Excel or another table.
- Use a mind map to write test ideas. Attach it to your specs for an artifact.
- Use a Session Based Test Management tool like Rapid Reporter to record test notes as you test.
- Use a natural method of documenting test coverage. By “natural” we mean something that will not add extra administrative work. Regulatory compliance expert and tester Griffin Jones has used audio and/or video recordings of test sessions to pass rigorous audits. He burns these to DVD and has rock-solid coverage information without the need for detailed test cases. Another approach is to use keystroke-capture software.
- Finally, my favorite when circumstances allow; just remember! That’s right, just use your brain to remember what you tested. Brains rock! Brains are so underrated by our profession. This approach may help you shine when people are more interested in getting test results quickly and you only need to answer questions about what you tested in the immediate future…like today! IMO, the more you enjoy your work as a tester, the more you practice testing, the more you describe your tests to others, the better you’ll recall test coverage from your brain. And brains record way more than any detailed test cases could ever hope to.
In my Don’t Give Test Cases To N00bs post I tried to make the argument against writing test cases as a means of coaching new testers. At the risk of sounding like a test case hater, I would like to suggest three contexts that may benefit from detailed test cases.
These contexts do not include the case of a mandate (e.g., the stakeholder requires detailed test cases and you have no choice).
- Automated Check Design: Whether a sapient tester is designing an automated check for an automation engineer, or an automation engineer is designing the automated check herself, detailed test cases may be a good idea. Writing detailed test cases will force tough decisions to be made prior to coding the check. Decisions like: How will I know if this check passes? How will I ensure this check’s dependent data exists? What state can I expect the product-under-test to be in before the check’s first action? (A sketch of such a check follows this list.)
- Complex Business Process Flows: If your product-under-test supports multiple ways of accomplishing each step in its business process flows, you may want to spec out each test to keep track of test coverage. Example: Your product’s process to buy a new widget requires 3 steps. Each of the 3 steps has 10 options. Test1 may be: perform Step1 with Option4, perform Step2 with Option1, then perform Step3 with Option10.
- Bug Report Repro Steps: Give those programmers the exact footprints to follow, else they’ll reply, “works on my box”.
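Here’s a hedged sketch of the first context, in Ruby with Minitest. OrderApp and everything it touches are hypothetical stand-ins for a real UI driver (Selenium, QTP, whatever drives your SUT). The point is that the detailed design answers the pass-criterion, data-setup, and starting-state questions before the check is coded:

```ruby
require "minitest/autorun"

# Hypothetical UI driver standing in for the real product.
class OrderApp
  def self.connect(_url); new; end
  def ensure_catalog_item(_sku); end  # dependent data must exist
  def close_any_open_order; end       # force a known starting state
  def drag_to_new_order(_sku); end
  def set_quantity(qty); @quantity = qty; end
  def click(_button); @status = "Submitted"; end
  attr_reader :status, :quantity
end

class SubmitOrderCheck < Minitest::Test
  # Decisions the detailed test case forced before any coding:
  #   Precondition: no order is open; item "A100" exists in the catalog.
  #   Pass rule:    status is "Submitted" and the quantity stuck at 3.
  def setup
    @app = OrderApp.connect("https://test.example.com") # hypothetical URL
    @app.ensure_catalog_item("A100")
    @app.close_any_open_order
  end

  def test_submit_order_with_quantity
    @app.drag_to_new_order("A100")
    @app.set_quantity(3)
    @app.click("Submit Order")
    assert_equal "Submitted", @app.status # explicit pass criterion
    assert_equal 3, @app.quantity
  end
end
```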
Those are the three contexts I write detailed test cases for. What about you?
In response to my What I Love About Kanban As A Tester #1 post, Anonymous stated:
“The whole purpose of documenting test cases…[is]…to be able to run [them] by testers who don’t have required knowledge of the functionality.”
Yeah, that’s what most of my prior test managers told me, too…
“if a new tester has to take over your testing responsibilities, they’ll need test cases”
I wouldn’t be surprised if a secret QA manager handbook went out to all QA managers, stating the above as the paramount purpose of test cases. It was only recently that I came to understand how wrong all those managers were.
Before I go on, let me clarify what I mean by “test cases”. When I say “test cases”, I’m talking about something with steps, like this:
- Drag ItemA from the catalog screen to the new order screen.
- Change the item quantity to “3” on the new order screen.
- Click the “Submit Order” button.
Here’s where I go on:
- When test cases sit around, they get stale. Everything changes…except your test cases. Giving these to n00bs is likely to result in false fails (and maybe even rejected bug reports).
- When test cases are blindly followed, we miss the house burning down right next to the house that just passed our inspection.
- When test cases are followed, we are only doing confirmatory testing. Even negative (AKA “unhappy”) paths are confirmatory testing. If that’s all we can do, we are one step closer to shutting down our careers as testers.
- Testing is waaaay more than following steps. To channel Bolton, a test is something that goes on in your brain. Testing is more than answering the question, “pass or fail?”. Testing is sometimes answering the question, “Is there a problem here?”.
- If our project mandates that testers follow test cases, for Pete’s sake, let the n00bs write their own test cases. It may force them to learn the domain.
- Along with test cases comes administrative work. Perhaps time is better spent testing.
- If the goal is valuable testing from the n00b, wouldn’t that best be achieved by the lead tester coaching the n00b? And if that lead tester didn’t have to write test cases for a hypothetical n00b, wouldn’t that lead tester have more time to coach the hypothetical n00b, should she appear? Here’s a secret: she never will appear. You will have a stack of test cases that nobody cares about; not even your manager.
In my next post I’ll tell you when test cases might be a good idea.
A Test this Blog reader asked,
“Every few years we look at certifications for our testers. I'm not sure that the QA/testing ones carry the same weight as those for PMs or developers. Do you have any advice on this?”
I’ll start an answer by telling you my opinion and maybe some of my readers will respond to finish.
The only software testing certification I’ve tried to get was from IIST. Read my post, Boycott the International Institute for Software Testing, to understand why I gave up.
Ever since, I’ve been learning enough to stay engaged and passionate about software testing without certifications. I’ve been learning at my own pace, following my own needs and interests, by reading software testing blogs, books, thinking, and attending about one testing conference (e.g., CAST, STAR, STPCon) per year. My “uncertified” testing skills have been rewarded at work via promotions, and this year I will be speaking at my third test conference. This pace has been satisfying enough for me…sans certifications.
I tend to follow the testers associated with the Context Driven Testing school/approach. These testers have convinced me certifications are not the best way to become a skilled tester. Certifications tend to reward memorization rather than learning test skills you can use. The courses (I’m not sure if they are considered certifications) Context Driven Testers seem to rally around are the online Black Box Software Testing courses, Foundations, Bug Advocacy, and Test Design. I planned to enroll in the Foundations course this year but I have my first baby coming so I’ve wimped out on several ambitions, including that.
So, as a fellow Test Manager, I do not encourage certifications for my testers. Instead I encourage their growth in other ways:
- This year we are holding a private Rapid Software Testing course for our testers.
- I encourage (and sometimes force) my testers to participate in a testers-teaching-test-skills in-house training thing we do every month. Testers are asked to figure out what they are good at, and share it with other testers for an hour.
- We have a small QA Library. We try to stock it with the latest testing books. I often hand said books to testers when the books are relevant to each tester’s challenges.
- I encourage extra reading, side projects, and all non-project test-related discussions.
- We encourage testers to attend conferences and share what they learned when they return.
- We attend lots of free webinars. Typically, we’re disappointed and we rip on the presenters, but we still leave the webinar with some new tidbit.
So maybe this will give you other ideas. Let’s see if we get some comments that are for or against any specific certifications.
You’re probably a good leader just to be asking and thinking about this in the first place. Thanks for the question.
Avoid Trivial Bugs, Report What Works
Posted by Eric Jacobson at Tuesday, January 24, 2012
I’ve been testing this darn thing all morning and I haven't found a single bug, or even an issue. My manager probably thinks I’m not testing well enough. My other tester colleagues keep finding bugs in their projects. Maybe I’m not a very good tester. My next scrum report is going to be lame. This sucks, man.
Wrong! It probably doesn’t suck. Not finding bugs may be a good thing. Your team may be building stuff that works. And you get to be the lucky dude who delivers the good news.
If there is lots of stuff that works and no bugs, you have even more to report than testers who keep finding bugs. Testers who keep finding bugs are probably executing fewer tests than you so they know less about their products than you. Instead of figuring out what works, they are stuck investigating what doesn’t work. They’ll still need to figure out what works eventually, it’s just going to take them a while to get there. And that sucks.
My manager is probably looking at my low bug count metric, thinking I’m not doing anything. Logging bugs makes me feel like a bad ass. There must be something I can log…hmmm…I know, I’ll log a bug for this user message; it’s not really worded as well as it could be…it has been like that for the last four years.
No! No! No! That’s gaming the system. It’s not going to work. You’re going to get a reputation as a tester who logs trivial bugs. Your manager is only counting bugs because you’re not giving her anything else. She just wants to know what you’re doing. Help your manager. Show her where to find your test reports, session sheets, or test execution results. Invite her to your scrum meetings. Tell her how busy you’ve been knocking out tests and how bad ass your entire project team is.
Think about it.
Reporting what works may be better than reporting trivial bugs.
Eight Things You May Not Need To Test
Posted by Eric Jacobson at Friday, January 20, 2012
This article will be published in a future edition of the Software Test Professionals Insider – community news. I didn’t get a chance to write my blog post this week so I thought I would cheat and publish it on my own blog first.
I will also be interviewed about it on Rich Hand’s live Blog Talk Radio Show on Tuesday, January 31st at 1PM eastern time.
My article is below. If it makes sense to you or bothers you, make sure you tune in to the radio show to ask questions…and leave a comment here, of course.
Don’t Test It
As testers, we ask ourselves lots of questions:
- What is the best test I can execute right now?
- What is my test approach going to be?
- Is that a bug?
- Am I done yet?
But how many of us ask questions like the following?
- Does this Feature need to ever be tested?
- Does it need to be tested by me?
- Who cares if it doesn’t work?
In my opinion, not enough of us ask questions like the three above. Maybe it’s because we’ve been taught to test everything. Some of us even have a process that requires every Feature to be stamped “Tested” by someone on the QA team. We treat testing like a routine factory procedure and sometimes we even take pride in saying...
“I am the tester. Therefore, everything must be tested...by me...even if a non-tester already tested it...even if I already know it will pass...even if a programmer needs to tell me how to test it...I must test it, no exceptions!”
This type of thinking may be giving testers a bad reputation. It treats testing as important because a thoughtless process says so, rather than as a service that provides the most valuable information to someone.
James Bach came up with the following test execution heuristic:
Basic Heuristic: “If it exists, I want to test it”
I disagree with that heuristic, as it is shown above and often published. However, I completely agree with the full version James published when he introduced it in his 7/8/2006 blog post:
“If it exists, I want to test it. (The only exception is if I have something more important to do.)”
The second sentence is huge! Why? Because often we do have something more important to do, and it’s usually another test! Unfortunately, importance is not always obvious. So rather than measuring importance, I like to ask the three questions above and look for things that may not be worth my time to test. Here are eight examples of what I’m talking about:
- Features that don’t go to production - My team has these every iteration. These are things like enhancements to error logging tables or audit reports to track production activity. On Agile teams these fall under the umbrella of Developer User Stories. The bits literally do not go to production and by their nature cannot directly affect users.
- Patches for critical production problems that can’t get worse - One afternoon our customers called tech support indicating they were on the verge of missing a critical deadline because our product had a blocking bug. We had one hour to deliver the fix to production. The programmer had the fix ready quickly and the risk of further breaking production was insignificant because production was currently useless. Want to be a hero? Don’t slow things down. Pass it through to production. Test it later if you need to.
- Cosmetic bug fixes with time-consuming test setup - We fixed a spelling mistake that had shown up in a screen shot of a user error message. The user was unaware of the spelling mistake but we fixed it anyway; quick and easy. Triggering said error message required about 30 minutes of setup. Is it worth it?
- Straightforward configuration changes - Last year our product began encountering abnormally large production jobs it could not process. A programmer attempted to fix the problem with an obvious configuration change. There was no easy way to create a job large enough to cross the threshold in the QA environment. We made the configuration change in production and the users happily did the testing for us.
- Too technical for a non-programmer to test - Testing some functionality requires performing actions while using breakpoints in the code to reproduce race conditions. Sometimes a tester is no match for the tools and skills of a programmer with intimate knowledge of the product code. Discuss the tests but step aside.
- Non-tester on loan - If a non-tester on the team is willing to help test, or better yet, wants to help test a certain Feature, take advantage of it. Share test ideas and ask for test reports. If you’re satisfied, don’t test it.
- No repro steps - There are often errors reported for which nobody can determine the reproduction steps, and occasionally a programmer will take a stab at a fix anyway. We may want to regression test the updated area, but we won’t prevent the apparent fix from deploying just because we don’t know whether it works.
- Inadequate test data or hardware - Let’s face it. Most of us don’t have as many load balanced servers in our QA environment as we do in production. When a valid test requires production resources not available outside of production, we may not be able to test it.
Many of you are probably trying to imagine cases where the items above could result in problems if untested. I can do that too. Remember, these are items that may not be worth our time to test. Weigh them against what else you can do and ask your stakeholders when it’s not obvious.
If you do choose not to test something, it’s important not to mislead. Here is the approach we use on my team. During our Feature Reviews, we (testers) say, “we are not going to test this”. If someone disagrees, we change our mind and test it. If no one disagrees, we “rubber stamp” it, which means we indicate nothing was tested (on the work item or story) and pass it through so it can proceed to production. The expression “rubber stamping” came from the familiar image of an administrative worker rubber stamping stacks of papers without really spending any time on each. The rubber stamp is valuable, however. It tells us something did not slip through the cracks. Instead, we used our brains and determined our energy was best used elsewhere.
So the next time you find yourself embarking on testing that feels much less important than other testing you could be doing, you may want to consider...not testing it. In time, your team will grow to respect your decision and benefit from fewer bottlenecks and increased test coverage where you can actually add value.
After seeing Mark Vasko’s CAST 2011 lightning talk, I was inspired to create a Test Idea Wall with one of my project teams. Much to my surprise, the damn thing actually works.
When I’m taking a break from testing something, I pause as I walk past the Test Idea Wall. My brain jumps around between the pictures and discovers gaps in my test coverage.
Our wall is incredibly simple, but so far it contains the main test idea triggers we forget. For example, the picture of the padlock reminds us to consider locking scenarios, something that is often just an afterthought but always gets us fruitful information:
- What if we run the same tests as a read-only user?
- What if we run the same tests while another user has our lock?
- What if we run the same tests while the system has our lock?
- What if certain users should not have this permission?
Thanks, Mark!
It’s Okay To Control Your Test Environment
Posted by Eric Jacobson at Friday, September 30, 2011
Sometimes production bug scenarios are difficult to recreate in a test environment.
One such bug was discovered on one of my projects:
If ItemA is added to a database table after SSIS Package1 executes but before SSIS Package2 executes, an error occurs. Said packages execute frequently, at random intervals, to the point where a human trying to reproduce the bug cannot determine the exact time to add ItemA. Are you with me?
So what is a tester to do?
The answer is: control your test environment. Disable the packages and execute them manually, one time each, when you want them to run.
- Execute SSIS Package1 once.
- Add ItemA to the database table.
- Execute SSIS Package2 once.
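If you’d rather script that control than click through it, here’s a minimal sketch; the package paths, server name, and table are hypothetical, but dtexec (SSIS’s command-line runner) and sqlcmd (SQL Server’s CLI) are the standard tools:

```ruby
# Reproduce the timing window deterministically (hypothetical paths and SQL).
def run_package(path)
  system("dtexec", "/F", path) or raise "package failed: #{path}"
end

run_package('C:\etl\Package1.dtsx')   # step 1: execute Package1 once

# step 2: add ItemA between the two package runs
system('sqlcmd', '-S', 'TESTDB01', '-Q',
       "INSERT INTO dbo.Items (Name) VALUES ('ItemA')") or raise "insert failed"

run_package('C:\etl\Package2.dtsx')   # step 3: execute Package2 once
```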
A tester on my team argued, “But that’s not realistic.” She’s right. But if we understand the bug as well as we think we do, we should be able to repeatedly experience the bug and its fix using our controlled environment. And if we can’t, then we really don’t understand the bug.
This is what it’s all about. Be creative as a tester, simplify things, and control your environment.
The Answer Is “Yes, I can test it”
Posted by Eric Jacobson at Thursday, September 22, 2011
Which of these scenarios will make you a rock star tester? Which will make your job more interesting? Which provides the most flexible way for your team to handle turbulence?
SCENARIO 1
Programmer: We need to refactor something this iteration. It was an oversight and we didn’t think we would have to.
Tester: Can’t this wait until next iteration? If it ain’t broke, don’t fix it.
BA: The users really can’t wait until next iteration for FeatureA. I would like to add FeatureA to the current iteration.
Tester: Okay, which feature would you like to swap it out for?
Programmer: I won’t finish coding this until the last day of the iteration.
Tester: Then we’ll have to move it to a future iteration, I’m not going to have time to test it.
SCENARIO 2
Programmer: We need to refactor something this iteration. It was an oversight and we didn’t think we would have to.
Tester: Yes, I can test it. I’ll need your help, though.
BA: The users really can’t wait until next iteration for FeatureA. I would like to add FeatureA to the current iteration.
Tester: Yes, I can test it. However, these are the only tests I’ll have time to do.
Programmer: I won’t finish coding this until the last day of the iteration.
Tester: Yes, I can test it…as long as we’re okay releasing it with these risks.
A fun and proud moment for me. Respected tester, Matt Heusser, interviewed me for his This-Week-In-Software-Testing podcast on Software Test Professionals. It was scary because there was no [Backspace] key to erase anything I wished I hadn’t said.
I talked a bit about the transition from tester to test manager, what inspires testers, and some other stuff. It was truly an honor for me.
The four most recent podcasts are available free, although you may have to register for a basic (free) account. However, I highly recommend buying the $100 membership to unlock all 49 (and counting) of these excellent podcasts. I complained at first, but after hearing Matt’s interviews with James Bach, Jerry Weinberg, Cem Kaner, and all the other great tester/thinkers, it was money well spent. The production is top-notch and listening to Matt’s testing ramblings in each episode is usually as interesting as the interview. There are no other podcasts like these available anywhere.
Keep up the great work Matt and team! And keep the podcasts coming!