Well…yes. I would.
The most prolific bug finder on my team is struggling with this question. The fewer of her bugs the team decides to fix, the less interested she grows in reporting them. Can you relate?
There is little satisfaction in reporting bugs that nobody wants to hear about or fix. In fact, it can be quite frustrating. Nevertheless, when our stakeholders choose not to fix certain classes of bugs, they are sending us a message about what is important to them right now. And as my friend and mentor Michael Bolton likes to say:
If they decide not to fix my bug, it means one of two things:
- Either I’m not explaining the bug well enough for them to understand its impact,
- or it’s not important enough for them to fix.
So as long as you’re practicing good bug advocacy, it must be the second bullet above. And IMO, the customer is always right.
Nevertheless, we are testers. It is our job to report bugs despite adversity. If we report 10 for every 1 that gets fixed, so be it. We should not take this personally. However, we may want to:
- Adjust our testing as we learn more about what our stakeholders really care about.
- Determine a non-traditional method of informing our team/stakeholders of our bugs.
- Individual bug reports are expensive because they slowly suck everyone’s time as they flow through or sit in the bug repository. We wouldn’t want to knowingly start filling our bug report repository with bugs that won’t be fixed.
- One approach would be a verbal debrief with the team/stakeholders after testing sessions. Your testing notes should have enough information to explain the bugs.
- Another approach could be a “super bug report”; one bug report that lists several bugs. Any deemed important can get fixed or spun off into separate bug reports if you like.
It’s a cliché, I know. But it really gave me pause when I heard Jeff “Cheezy” Morgan say it during his excellent STAREast track session, “Android Mobile Testing: Right Before Your Eyes”. He said something like, “Instead of looking for bugs, why not focus on preventing them?”
Cheezy demonstrated Acceptance Test Driven Development (ATDD) by giving a live demo, writing Ruby tests via Cucumber, for product code that didn’t exist. The tests failed until David Shah, Cheezy’s programmer, wrote the product code to make them pass.
(Actually, the tests never passed, which they later blamed on incompatible Ruby versions…ouch. But I’ll give these two guys the benefit of the doubt.)
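The rhythm Cheezy demonstrated can be sketched in plain Ruby (skipping Cucumber for brevity). Everything here is a hypothetical stand-in, not the app from his demo:

```ruby
# ATDD rhythm, minimal sketch: the acceptance test is written FIRST.
# Run before the product code below exists, the test simply raises a
# NameError; the programmer then writes the code to make it pass.
# Cart and its behavior are invented for illustration only.
class Cart
  def initialize
    @prices = []
  end

  def add(price)
    @prices << price
    self # return self so calls can be chained
  end

  def total
    @prices.sum
  end
end

# The acceptance test, agreed on by tester/programmer/BA up front:
cart = Cart.new.add(3).add(4)
raise "expected a total of 7" unless cart.total == 7
puts "acceptance test passed"
```

The point is the ordering, not the tooling: the test exists (and fails) before the product code does, so the programmer learns about expected behavior while writing it.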
Now back to my blog post title. I find this mindshift appealing for several reasons, some of which Cheezy pointed out and some of which he did not:
- Per Cheezy’s rough estimate 8/10 bugs involve the UI. There is tremendous benefit to the programmer knowing about these UI bugs while the programmer is writing the UI initially. Thus, why not have our testers begin performing exploratory testing before the Story is code complete?
- Programmers are often incentivized to get something code complete so the testers can have it (and so the programmers can work on the next thing). What if we could convince programmers it’s not code complete until it’s tested?
- Maybe the best time to review a Story is when the team is actually about to start working on it; not at the beginning of a Sprint. And what do we mean when we say the team is actually about to start working on it?
- First we (Tester, Programmer, Business Analyst) write a bunch of acceptance tests.
- Then, we start writing code as we start executing those tests.
- Yes, this is ATDD, but I don’t think automation is as important as the consultants say. More on that in a future post.
- Logging bugs is soooooo time consuming and can lead to dysfunction. The bug reports have to be managed and routed appropriately. People can’t help but count them and use them as measurements for something…success or failure. If we are doing bug prevention, we never need to create bug reports.
Okay, I’m starting to bore myself, so I’ll stop. Next time I want to explore Manual ATDD.
- Measuring your Automation might be easy. Using those measurements is not. Examples:
- # of times a test ran
- how long tests take to run
- how much human effort was involved to execute and analyze results
- how much human effort was involved to automate the test
- number of automated tests
- EMTE (Equivalent Manual Test Effort) – What effort it would have taken humans to manually execute the same test being executed by a machine. Example: If it would take a human 2 hours, the EMTE is 2 hours.
- How can this measure be useful? It is an easy way to show management the benefits of automation (in a way managers can easily understand).
- How can this measure be abused? If we inflate EMTE by re-running automated tests just for the sake of increasing EMTE, we are misleading people. Sure, we can run our automated tests every day, but unless the build is changing every day, we are not adding much value.
- How else can this measure be abused? If you hide the fact that humans are capable of noticing and capturing much more than machines.
- How else can this measure be abused? If your automated tests cannot be executed by humans, or your human tests cannot be executed by a machine, there is no equivalent manual effort to compare against.
- ROI (Return On Investment) – Dorothy asked the students what ROI they had achieved with the automation they created. All 6 students who answered got it wrong; they explained various benefits of their automation, but none were expressed as ROI. ROI should be a number, hopefully a positive number.
- The trick is to convert tester time effort to money.
- ROI does not measure things like “faster execution”, “quicker time to market”, or “test coverage”.
- How can this measure be useful? Managers may think there is no benefit to automation until you tell them there is. ROI may be the only measure they want to hear.
- How is this measure not useful? ROI may not be important. It may not measure your success. “Automation is an enabler for success, not a cost reduction tool” – Yoram Mizrachi. Your company probably hires lawyers without calculating their ROI.
- She did the usual tour of poor-to-better automation approaches (e.g., capture/playback to advanced keyword-driven frameworks). I’m bored by this so I have a gap in my notes.
- Testware architecture – consider separating your automation code from your tool, so you are not tied to the tool.
- Use pre and post processing to automate test setup, not just the tests. Everything should be automated except selecting which tests to run and analyzing the results.
- If you expect a test to fail, use the execution status “Expected Fail”, not “Fail”.
- Comparisons (i.e., asserts, verifications) can be “specific” or “sensitive”.
- Specific Comparison – an automated test only checks one thing.
- Sensitive Comparison – an automated test checks several things.
- I wrote “awesome” in my notes next to this: If your sensitive comparisons overlap, 4 tests might fail instead of 3 passing and 1 failing. IMO, this is one of the most interesting decisions an automator must make. I think it really separates the amateurs from the experts. Nicely explained, Dorothy!
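To make the ROI point above concrete, here is one hedged way to express automation ROI as a single number, by converting tester hours to money. Every figure below is invented for illustration:

```ruby
# ROI expressed as a number: (benefit - investment) / investment.
# All figures are hypothetical, for illustration only.
hourly_rate        = 50.0   # cost of one tester-hour, in dollars
hours_to_automate  = 80.0   # one-time effort to build the suite
hours_to_maintain  = 20.0   # upkeep over the period measured
manual_hours_saved = 300.0  # execution effort the suite replaced

investment = (hours_to_automate + hours_to_maintain) * hourly_rate
benefit    = manual_hours_saved * hourly_rate
roi        = (benefit - investment) / investment

puts format("ROI: %.2f (i.e., %.0f%%)", roi, roi * 100)
# prints "ROI: 2.00 (i.e., 200%)"
```

A positive number means the automation paid for itself over the period measured; “faster feedback” and “better coverage” are real benefits, but they are not ROI until they are converted to money like this.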
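The overlapping-comparison point above can be sketched in a few lines of Ruby. The page fields and checks are hypothetical; the idea is that one real bug (a bad header) fails every test whose sensitive comparison re-checks the header:

```ruby
# One real bug: the header text is wrong. Everything else is fine.
page = { header: "Wlecome", body: "...", footer: "ok", nav: "ok" }

# Sensitive comparisons: each test also re-checks the shared header.
sensitive_tests = {
  "body test"   => -> { page[:header] == "Welcome" && page[:body]   == "..." },
  "footer test" => -> { page[:header] == "Welcome" && page[:footer] == "ok" },
  "nav test"    => -> { page[:header] == "Welcome" && page[:nav]    == "ok" },
  "header test" => -> { page[:header] == "Welcome" },
}

# Specific comparisons: each test checks exactly one thing.
specific_tests = {
  "body test"   => -> { page[:body]   == "..." },
  "footer test" => -> { page[:footer] == "ok" },
  "nav test"    => -> { page[:nav]    == "ok" },
  "header test" => -> { page[:header] == "Welcome" },
}

sensitive_failures = sensitive_tests.reject { |_, check| check.call }.keys
specific_failures  = specific_tests.reject { |_, check| check.call }.keys

puts "sensitive: #{sensitive_failures.size} failures"  # all 4 tests fail
puts "specific:  #{specific_failures.size} failure"    # only the header test fails
```

Neither style is simply “right”: sensitive comparisons catch unexpected changes anywhere they look, while specific comparisons point straight at the one thing that broke. Deciding where each belongs is the interesting decision Dorothy was getting at.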
If you want to have test automation
And don't care about trials and tribulation
Just believe all the hype
Get a tool of each type
But be warned, you'll have serious frustration!
(a limerick by Dorothy Graham)
I attended Dorothy Graham’s STARCanada tutorial, “Managing Successful Test Automation”. Here are some highlights from my notes:
- “Test execution automation” was the tutorial’s concern. I like this clarification; it sets the topic apart from “exploratory test automation” or “computer-assisted exploratory testing”.
- Only 19% of people using automation tools (in Australia) are getting “good benefits”…yikes.
- Testing and Automating should be two different tasks, performed by different people.
- A common problem with testers who try to be automators: Should I automate or just manually test? Deadline pressures make people push automation into the future.
- Automators – People with programming skills responsible for automating tests. The automated tests should be able to be executed by non-technical people.
- Testers – People responsible for writing tests, deciding which tests to automate, and executing automated tests. “Some testers would rather break things than make things”.
- Dorothy acknowledged the term “checking” but did not use it herself during the tutorial.
- Automation should be like a butler for the testers. It should take care of the tedious and monotonous, so the testers can do what they do best.
- A “pilot” is a great way to get started with automation.
- Calling something a “pilot” forces reflection.
- Set easily achievable automation goals and reflect after 3 months. If goals were not met, try again with easier goals.
- Bad Test Automation Objectives – And Why:
- Reduce the number of bugs found by users – Exploratory testing is much more effective at finding bugs.
- Run tests faster – Automation will probably run tests slower if you include the time it takes to write, maintain, and interpret the results. The only testing activity automation might speed up is “test execution”.
- Improve our testing – The testing needs to be improved before automation even begins. If not, you will have poor automation. If you want to improve your testing, try just looking at your testing.
- Reduce the cost and time for test design – Automation will increase it.
- Run regression tests overnight and on weekends – If your automated tests suck, this goal will do you no good. You will learn very little about your product overnight and on weekends.
- Automate all tests – Why not just automate the ones you want to automate?
- Find bugs quicker – It’s not the automation that finds the bugs, it’s the tests. Tests do not have to be automated, they can also be run manually.
- The thing I really like about Dorothy’s examples above is that she helps us separate the testing activity from the automation activity. It helps us avoid common mistakes, such as forgetting to focus on the tests first.
- Good Test Automation Objectives:
- Free testers from repetitive test execution to spend more time on test design and exploratory testing – Yes! Say no more!
- Provide better repeatability of regression tests – Machines are good checkers. These checks may tell you if something unexpected has changed.
- Provide test coverage for tests not feasible for humans to execute – Without automation, we couldn’t get this information.
- Build an automation framework that is easy to maintain and easy to add new tests to.
- Run the most useful tests, using under-used computer resources, when possible – This is a better objective than running tests on weekends.
- Automate the most useful and valuable tests, as identified by the testers – much better than “automate all tests”.
Last week, at STARCanada, I met several enthusiastic testers who might make great testing conference speakers. We need you. Life is too short for crappy conference talks.
I’m no pro by any means. But I have been a track speaker at STARWest, STARCanada, STPCon, and will be speaking at STAREast in 2 weeks.
Ready to give it a go? Here is my advice on procuring your first speaking slot:
- Get some public speaking experience. They are probably not going to pick you without it. If you need experience, try speaking to a group of testers at your own company or at an IT group that meets within your city, volunteer for an emerging-topic talk, or sign up for a lightning talk at a conference that offers them, like CAST.
- Come up with a killer topic. See what speakers are currently talking about and talk about something fresh. Make sure your topic can appeal to a wider audience. Experience reports seem appealing.
- Referrals – meet some speakers or industry leaders with some clout and ask them to review your talk. If they like it, maybe they would consider putting in a good word for you.
- Pick one or more conferences and search for their speaker submission deadlines and forms (e.g., Speaking At SQE Conferences). If you’ve attended conferences, you are probably already on their mailing list and may be receiving said requests. I’m guessing the 2014 SQE conference speaker submission will open in a few months.
- Submit the speaker submission form. Make sure you have an interesting sounding title. You’ll be asked for a summary of your talk including take-aways and maybe how you intend to give it. This is a good place to offer something creative about the way you will deliver your topic (e.g., you made a short video, you will do a hands-on group exercise).
- Wait. Eventually you’ll receive a call or email. Sound competent. Know your topic and be prepared to answer tough questions about it.
- If you get rejected, politely ask what you could do differently to have a better chance of getting picked in the future.
It is not easy to get picked. I was rejected several times and eventually got a nice referral from Lynn McKee, an experienced speaker with a great reputation; that helped. One of my friends and colleagues, who is far more capable than I am, IMO, has yet to get picked up as a speaker. So I don’t know what secret sauce they are looking for.
BTW - Speaking at conferences has both advantages and disadvantages to consider.
Advantages:
- The opportunity to build your reputation as an expert of sorts in the testing community.
- It helps you refine your ideas and possibly spread knowledge.
- Free registration fees. This makes it more likely your company will pay your hotel/travel costs and let you attend.
Disadvantages:
- Public speaking is scary as hell for most of us. The weeks leading up to a conference can be stressful.
- Putting together good talks and practicing takes lots of time. I took days off work to prepare.
Don’t you just hate it when your Business Analysts (or others) beat you to it and point out bugs before you have a chance to?
It feels so unfair! They can send an email that says, “the columns aren’t in the right order, please fix it” and the programmers snap to attention like good little soldiers. Meanwhile, you saw the same problem but were still investigating it and confirming your findings with multiple oracles.
Well, this is not a bug race. There is no “my bug”. If someone else on your team is reporting problems, this helps you. And it certainly helps the team. You may want to observe the types of things these non-testers report and adjust your testing to target other areas.
But try to convert your frustration to admiration. Tell them “nice catch” and “thanks for the help”. Encourage more of the same.
I’m not asking if they *can* run unattended. I’m asking if they do run unattended…consistently…without ever failing to start, hanging, or requiring any human intervention whatsoever…EVER.
Automators, be careful. If you tell too many stories about unattended check-suite runs, the non-automators just might start believing you. And guess what will happen if they start running your checks? You know that sound when Pac-Man dies? That’s what they’ll think of your automated checks.
I remember hearing a QA Director attempt to encourage “test automation” by telling fantastical stories of his tester past:
“We used to kick off our automated tests at 2PM and then go home for the day. The next day, we would just look at the execution results and be done.”
Years later, I’ve learned to be cynical about said stories. In fact, I have yet to see an automated test suite (including my own) that consistently runs without ever requiring the slightest intervention from humans, who unknowingly may:
- Prep the test environment “just right” before clicking “Run”.
- Restart the suite when it hangs and hope the anomaly goes away.
- Re-run the failed checks because they normally pass on the next attempt.
- Realize the suite works better when kicked off in smaller chunks.
- Recognize that sweet spot, between server maintenance windows, where the checks have a history of happily running without hardware interruptions.