I love all the insightful responses to my To Bug Or Not To Bug post. Contrary to the voting results, the comments indicated most testers would log the bug.

“When in doubt, log it”
“Always err on the side of logging it”
“Log everything”
“The tester needs to log the bug so you have a record of the issue at the very least”

These comments almost convinced me to change my own opinion. If I were the tester, I would not have logged said bug. I have a model in my head of the type of user that uses my AUTs. My model user knows the difference between a double-click and a triple-click. And if they get it wrong, they are humble enough not to blame the software.

But the specifics on this bug are not my main thought here.

Within the last 6 months, I’ve started to disagree with some of the above comments; comments I used to make myself. As testers, it’s not up to us to decide which bugs to fix. I agree. But since we have the power to log any bug we choose, we need to make sure we don’t abuse this power.

  • Bugs create overhead. They have to be logged properly with repro steps, read and understood in triage meetings, tracked, and assigned to someone. Bugs linger in developer bug queues, sometimes with negative connotations. All these things nickel-and-dime the team’s available work time.
  • Your reputation as a tester is partly determined by the kinds of bugs you log.
That being said, I’m giving this tester the benefit of the doubt. She has a very good track record of predicting user behavior. Her actual decision was to log the bug and offer to reject it if users don’t encounter the issue during UAT. Not a bad compromise, I guess. Another approach would have been to not log the bug unless the users notice it. Which is less work? The former has the advantage of a little CYA for the tester...which is unfortunately what we desire sometimes.

Yesterday I watched an interesting discussion between a really good tester and a really good developer. I don’t want to spoil the fun by adding my opinion…yet. So what do you think? Should the tester log a new bug? Please use the voting control below this post.

User Story: As a scheduler user, I want to edit a field with the fewest steps possible so I can quickly perform frequent editing tasks on the schedule.

Implementation: If users double-click on a target, they get a window with an editable field and a flashing cursor in the field.

Tester:
I was testing this, and it seems to work if you do a distinct double click. It’s really easy to triple click, though, and then the cursor isn’t in the field even though [the window] is in edit mode. My feeling is that the users will see this as ‘it works sometimes and not others’. Is there any way to block that third click from happening if we get a double click from the users in that field?

Dev:
Not really, you would have to intercept Windows events, and in that case you’re just masking the problem and encouraging users to continue practicing bad habits. The [problem] in this case would be especially bad, because they would double-click in the field and it wouldn’t even enter edit mode. If they accidentally triple-click, they can just click in the field and continue, but at least the control would be in edit mode.

Tester:
I just have a feeling we’re going to have complaints about it. I hadn’t actually realized I’d triple-clicked several times; it just kept popping up in edit mode, sometimes with a cursor in the field and sometimes without. I’d thought it was only partially fixed until I realized that’s what I was doing.

Dev:
I see what you’re saying, and I guess it’s fine to log a bug, but what is the threshold? Is a triple click based on your average speed of clicking or a slow user’s speed of clicking? Should I wait 100 milliseconds? 300? It would change from user to user. Windows clearly defines a double-click event based on system settings that users can adjust for their own speed of clicking. If we start inventing a triple-click behavior, then we take over functions designed to be handled by the operating system, which could easily introduce many other bugs. Detecting such an event requires a lot of thought and code, and would at best be buggy and at worst introduce even more inconsistent behavior. Just my opinion on it though.
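
To make the dev’s point concrete, here’s a toy sketch of my own (Tk via Python, not the AUT’s code). Tk happens to expose a triple-click event directly; Win32 does not (it raises only a double-click message, timed by the user-adjustable GetDoubleClickTime setting), which is why our dev would have to intercept events and invent his own threshold. Notice that the triple-click handler fires after the double-click handler has already entered edit mode, so “blocking” the third click at that point is already too late:

    # A toy sketch (mine, not the AUT's code) of the click sequence the
    # tester describes. A triple click delivers the double-click event on
    # click two and the triple-click event on click three, so edit mode is
    # already entered by the time the third click lands.
    import tkinter as tk

    root = tk.Tk()
    field = tk.Label(root, text="double-click me to edit")
    field.pack(padx=40, pady=40)

    def on_double(event):
        print("double-click: enter edit mode, place the cursor in the field")

    def on_triple(event):
        print("triple-click: the third click arrives after edit mode is entered")

    field.bind("<Double-Button-1>", on_double)
    field.bind("<Triple-Button-1>", on_triple)
    root.mainloop()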

Should the tester log the bug?


I had the pleasure of eating lunch with Dorothy Graham at STAREAST. Dorothy is the coauthor of “Software Test Automation”, which has been a well-respected book on the subject for the last 10 years. A colleague recently referred me to a great article, “That’s No Reason to Automate!”, coauthored by Dorothy in the current issue of Better Software.

In the article, Dorothy debunks many popular objectives used for test automation and suggests more reasonable versions of each. This article is helping me wrap my brain around my own test automation objectives (I just hired a test automator) but it was also just great to hear a recognized test automation expert empower manual testers so much.

I'll paraphrase/quote some sentences that caught my attention and some of the (needs improvement) test automation objectives they contradict.

Objective: Automation should find more bugs.
  • “Good testing is not found in the number of tests run, but in the value of the tests that are run.”
  • The factor that determines if more bugs will be found is the quality of the tests, not the quantity. Per Dorothy, “It is the testing that finds bugs – not the automation”. The trick is to free up the tester’s time so they can find more bugs. This may be achieved by using automation to execute the mundane tests (that probably won’t find more bugs).

Objective: Automation should reduce testing staff.
  • More staff are typically needed to incorporate test automation. People with test script development skills will need to be added, in addition to people with testing skills.
  • Automation supports testing activities but does not replace them. Test tools cannot make intelligent decisions about which tests to run and when, nor can they analyze results and investigate problems.

Objective: Automation should reduce testing time.
  • “The main thing that causes increased testing time is the quality of the software – the number of bugs that are already there…the quality of the software is the responsibility of the developers, not the testers or the test automators.”

Objective: Automation should allow us to run more tests to get more coverage.
  • A count of the number of automated tests is a useless way of gauging the contribution of automation to testing. If the test team ends up with a set of tests that are hardly ever run by the testers, that is not the fault of the test automators. That is the fault of the testers for choosing the wrong tests to automate.

Objective: We should automate X% of our tests.
  • Automating 2% of your most important tests could be better than automating 50% of your tests that don’t provide value.

Note: The article and book were co-authored by both Dorothy Graham and Mark Fewster. Although I did not have lunch with Mark, I'm sure he is a great guy too!

In today's retrospective, a developer complained that he deployed a bug fix to testers and didn't hear any feedback until 5 days later, at which point the bug was reopened.

I'm embarrassed by the above occurrence because I'm a firm believer in providing feedback on new bits as quickly as possible. Let's say you have 5 equally complex features (user stories, whatever) to test by the end of the week. All 5 are ready for testing. One approach (Approach #1) would be to spend about a day on each feature.
If you manage to find bugs in some of these features, it's possible there won't be enough time to get them fixed and retested. The problem gets worse if these are blocking bugs.
Approach #2 works under the assumption that blocking bugs will usually be discovered early and easily by your first tests (e.g., your happy path tests). If you do high-level testing of all 5 features on day one, you can report the bugs sooner.
While said bugs are being fixed, you can dig deeper into the other areas. If it ain't broke, you're not trying hard enough, right?
Maybe by day 3 the blocking bugs are fixed and you can interrogate those areas again. And perhaps you can use your tester skills to determine how to spend your remaining time.
Think about how often you've cracked open some new dev bits that have been sitting there waiting for days, only to find they blow up during your very first test. Some flavor of Approach #2 will help.
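
Here’s a back-of-the-napkin sketch (my own illustration, with a made-up buggy feature) of why Approach #2 buys you time, assuming a blocking bug would fall out of the first happy-path test of its feature:

    # A toy illustration (mine, not from the retrospective): when does a
    # blocking bug get reported under each approach? Assume the hypothetical
    # feature "D" has a blocking bug caught by its first happy-path test.
    FEATURES = ["A", "B", "C", "D", "E"]
    BUGGY = {"D"}

    def approach_1():
        """Approach #1: a full day of deep testing per feature, in order."""
        for day, feature in enumerate(FEATURES, start=1):
            if feature in BUGGY:
                return day  # day 4: little time left to fix and retest

    def approach_2():
        """Approach #2: a quick high-level pass over all features on day one."""
        for feature in FEATURES:
            if feature in BUGGY:
                return 1  # day 1: the fix lands while you dig deeper elsewhere

    print(f"Approach #1 reports the blocking bug on day {approach_1()}")
    print(f"Approach #2 reports the blocking bug on day {approach_2()}")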

Thoughts? Arguments?



