I recently began Integration Testing between two AUTs. Each AUT has its own bug DB and dev team. Let’s say we execute a simple test like this...

Step 1: Trigger EventA in AUT1.

Expected Result: EventB occurs in AUT2.
Actual Result: EventB does not occur in AUT2.

We’ve got the repro steps. Can we log it as a bug? Not yet, because where do we log it? AUT1's or AUT2's bug DB? In this type of testing, I think fewer bugs get logged per hour. I believe this is because the burden on the tester to perform a deep investigation is much higher than it is during Integration Testing between different modules of the same AUT.

Part of the problem may be a pride thing between the dev teams. Each dev team believes the bug belongs to the other team until the tester proves otherwise. Yikes...not very healthy! This pride thing may exist because devs are not always fond of collaborating with other dev teams. On my floor, the QA teams are forced to work together while the dev teams somehow manage to survive in their own clans. There are exceptions, of course.

Anyway, after adequate bug research, it is usually possible to determine bug ownership. But what if neither dev team is at fault? What if the bug exists merely due to an integration design oversight? Where do you log it then?

Boy, being a software tester is hard work!

When one of my devs asked me to repro a bug on his box because he didn't know how to execute my repro steps, I didn't think twice. But I was a little surprised when he said he was strongly in favor of QA always performing the repro steps on the developer's box, instead of the developer doing it themselves. He argued that devs don't always have time to work through time-consuming or complex repro steps. He tried to retract parts of his statement once he found out I was blogging it...too late.

I should, however, add that this particular dev has been tremendously helpful to me in setting up tests and helping me understand their results.

That being said, I've never heard anything like this from a developer in all my years of testing, and I think it's ridiculous. But it did make me think about how we speak to our devs via bug reports. When we log repro steps from black-box tests, we use a kind of domain-specific language that requires a user-level understanding of the AUT. To perform a repro step like "Build an alternate invoice", one must understand all the micro-steps required to build the alternate invoice, and one may have to understand what an alternate invoice is in the first place. If the next repro step is "Release the invoice to SystemX", one must know how to release the invoice, and so on.

I think it is realistic to expect developers to understand this business-centric language and to know how to perform said procedures in the AUT. And in general, time spent learning and using the AUT will help the developers improve its overall quality.

Am I right?

That is the question. Or at least, hypothetically, it could be. As my current project nears its "Go-No-Go" (I really hate that phrase) date, my decisions on how I spend my dwindling time are becoming as critical as the bugs I still have to find.

My former manager, Alex, and I had an interesting disagreement today. If given a choice between verifying fixed bugs or searching for new ones, which would be more valuable to the project at the bitter end of its development life? Alex said verifying fixed bugs, because you know the code around those bugs has been fiddled with and the chance of related bugs is significant. I would instead spend that time searching for new bugs, because I believe those unexecuted tests are more dangerous than bugs said to have been fixed.

Well, it all boils down to a bunch of variables I guess...

  • How critical are these unexecuted tests? Do we even know what they are?
  • What does history tell us about the percentage of our fixed bugs that get reopened after retest?
  • How critical are said fixed bugs?

The main reason we entered this discussion in the first place is that I am stubborn when it comes to retesting fixed bugs (we call it "verifying bug fixes"). I find it dull and a waste of my skills. It seems more like busy work. The test steps are just the repro steps, and the outcome is typically boring. "Yay! It's fixed!" is less interesting than "Yay! I can't wait to log this!".

What do you think?

The first software test blogger I read was The Braidy Tester. He is still my favorite.

I borrowed from his test automation framework, took Michael Bolton's Rapid Software Testing course based on his suggestion, and laughed at his testing song satires. But most of all, The Braidy Tester (AKA "Michael" or "Michael Hunter") inspired me to think more about testing and how to improve it.

So when he asked to interview me for his Book of Testing series, I was thrilled. I realize this post is nothing more than an attempt to stroke my own ego, but perhaps my answers to the questions will help you think about your own answers. Here it is...
