“Added a validation in the service codes to make sure either "DSP" or "Already in GE" is true for Accession.”


Do your devs write bug resolution comments? If not, ask for them. All bug tracking systems have a spot for devs to write notes about what they did to fix the bug. It’s tough to get devs to write their resolutions, but it’s well worth the pain.

When you verify fixed bugs, you should be asking yourself what regression testing is necessary: what did this bug fix break? If you don’t know how the dev fixed it, you don’t really know what could have broken as a result. Sometimes devs are cool enough to provide tips on where other bugs may be lurking…


“This problem is bigger than the shoot value hiding. In fact, the entire preCat is not reloading after a save which is hiding other bugs (UI refresh and service layer issues.)”


Some of my devs are terrible at writing resolution comments. Sometimes I have no idea what they did to fix a bug. My emotions tell me to Reopen the bug. But after listening to my brain and asking the dev what they did, I usually discover an unexpected fix that still solves the problem. Dev comments would have been nice.

You may also discover your devs share some of your angst when dealing with scatterbrained stakeholders, as evidenced by comments I’ve recently seen, such as the following:


“UI now allows multiple preCats and DSP check box, even though earlier requirements conflict with both of these.”

“Yet another requirements change. The need dsp check box has always been there. It was expressed that it should only be visible when there IS and accession, but I guess we are changing that.”


Poor dev guy. I feel your pain. I miss the good old pre-agile days too.

One of my new AUTs has stakeholders who are obsessed with cosmetics. Even with an AUT full of business process gaps and showstopper bugs, their first priority during stakeholder meetings is to rattle off a list of all the cosmetic things they want changed. For example:

  • titles should be left-aligned
  • read-only fields should be borderless
  • certain fields should be bigger or smaller
  • less white space
  • no scroll bars
  • they don’t like the text color or font
  • buttons should be the same width regardless of their names

Theoretically, Agile is supposed to absorb this kind of perpetual scope creep. But I hate it because, even after we listen to the stakeholders, it’s still awkward for the dev to code and for the tester to verify.

Something truly lacking in custom in-house (not shrink-wrap) apps is the ability for users to customize the UI until they’ve bored themselves to death. I’ve never been one to bother changing skins on my apps or even my desktop background. But cosmetics are a major concern for some users. As a tester, being forced to test someone else’s notion of what looks good on a UI doesn’t interest me. Let’s write software that lets users make their own cosmetic changes on their own time. I’ll test its engine. That sounds interesting.
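Here’s a minimal sketch of what I mean, in Python, with every name invented for illustration: a tiny “cosmetics engine” that merges a user’s UI preferences over sane defaults and throws out junk values. The merge and validation rules are the part worth a tester’s time; the color choices aren’t.

    # Hypothetical sketch (all names invented): let users own the cosmetics,
    # and let me test the engine that applies them.
    DEFAULTS = {
        "title_align": "left",
        "font": "Tahoma",
        "text_color": "#000000",
        "button_width": "uniform",      # or "fit-to-label"
        "show_field_borders": True,
    }

    ALLOWED = {
        "title_align": {"left", "center", "right"},
        "button_width": {"uniform", "fit-to-label"},
    }

    def apply_preferences(user_prefs: dict) -> dict:
        """Merge a user's cosmetic choices over the defaults, dropping junk values."""
        merged = dict(DEFAULTS)
        for key, value in user_prefs.items():
            if key not in DEFAULTS:
                continue                # ignore settings the engine doesn't know
            if key in ALLOWED and value not in ALLOWED[key]:
                continue                # keep the default for out-of-range choices
            merged[key] = value
        return merged

    # The testable part is the merge/validation logic, not anyone's taste:
    assert apply_preferences({"title_align": "sideways"})["title_align"] == "left"
    assert apply_preferences({"font": "Verdana"})["font"] == "Verdana"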

JART found a bug this morning. But it wasn't in my AUT.

JART had been happily smoke testing our pre-production environment this morning for an hour, and I was eagerly awaiting the results for a group of anxious managers. After producing its auto-generated test results consistently for the 144 previous test runs, QTP suddenly decided to give me this instead of the results:

[screenshot missing from the original post]
I didn’t change any versions of any software on this box, of course. After I waited another hour for JART to repeat all the tests, the next results file was fine. …Annoying.

We’ve been spinning our wheels investigating a prod bug that corrupted some data yesterday. Once we cracked it, we realized the bug had been found and fixed more than a year ago. …Depressing. My first thought? Why didn’t I catch this when it broke?

Perfecting regression testing is a seemingly impossible task. Some of you are thinking, “just use test automation...that's what they said in that Agile webinar I just attended”. If my team had the bandwidth to automate every test case and bug we conceived of, the automation stack would require an even larger team to maintain. And it would need its own dedicated test team to ensure it properly executed all tests.

It’s even more frustrating if you take automated regression testing off the table. Each test cycle would need to grow by the amount of time it took to test the new features in the previous build, right? So if iteration 4 is a two-week iteration and I spend one of those weeks testing new features, then iteration 5 needs to be a three-week iteration; I’ll need that extra week to run all the iteration 4 tests again. And they’ll give me eight weeks to test iteration 10, right?

Wrong? You mean I get the same amount of test time each iteration, even though the number of tests I have to execute keeps growing significantly? Somehow, this is a reality we all deal with.
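To put numbers on that squeeze, here’s a toy calculation in Python (hypothetical figures, assuming one extra week of tests accumulates per iteration and everything gets rerun by hand):

    def naive_cycle_weeks(iteration, base_iteration=4, base_weeks=2):
        """Weeks of testing needed if every prior iteration's tests are rerun manually."""
        return base_weeks + (iteration - base_iteration)

    for it in range(4, 11):
        print(f"iteration {it}: {naive_cycle_weeks(it)} weeks of testing")
    # iteration 4: 2 weeks ... iteration 10: 8 weeks -- which nobody is ever granted.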

Obviously, none of us has "perfect" regression testing. The goal is probably "good enough," but the notion of improving it is probably driving you as crazy as it’s driving me. This topic gets glossed over so often that I wonder how many testers actually have an effective strategy.

What is your regression test strategy?


