We’ve been spinning our wheels investigating a prod bug that corrupted some data yesterday. Once we cracked it, we realized the bug had been found and fixed more than a year ago. …Depressing. My first thought? Why didn’t I catch this when it broke?

Perfecting regression testing is a seemingly impossible task. Some of you are thinking, “just use test automation... that's what they said in that Agile webinar I just attended”. If my team had the bandwidth to automate every test case and bug we conceived of, the automation stack would require an even larger team to maintain. And it would need its own dedicated test team to ensure it properly executed all the tests.

It’s even more frustrating if you remove the option of automated regression testing. Each test cycle would need to grow by the amount of time it took to test the new features in the previous build, right? So if iteration 4 is a two-week iteration and I spend a week testing new features, iteration 5 needs to be a three-week iteration; I’ll need that extra week to run all the iteration 4 tests again. They’ll give me eight weeks to test iteration 10, right?

Wrong? You mean I have the same amount of test time each iteration, even though the number of tests I have to execute is growing significantly? This is a reality we all somehow deal with.

Obviously, none of us has "perfect" regression testing. The goal is probably "good enough," but the notion of improving it is probably driving you crazy, as it does me. This topic is glossed over so often that I wonder how many testers have an effective strategy.

What is your regression test strategy?

6 comments:

  1. Alex said...

    No, the testers can't do it all themselves. Testing has to be a team responsibility. That means testers need to partner with developers to get the tests written.
    Otherwise, you either need a bunch more testers, or you just try to keep up as best you can.

  2. Marlena said...

    I am the only tester in my group, and I'm automating as much regression as possible. I asked my boss which tests he wanted to see automated first and added to that the tests I feel are particularly important or just plain annoying to run.

    You are right, I will never have everything automated, but I will be testing more than if I didn't automate.

    Building a framework that lets me add and run tests quickly has helped. It does require its own maintenance, but OO design and lots of composition help with that.
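
    The kind of composition Marlena describes might look something like this rough Python sketch. The names here (`Step`, `Test`, `run_suite`) are made up for illustration, not from any real framework: shared steps get written once and composed into many tests, so adding a new test is cheap.

    ```python
    class Step:
        """A reusable action shared across many tests (e.g. log in, open a record)."""
        def __init__(self, name, action):
            self.name = name
            self.action = action  # callable taking a shared context dict

        def run(self, context):
            self.action(context)

    class Test:
        """A test is a named composition of steps plus a final check."""
        def __init__(self, name, steps, check):
            self.name = name
            self.steps = steps
            self.check = check

        def run(self):
            context = {}
            for step in self.steps:
                step.run(context)
            assert self.check(context), self.name

    def run_suite(tests):
        """Run every test; collect failure names instead of stopping early."""
        failures = []
        for test in tests:
            try:
                test.run()
            except AssertionError:
                failures.append(test.name)
        return failures

    # Example: two tests sharing the same "login" step.
    login = Step("login", lambda ctx: ctx.update(user="alice"))
    add_item = Step("add_item", lambda ctx: ctx.setdefault("items", []).append("x"))
    tests = [
        Test("cart_has_item", [login, add_item], lambda ctx: ctx.get("items") == ["x"]),
        Test("demo_failure", [login], lambda ctx: False),
    ]
    ```

    The maintenance win is that when a shared step's behavior changes (say, the login flow), you fix one `Step` rather than every test that uses it.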

  3. Simon Morley said...

    One way of doing it is a so-called light regression test (LRT): the most valuable/important tests that should pass each iteration. This subset can be derived from a combination of risk-based techniques, coverage, and even execution time.

    The key is that the selection should not be static, i.e., it needs feedback and periodic review. In addition, it should be supplemented with a strategy to cover all (or x%) of your test base over y cycles/iterations. This way you get the minimum regression subset (best bang for the buck) plus the back-up of cycling through the whole test base.

    Depending on customer release schedules, a run-through of a larger/wider/whole selection of the regression test base can be made for the "release" build.

    I'm currently looking at alternatives for constructing algorithms for this feedback process, i.e., modifying the LRT based on the last results, bug reports/FST, and ongoing development...
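
    One very simple reading of Simon's scheme, sketched in Python: keep a fixed high-risk core that runs every iteration, and round-robin the rest of the base over y iterations so everything gets covered eventually. The risk scores and the function itself are assumptions for illustration; in practice the scores would come from the feedback loop he describes (results, bug reports, ongoing development).

    ```python
    def select_tests(all_tests, risk, core_size, cycle_len, iteration):
        """Pick the always-run high-risk core plus this iteration's slice of the rest.

        all_tests : list of test names
        risk      : dict mapping test name -> risk score (higher = more important)
        core_size : how many top-risk tests form the always-run LRT subset
        cycle_len : cover the remaining tests over this many iterations (the "y")
        iteration : current iteration number (0-based)
        """
        ranked = sorted(all_tests, key=lambda t: risk.get(t, 0), reverse=True)
        core = ranked[:core_size]
        rest = ranked[core_size:]
        # Round-robin: test i in `rest` runs when iteration % cycle_len == i % cycle_len,
        # so over cycle_len consecutive iterations every test runs at least once.
        rotating = [t for i, t in enumerate(rest) if i % cycle_len == iteration % cycle_len]
        return core + rotating
    ```

    A full run-through for a release build is then just `select_tests` with `core_size=len(all_tests)`, or simply `all_tests`.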

  4. Michael said...

    As you suggest, complete regression testing is impossible. Pop open Task Manager, and have a look at the processes running on your machine. It'll never again be the same as it is now. So even if it were necessary, we couldn't achieve it.

    Part of my take on the issue is here. Another part is that testing is always a sampling exercise. To cover the program well, our samples should be broad, biased towards risk, recent changes, and real-world operation.

    A lot of the testing literature concentrates on focusing heuristics. That can be valuable, but we also have to mix things up with some tests that aren't focused on a particular use case or a particular requirement or a particular risk. That's how we find out about new problems, even as we're aware of old ones.

    One approach to that is to use the product, behaving as real, variable, people with work to do. I have strong reason to believe that far too few testers do that. How do I justify the belief? Don't we all have the experience of using products that display prominent bugs the first time we use them ourselves?

    ---Michael B.

  5. Eric Jacobson said...

    Marlena,

    Here is what I wonder...

    Your model is great now. But doesn't it seem like the more automated tests you have, the more maintenance they will take each iteration (no matter how efficient your framework is)? Eventually you'll either have to stop using certain tests or stop writing as many new ones... right?



