In today's retrospective, a developer complained that he had deployed a bug fix to testers and heard no feedback until 5 days later, at which point the bug was reopened.

I'm embarrassed by the above occurrence because I'm a firm believer in providing feedback on new bits as quickly as possible. Let's say you have 5 equally complex features (user stories, whatever) to test by the end of the week. All 5 are ready for testing. One approach (Approach #1) would be to spend about a day on each feature.
If you find bugs in some of these features, there may not be enough time left to get them fixed and retested. The problem gets worse if they are blocking bugs.
Approach #2 works under the assumption that blocking bugs will usually be discovered early and easily by your first tests (e.g., your happy-path tests). If you do high-level testing of all 5 features on day one, you can report the blocking bugs sooner.
While said bugs are being fixed, you can dig deeper into the other areas. If it ain't broke, you're not trying hard enough, right?
Maybe by day 3 the blocking bugs are fixed and you can interrogate those areas again. From there, you can use your tester instincts to decide how best to spend your remaining time.
Think about how often you've cracked open some new dev bits that have been sitting there waiting for days, only to find they blow up during your very first test. Some flavor of Approach #2 will help.
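
To make the difference concrete, here's a quick back-of-the-envelope sketch. This is my own illustration, not a prescription: it assumes each feature takes a full day of deep testing, and that any blocking bug surfaces during the first happy-path test of its feature. The feature names and which of them are buggy are made up.

```python
# Toy model of the two approaches. Assumptions (mine): 5 equally complex
# features, one day of deep testing each, and blocking bugs that always
# surface during the first happy-path test of their feature.

FEATURES = ["F1", "F2", "F3", "F4", "F5"]
HAS_BLOCKING_BUG = {"F1": False, "F2": False, "F3": False, "F4": True, "F5": True}

def approach_1_report_days():
    """Approach #1: finish each feature before starting the next, so a
    feature's first test doesn't run until its scheduled day."""
    return {f: day for day, f in enumerate(FEATURES, start=1)
            if HAS_BLOCKING_BUG[f]}

def approach_2_report_days():
    """Approach #2: run every feature's happy-path tests on day one, so
    every blocking bug gets reported on day one."""
    return {f: 1 for f in FEATURES if HAS_BLOCKING_BUG[f]}

if __name__ == "__main__":
    print("Approach #1 reports blocking bugs on day:", approach_1_report_days())
    # -> {'F4': 4, 'F5': 5} -- F5's blocker lands with no time left to fix it
    print("Approach #2 reports blocking bugs on day:", approach_2_report_days())
    # -> {'F4': 1, 'F5': 1} -- days 2 through 5 remain for fixes and retests
```

Real testing doesn't divide up this neatly, of course, but the shape of the result holds: the later a feature's first test runs, the later its blockers get reported.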

Thoughts? Arguments?


4 comments:

  1. Eusebiu Blindu said...

    Well, this can happen when "regression runs" are executed too often and no priority is given. By the time you get around to testing the fixed issue, maybe you're not in the mood to verify it properly anyway :)

  2. Jesper L. Ottosen said...

    In a highly integrated enterprise test environment with 30+ groups and apps, testing as much as possible really is the only way. Otherwise, single test cases and bugs stop even the slightest progress.
    Predictability about which specific day each test case gets run is lost, but progress can still be monitored in other ways.

  3. Catherine Powell said...

    I think it's a great idea in theory. In practice, I tend to have some difficulty with two things:
    1. context switches
    2. setup/configuration

    The context-switching cost is self-evident. I find it easier to, for example, spend two hours on activation than to spend an hour on activation and an hour on update. I definitely lose some time to changing my mindset.

    Setup and configuration vary based on the system I'm working with. With some systems, though, it's not trivial to set up for a new kind of test. For example, I can sit down to test the installer, and in doing so I've wiped away whatever was there before. If I then want to test, say, activation, I have to do a good install, configure the activation server, and only then can I start testing activation. That's not awful, but it's 30 minutes of work, and that adds up.

    I think you make an important point about finding things as early as possible, but for me it works best if you can split it across people and across systems so that each person still gets as much focus time and as little setup/prep time as possible.

  4. gMasnica said...

    I have run into the initial problem more times than I care to admit. My studio develops at a rapid clip, so sometimes feedback comes late. I will take your suggestions and use them when scheduling to see how that might help :)


