I noticed one of our development teams was creating a new Jira Issue for each bug found during the development cycle.  IMO, this is an antipattern.

Here are the problems it can create, at least the ones I can think of:

  • New Jira Issues (bug reports) create unnecessary admin work for the whole team.
    • We see these bug reports cluttering an Agile board.
    • They may have to get prioritized.
    • We have to track them: they have to get assigned, change statuses, get linked, maybe even estimated.
    • They take time to create. 
    • They may cause us to communicate via text rather than conversation.
  • Bug reports tempt lazy people into tracking progress, quality, or team performance by counting bugs.
  • It leads to confusion about how to manage the User Story.  If the User Story is done except for the open bug reports, can we mark the User Story “Done”?  Or do we need to keep it open until the logged bugs get fixed?  “Why is this User Story still in progress?”  “Oh yeah, it’s because of those linked bug reports.”
  • It’s an indication our acceptance criteria are inadequate.  That is to say, if the acceptance criteria in the User Story are not met, we wouldn’t have to log a bug report.  We would merely NOT mark the Story “Done”.
  • Bug reports may give us an excuse not to fix all bugs…”let’s fix it next Sprint”, “let’s put it on the Product Backlog and fix it some other day”…which means never.
  • It’s probably a sign the team is breaking development into a coding phase and a testing phase.  Instead, we really want the testing and programming to take place in one phase...development. 
  • It probably means the programmer is considering their code “done”, throwing it over the wall to a tester, and moving on to a different Story.  This misleads us on progress.  Untested is as good as nothing.

If the bug is an escape, i.e., it occurs in production, it’s probably a good idea to log it.

On a production support kanban development team, a process dilemma came up.  In the case where something needs to be tested by a tester:

  1. Should the tester perform the testing first in a development environment, then in a production-like environment after the thing-under-test has been packaged and deployed?  Note: in this case, the package/deploy process is handled semi-manually by two separate teams, so there is a delay.
  2. Or, should the tester perform all the testing in a production-like environment after the thing-under-test has been packaged and deployed?

Advantage of scenario 1 above:

  • Dev environment testing shortens the feedback loop.  This would be deep testing.  If problems surface, they would be quicker and less risky to fix.  The post-package testing would be shallow testing, answering questions like: did the stuff I deep-tested get deployed properly?

Advantage of scenario 2 above:

  • Knock out the testing in one environment.  The deep testing will indirectly cover the package/deployment testing.

On the surface, scenario 2 looks better because it only requires one testing chunk, NOT two chunks separated by a lengthy gap.  But what happens if a problem surfaces in scenario 2?  Now we must go through two lengthy gaps.  How about a second problem?  Three gaps.  And so on.
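To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python.  The one-gap-per-fix model is my assumption, not a measurement from the team:

    # Model of the two scenarios above.  Assumption: every bug found
    # post-deploy costs one extra package/deploy gap, while bugs found
    # in the dev environment cost none.

    def scenario_1_gaps(bugs_found: int) -> int:
        # Deep testing happens in dev, so bugs are fixed before the
        # single package/deploy gap that precedes the shallow check.
        return 1

    def scenario_2_gaps(bugs_found: int) -> int:
        # All testing happens post-deploy: the initial gap, plus one
        # fix -> re-package/deploy -> re-test gap per bug.
        return 1 + bugs_found

    for bugs in range(4):
        print(f"{bugs} bug(s): scenario 1 = {scenario_1_gaps(bugs)} gap(s), "
              f"scenario 2 = {scenario_2_gaps(bugs)} gap(s)")

Scenario 1 stays at one gap no matter how many bugs the deep testing finds; scenario 2 grows by one gap per bug.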

My conclusion: Scenario 1 is better unless this type of thing-under-test is easy and has a history of zero problems.


