
Have you ever been to a restaurant with a kitchen window?  Well, sometimes it may be best not to show the customers what the chicken looks like until it is served.

A tester on my team has something similar to a kitchen window for his automated checks; the results are available to the project team.

Here’s the rub: 

His new automated check scenario batches are likely to result in…say, a 10% failure rate (e.g., 17 failed checks).  These failures are typically bugs in the automated checks themselves, not in the product under test.  Note: this project has only one environment at this point.

When a good, curious product owner looks through the kitchen window and sees 17 failures, it can be scary!  Are these product bugs?  Are they temporary failures?

Here’s how we solved this little problem:

  • Most of the time, we close the curtains.  The tester writes new automated checks in a sandbox, debugs them, then merges them to a public list.
  • When the curtains are open, we are careful to explain, “this chicken is not yet ready to eat”.  We added an “Ignore” attribute to the checks so they can be filtered from sight (see the sketch below).
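
If you're curious what the “Ignore” trick looks like in code, here is a minimal sketch.  The post doesn't name the test framework, so this uses JUnit 4's @Ignore annotation as a stand-in (NUnit's [Ignore] attribute works the same way), and the class and check names are invented for illustration:

```java
import org.junit.Ignore;
import org.junit.Test;

public class CheckoutChecks {

    // A mature, debugged check: it runs in every batch, and a failure
    // here really does mean "look at the product".
    @Test
    public void completedOrderShowsConfirmationNumber() {
        // assertions for the finished check would live here
    }

    // A check still being debugged in the sandbox. @Ignore keeps it out
    // of the pass/fail totals; the reason string appears in the run
    // report, so anyone peeking through the kitchen window sees
    // "skipped, not ready" instead of a scary red failure.
    @Ignore("This chicken is not yet ready to eat")
    @Test
    public void refundUpdatesOrderHistory() {
        // in-progress assertions would live here
    }
}
```

Most runners report ignored checks in a separate “skipped” count, so they never inflate the failure number a passerby sees through the window.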
photo credit: JBrazito via photopin cc

1 comment:

  1. Marcin said...

    That's true, Eric. Automated test scripts are the kind of creature that usually shows its own bugs at the beginning; once they reach enough maturity, instead of showing that the tested object is not working, they show the opposite.


