If you read part 1, you may be wondering how my automated check performed…

The programmer deployed the seeded bug, and I’m happy to report that my automated check found it in 28 seconds!

Afterwards, he seeded two additional bugs.  The automated check found those as well.  I had to temporarily modify the automated check code to ignore the first bug in order to find the second, because the check stops as soon as it finds one problem.  I could tweak the code to collect problems and keep checking, but I prefer the current design.

Here is the high-level, generic design of that check:

Build the golden masters:

  • Make scalable checks - Before test execution, build as many golden masters as your coverage goals require.  This is a one-time-only task (until the golden masters need to be updated to reflect expected changes).
  • Bypass GUI when possible – Each of my golden masters consists of the response XML from a web service call, saved to a file.  Each XML response has over half a million nodes, which are mapped to a complex GUI.  My automated check bypasses the GUI entirely.  GUI automation could never have found the above seeded bug in 28 seconds: my product-under-test takes about 1.5 minutes just to log in and navigate to the module being tested, and waiting for the GUI to refresh after the countless service calls made by the automated check would have taken hours.
  • Golden masters must be golden! Use a known good source for the service call.  I used Production because my downstream environments are populated with data restored from production.  You could use a test environment, as long as it is in a known good state.
  • Use static data - Build the golden masters using service request parameters that return a static response.  In other words, when I call the service in the future, I want the same data returned.  I chose parameters that pull historical data because I expect that data to be the same next week, next month, and next year.
  • Automate golden master building - I wrote a utility method to build my golden masters.  It is basically the same code the test method uses to build the new objects that get compared to the golden masters (a sketch follows this list).
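To make the above concrete, here is a minimal sketch of a golden-master-building utility.  It is not my actual code: the endpoint, request parameters, scenario names, and the Python/requests stack are all assumptions, chosen only to illustrate calling a service with static-data parameters and archiving the raw XML responses to files.

```python
# Sketch only: build golden masters by archiving known-good service responses.
# The service URL and scenario parameters below are hypothetical examples.
from pathlib import Path

import requests  # assumed HTTP client; any client that returns raw XML works

GOLDEN_MASTER_DIR = Path("golden_masters")

# Each scenario uses request parameters chosen to return static (historical)
# data, so the same call should return the same XML next week, month, or year.
SCENARIOS = {
    "scenario_01": {"accountId": "1001", "asOfDate": "2012-01-31"},
    "scenario_02": {"accountId": "1002", "asOfDate": "2012-01-31"},
    # ... one entry per golden master
}

def build_golden_masters(service_url: str) -> None:
    """One-time task: call the service for each scenario and save the raw XML."""
    GOLDEN_MASTER_DIR.mkdir(exist_ok=True)
    for name, params in SCENARIOS.items():
        response = requests.get(service_url, params=params, timeout=60)
        response.raise_for_status()
        (GOLDEN_MASTER_DIR / f"{name}.xml").write_text(response.text, encoding="utf-8")

if __name__ == "__main__":
    build_golden_masters("https://prod.example.com/service")  # known-good source
```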

Do some testing:

  • Compare - This is the test method.  It calls the code-under-test using the same service request parameters used to build the golden masters, then compares the XML service response, line by line, to the archived golden master (see the sketch after this list).
  • Ignore expected changes - In my case there are some XML nodes the check ignores.  These are nodes with values I expect to differ.  For example, the CreatedDate node of the service response object will always be different from that of the golden master.
  • Report - If any non-ignored XML line differs, it’s probably a bug: fail the automated check, report the differences with line number and file references (see below), and investigate.
  • Write Files - For my goals, I have 11 different golden masters (to compare with 11 distinct service response objects).  The automated check loops through all 11 golden master scenarios, writing each service response XML to a file.  The automated check doesn’t use the files; they are there for me.  They give me the option to manually compare suspect new files to golden masters with a diff tool, an effective way of investigating bugs and spotting patterns.
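Below is a minimal sketch of the compare step, under the same assumptions as the build sketch above (hypothetical service URL, scenario parameters, and file layout; Python and requests stand in for whatever stack you actually use).  It skips lines containing ignored nodes such as CreatedDate, fails on the first non-ignored difference, and writes the new response to a file for manual diffing.

```python
# Sketch only: compare a fresh service response to its archived golden master.
from pathlib import Path

import requests

GOLDEN_MASTER_DIR = Path("golden_masters")
NEW_RESPONSE_DIR = Path("new_responses")   # files written for manual diffing only
IGNORED_NODES = ("<CreatedDate>",)         # lines containing these tags are skipped

def lines_differ(golden_line: str, new_line: str) -> bool:
    """Return True if the lines differ and are not on the ignore list."""
    if any(tag in golden_line or tag in new_line for tag in IGNORED_NODES):
        return False
    return golden_line.strip() != new_line.strip()

def check_scenario(service_url: str, name: str, params: dict) -> None:
    """Call the code-under-test with the golden master's parameters and compare."""
    golden = (GOLDEN_MASTER_DIR / f"{name}.xml").read_text(encoding="utf-8").splitlines()
    response = requests.get(service_url, params=params, timeout=60)
    response.raise_for_status()
    new = response.text.splitlines()

    # Archive the new response so suspect files can be diffed against the master.
    NEW_RESPONSE_DIR.mkdir(exist_ok=True)
    new_file = NEW_RESPONSE_DIR / f"{name}.xml"
    new_file.write_text(response.text, encoding="utf-8")

    if len(golden) != len(new):
        raise AssertionError(f"{name}: line count changed, see {new_file}")
    for line_number, (g, n) in enumerate(zip(golden, new), start=1):
        if lines_differ(g, n):
            # Fail fast on the first non-ignored difference, as described above.
            raise AssertionError(
                f"{name}: line {line_number} differs, see {new_file}\n"
                f"golden: {g!r}\nnew:    {n!r}"
            )
```

If you would rather collect every difference instead of stopping at the first, you could accumulate the mismatched line numbers in a list and raise once at the end; as noted earlier, I prefer stopping at the first problem.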

I’m feeling quite cocky at the moment.  So cocky that I just asked my lead programmers to secretly insert a bug into the most complex area of the system under test.

Having just finished another epic automated check based on the Golden Master approach I discussed earlier, and with most of the team on pre-Thanksgiving vacation, this is the perfect time to “seed a bug”.  Theoretically, this new automated check should help put our largest source of regression bugs to rest, and I am going to test it.

The programmer says he will hide the needle in the haystack by tomorrow.

I’m waiting…

You’ve got this new thing to test. 

You just read about a tester who used Selenium and he looked pretty cool in his bio picture.  Come on, you could do that.  You could write an automated check for this.  As you start coding, you realize your initial vision was too ambitious so you revise it.  Even with the revised design you’re running into problems with the test stack.  You may not be able to automate the initial checks you wanted, but you can automate this other thing.  That’s something.  Besides, this is fun.  The end is in sight.  It will be so satisfying to solve this.  You need some tasks with closure in your job, right?  This automated check has a clear output.  You’ve almost cracked this thing…cut another corner and it just might work.  Success!  The test passes!  You see green!  You rule!  You’re the Henry Ford of testing!  You should wear a cape to work!

Now that your automated thingamajig is working and bug-free, you can finally get back to what you were going to test.  Now what was it?

I’m not hating on test automation.  I’m just reminding myself of its intoxicating trap.  Keep your eyes open.


