• Measuring your automation might be easy.  Using those measurements well is not.  Example measures:
    • # of times a test ran
    • how long tests take to run
    • how much human effort was involved to execute and analyze results
    • how much human effort was involved to automate the test
    • number of automated tests
  • EMTE (Equivalent Manual Test Effort) – the effort it would have taken humans to manually execute the same test the machine is executing.  Example: if the test would take a human 2 hours to run by hand, its EMTE is 2 hours.
    • How can this measure be useful? It is an easy way to show management the benefits of automation (in a way managers can easily understand).
    • How can this measure be abused?  If we inflate EMTE by re-running automated tests just for the sake of increasing EMTE, we are misleading people (see the sketch below).  Sure, we can run our automated tests every day, but unless the build is changing every day, we are not adding much value.
    • How else can this measure be abused?  If you hide the fact that humans are capable of noticing and capturing much more than machines.
    • How else can this measure be abused?  If your automated tests cannot be executed by humans, or your manual tests cannot be executed by a machine, the EMTE comparison breaks down.
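A tiny sketch of the inflation problem, using made-up numbers (the suite size, manual hours, and run count are all hypothetical, not figures from the tutorial):

```python
# Hypothetical EMTE figures: how re-running a suite inflates the number.
MANUAL_HOURS_PER_TEST = 2   # assumed: a human would need 2 hours per test
AUTOMATED_TESTS = 50        # assumed suite size
RUNS_THIS_WEEK = 5          # the same suite re-run every day

emte_per_run = AUTOMATED_TESTS * MANUAL_HOURS_PER_TEST    # 100 hours
emte_reported = emte_per_run * RUNS_THIS_WEEK             # 500 hours

print(f"EMTE per run:         {emte_per_run} hours")
print(f"EMTE after 5 re-runs: {emte_reported} hours")
# Unless the build actually changed between runs, only the first 100 hours
# represent real value; the remaining 400 are inflation.
```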
  • ROI (Return On Investment) – Dorothy asked the students what ROI they had achieved with the automation they created.  All 6 students who answered got it wrong: they described various benefits of their automation, but none were expressed as ROI.  ROI should be a number, hopefully a positive number.
    • ROI = (benefit - cost) / cost
    • The trick is to convert tester time and effort to money (see the worked example after this list).
    • ROI does not measure things like “faster execution”, “quicker time to market”, “test coverage”
    • How can this measure be useful?  Managers may think there is no benefit to automation until you tell them there is.  ROI may be the only measure they want to hear.
    • How is this measure not useful?  ROI may not be important.  It may not measure your success.  “Automation is an enabler for success, not a cost reduction tool” – Yoram Mizrachi.  Your company probably hires lawyers without calculating their ROI.
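To make the formula concrete, here is a small worked example with hypothetical figures (the hourly rate, build effort, and hours saved are all invented for illustration):

```python
# Hypothetical ROI calculation: convert tester effort to money, then apply
# ROI = (benefit - cost) / cost.
HOURLY_RATE = 50.0            # assumed loaded cost of a tester, per hour

automation_hours = 400        # assumed effort to build and maintain the automation
cost = automation_hours * HOURLY_RATE           # $20,000

manual_hours_saved = 600      # assumed manual execution hours avoided
benefit = manual_hours_saved * HOURLY_RATE      # $30,000

roi = (benefit - cost) / cost
print(f"ROI = ({benefit:,.0f} - {cost:,.0f}) / {cost:,.0f} = {roi:.0%}")   # 50%
```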
  • She did the usual tour of poor-to-better automation approaches (e.g., from capture/playback up to an advanced keyword-driven framework).  I’m bored by this, so I have a gap in my notes.
  • Testware architecture – consider separating your automation code from your tool, so you are not tied to the tool.
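One way to picture that separation, as a sketch: tests depend on a small interface you own, and only one adapter class knows about the tool.  The Selenium-style calls and all names here are my assumptions, not anything Dorothy showed, and the fixture that supplies `driver` is omitted.

```python
# Hypothetical testware layer: tests talk to BrowserDriver, never to the tool.
from typing import Protocol


class BrowserDriver(Protocol):
    def open(self, url: str) -> None: ...
    def text_of(self, locator: str) -> str: ...


class SeleniumAdapter:
    """The only place that knows about the underlying tool."""

    def __init__(self, webdriver) -> None:
        self._wd = webdriver

    def open(self, url: str) -> None:
        self._wd.get(url)

    def text_of(self, locator: str) -> str:
        return self._wd.find_element("css selector", locator).text


def test_login_page_heading(driver: BrowserDriver) -> None:
    # The test sees only the interface, so swapping tools means writing a
    # new adapter, not rewriting tests.
    driver.open("https://example.com/login")
    assert "Login" in driver.text_of("h1")
```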
  • Use pre- and post-processing to automate test setup and cleanup, not just the tests themselves.  Everything should be automated except selecting which tests to run and analyzing the results.
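For instance, in pytest-style automation (a hypothetical illustration, not from the tutorial), setup and cleanup can be pushed into a fixture so nothing manual is left around the test:

```python
import pytest


@pytest.fixture
def seeded_database(tmp_path):
    """Pre-processing: create the data the test needs, automatically."""
    db_file = tmp_path / "test.db"
    db_file.write_text("user=alice\n")   # stand-in for real data seeding
    yield db_file
    # Post-processing: remove the data so the next run starts clean.
    db_file.unlink()


def test_user_is_present(seeded_database):
    assert "alice" in seeded_database.read_text()
```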
  • If you expect a test to fail, use the execution status “Expected Fail”, not “Fail”.
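In pytest, for example, this can be expressed by marking the test as an expected failure so it is reported as XFAIL rather than FAIL (the bug reference below is made up):

```python
import pytest


@pytest.mark.xfail(reason="known defect BUG-123: totals rounded incorrectly")
def test_invoice_total_rounding():
    # Fails today; reported as an expected failure (XFAIL), not FAIL.
    assert round(0.1 + 0.2, 2) == 0.31
```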
  • Comparisons (i.e., asserts, verifications) can be “specific” or “sensitive”.
    • Specific Comparison – an automated test only checks one thing.
    • Sensitive Comparison – an automated test checks several things.
    • I wrote “awesome” in my notes next to this: if your sensitive comparisons overlap, a single defect might make 4 tests fail instead of 3 passing and 1 failing.  IMO, this is one of the most interesting decisions an automator must make.  I think it really separates the amateurs from the experts.  Nicely explained, Dorothy!
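A small sketch of the difference, with a hypothetical system under test (none of these names come from the tutorial):

```python
def place_order():
    # Hypothetical system under test.
    return {"status": "confirmed", "items": 3, "total": 59.97}


def test_order_status_specific():
    # Specific comparison: only a change to the status can fail this test.
    assert place_order()["status"] == "confirmed"


def test_order_sensitive():
    # Sensitive comparison: a change to status, item count, or total will
    # all fail this test.  When several tests share overlapping checks like
    # this, one defect can fail all of them at once.
    order = place_order()
    assert order == {"status": "confirmed", "items": 3, "total": 59.97}
```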

1 comment:

  1. Mark Tse said...

    Thanks for blogging this =] - I didn't get to attend the tutorial, but was interested.

    Did Dorothy go into test result interpretation and handling false positives/negatives? Also, was there any discussion on team involvement in test automation?


