• Measuring your automation might be easy.  Using those measurements well is not.  Examples:
    • # of times a test ran
    • how long tests take to run
    • how much human effort was involved to execute and analyze results
    • how much human effort was involved to automate the test
    • number of automated tests
  • EMTE (Equivalent Manual Test Effort) – What effort it would have taken humans to manually execute the same test being executed by a machine.  Example: If it would take a human 2 hours, the EMTE is 2 hours.
    • How can this measure be useful? It is an easy way to show management the benefits of automation (in a way managers can easily understand).
    • How can this measure be abused?  If we inflate EMTE by re-running automated tests just for the sake of increasing EMTE, we are misleading people.  Sure, we can run our automated tests every day, but unless the build is changing every day, we are not adding much value.
    • How else can this measure be abused?  If you hide the fact that humans are capable of noticing and capturing much more than machines.
    • How else can this measure be abused?  If your automated tests could not realistically be executed by humans, or your human tests could not be executed by a machine, the comparison means little.
  • ROI (Return On Investment) – Dorothy asked the students what ROI they had achieved with the automation they created.  All 6 students who answered got it wrong; they explained various benefits of their automation, but none were expressed as ROI.  ROI should be a number, hopefully a positive one.
    • ROI = (benefit - cost) / cost
    • The trick is to convert tester time and effort into money.  (A rough worked example follows this list of notes.)
    • ROI does not measure things like “faster execution”, “quicker time to market”, “test coverage”
    • How can this measure be useful?  Managers may think there is no benefit to automation until you tell them there is.  ROI may be the only measure they want to hear.
    • How is this measure not useful?  ROI may not be important.  It may not measure your success.  “Automation is an enabler for success, not a cost reduction tool” – Yoram Mizrachi.  Your company probably hires lawyers without calculating their ROI.
  • She did the usual tour of poor-to-better automation approaches (e.g., capture/playback to advanced keyword-driven frameworks).  I’m bored by this, so I have a gap in my notes.
  • Testware architecture – consider separating your automation code from your tool, so you are not tied to the tool.
  • Use pre- and post-processing to automate test setup, not just the tests.  Everything should be automated except selecting which tests to run and analyzing the results.  (A small setup/teardown sketch follows this list of notes.)
  • If you expect a test to fail, use the execution status “Expected Fail”, not “Fail”.  (An example of recording this follows this list of notes.)
  • Comparisons (i.e., asserts, verifications) can be “specific” or “sensitive”.
    • Specific Comparison – an automated test only checks one thing.
    • Sensitive Comparison – an automated test checks several things.
    • I wrote “awesome” in my notes next to this: if your sensitive comparisons overlap, 4 tests might fail instead of 3 passing and 1 failing.  (A small example follows this list of notes.)  IMO, this is one of the most interesting decisions an automator must make.  I think it really separates the amateurs from the experts.  Nicely explained, Dorothy!
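
Here is the rough ROI sketch promised above.  Every number in it is made up purely for illustration; only the formula itself comes from the tutorial, and the hourly rate, hours, and savings are assumptions you would replace with your own figures.

    # Rough ROI sketch -- illustrative numbers only, not from the tutorial.
    # ROI = (benefit - cost) / cost, with tester hours converted to money.

    HOURLY_RATE = 50.0           # assumed cost of one tester-hour

    build_hours = 120            # hypothetical effort to automate the suite
    maintain_hours = 30          # hypothetical upkeep over the period measured
    cost = (build_hours + maintain_hours) * HOURLY_RATE

    manual_hours_saved = 200     # hypothetical manual execution effort displaced
    benefit = manual_hours_saved * HOURLY_RATE

    roi = (benefit - cost) / cost
    print(f"ROI = {roi:.0%}")    # prints "ROI = 33%" with the numbers above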
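
And here is the small setup/teardown sketch mentioned in the pre- and post-processing note.  It assumes pytest (my choice of tool, not something Dorothy prescribed) and a made-up bit of test data; the point is only that nothing except picking what to run and reading the results needs a human.

    import pytest

    @pytest.fixture
    def fresh_customer(tmp_path):
        # Pre-processing: create the test data the check depends on.
        data_file = tmp_path / "customer.txt"
        data_file.write_text("name=Pat\nbalance=0\n")
        yield data_file
        # Post-processing: clean up what the test created (pytest clears
        # tmp_path anyway; the unlink just makes the teardown step visible).
        if data_file.exists():
            data_file.unlink()

    def test_new_customer_has_zero_balance(fresh_customer):
        assert "balance=0" in fresh_customer.read_text()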
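
For the “Expected Fail” status, one way to record it is an expected-failure marker, so a known problem is reported separately from new failures.  The example below uses pytest’s xfail marker; the defect and the test are hypothetical.

    import pytest

    # Hypothetical known defect: the run reports this as "xfailed"
    # (expected failure) instead of a plain "failed".
    @pytest.mark.xfail(reason="known defect: report columns in wrong order")
    def test_report_column_order():
        columns = ["Total", "Name", "Date"]        # stand-in for the buggy output
        assert columns == ["Name", "Date", "Total"]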
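
Finally, the small example of specific versus sensitive comparisons.  Both hypothetical checks below look at the same order record: the specific one verifies a single field, while the sensitive one verifies several at once, which is exactly how overlapping sensitive checks end up failing in groups.

    # Hypothetical order record used by both styles of check.
    order = {"status": "SHIPPED", "items": 3, "total": 59.97, "currency": "CAD"}

    def test_status_only():
        # Specific comparison: verifies exactly one thing, so a failure points
        # straight at the status field and nothing else.
        assert order["status"] == "SHIPPED"

    def test_whole_order():
        # Sensitive comparison: verifies several fields at once.  It catches
        # more unexpected changes, but any one wrong field fails the whole
        # test, and overlapping sensitive checks tend to fail together.
        assert order == {"status": "SHIPPED", "items": 3,
                         "total": 59.97, "currency": "CAD"}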

If you want to have test automation
And don't care about trials and tribulation
Just believe all the hype
Get a tool of each type
But be warned, you'll have serious frustration!

(a limerick by Dorothy Graham)

I attended Dorothy Graham’s STARCanada tutorial, “Managing Successful Test Automation”.  Here are some highlights from my notes:

  • “Test execution automation” was the tutorial’s concern.  I like this clarification; it sets the topic apart from “exploratory test automation” or “computer-assisted exploratory testing”.
  • Only 19% of people using automation tools (in Australia) are getting “good benefits”…yikes.
  • Testing and Automating should be two different tasks, performed by different people.
    • A common problem with testers who try to be automators:  Should I automate or just manually test?  Deadline pressures make people push automation into the future.
    • Automators – People with programming skills responsible for automating tests.  The automated tests should be able to be executed by non-technical people.
    • Testers – People responsible for writing tests, deciding which tests to automate, and executing automated tests.  “Some testers would rather break things than make things”.
    • Dorothy acknowledged the term “checking” but did not use it herself during the tutorial.
    • Automation should be like a butler for the testers.  It should take care of the tedious and monotonous, so the testers can do what they do best.
  • A “pilot” is a great way to get started with automation.
    • Calling something a “pilot” forces reflection.
    • Set easily achievable automation goals and reflect after 3 months.  If goals were not met, try again with easier goals.
  • Bad Test Automation Objectives – And Why:
    • Reduce the number of bugs found by users – Exploratory testing is much more effective at finding bugs.
    • Run tests faster – Automation will probably make testing slower overall once you include the time it takes to write and maintain the tests and interpret the results.  The only testing activity automation might speed up is “test execution”.
    • Improve our testing – The testing needs to be improved before automation even begins.  If not, you will have poor automation.  If you want to improve your testing, try just looking at your testing.
    • Reduce the cost and time for test design – Automation will increase it.
    • Run regression tests overnight and on weekends – If your automated tests suck, this goal will do you no good.  You will learn very little about your product overnight and on weekends.
    • Automate all tests – Why not just automate the ones you want to automate?
    • Find bugs quicker – It’s not the automation that finds the bugs, it’s the tests.  Tests do not have to be automated, they can also be run manually.
  • The thing I really like about Dorothy’s examples above is that she helps us separate the testing activity from the automation activity.  It helps us avoid common mistakes, such as forgetting to focus on the tests first.
  • Good Test Automation Objectives:
    • Free testers from repetitive test execution to spend more time on test design and exploratory testing – Yes!  Say no more!
    • Provide better repeatability of regression tests – Machines are good checkers.  These checks may tell you if something unexpected has changed.
    • Provide test coverage for tests not feasible for humans to execute – Without automation, we couldn’t get this information.
    • Build an automation framework that is easy to maintain and easy to add new tests to.
    • Run the most useful tests, using under-used computer resources, when possible – This is a better objective than running tests on weekends.
    • Automate the most useful and valuable tests, as identified by the testers – much better than “automate all tests”.

Last week, at STARCanada, I met several enthusiastic testers who might make great testing conference speakers.  We need you.  Life is too short for crappy conference talks.

I’m no pro by any means.  But I have been a track speaker at STARWest, STARCanada, and STPCon, and will be speaking at STAREast in 2 weeks.

Ready to give it a go?  Here is my advice on procuring your first speaking slot:

  1. Get some public speaking experience.  They are probably not going to pick you without it.  If you need experience, try speaking to a group of testers at your own company or at an IT group that meets in your city, volunteer for an emerging-topic talk, or sign up for a lightning talk at a conference that offers them, like CAST.
  2. Come up with a killer topic.  See what speakers are currently talking about and talk about something fresh.  Make sure your topic can appeal to a wider audience.  Experience reports seem appealing.
  3. Referrals – meet some speakers or industry leaders with some clout and ask them to review your talk.  If they like it, maybe they would consider putting in a good word for you.
  4. Pick one or more conferences and search for their speaker submission deadlines and forms (e.g., Speaking At SQE Conferences).  If you’ve attended conferences, you are probably already on their mailing list and may be receiving said requests.  I’m guessing the 2014 SQE conference speaker submission will open in a few months.
  5. Submit the speaker submission form.  Make sure you have an interesting-sounding title.  You’ll be asked for a summary of your talk, including take-aways and maybe how you intend to give it.  This is a good place to offer something creative about the way you will deliver your topic (e.g., you made a short video, you will do a hands-on group exercise).
  6. Wait.  Eventually you’ll receive a call or email.  Sound competent.  Know your topic and be prepared to answer tough questions about it.
  7. If you get rejected, politely ask what you could do differently to have a better chance of getting picked in the future.

It is not easy to get picked.  I was rejected several times and eventually got a nice referral from Lynn McKee, an experienced speaker with a great reputation; that helped.  One of my friends and colleagues, who is far more capable than I am, IMO, has yet to get picked up as a speaker.  So I don’t know what secret sauce they are looking for.

Good luck!

 

BTW - Speaking at conferences has both advantages and disadvantages to consider.

Advantages:

  • The opportunity to build your reputation as an expert of sorts in the testing community.
  • It helps you refine your ideas and possibly spread knowledge.
  • Free conference registration.  This makes it more likely your company will pay your hotel/travel costs and let you attend.

Disadvantages:

  • Public speaking is scary as hell for most of us.  The weeks leading up to a conference can be stressful.
  • Putting together good talks and practicing takes lots of time.  I took days off work to prepare.

Don’t you just hate it when your Business Analysts (or others) beat you to it and point out bugs before you have a chance to?

It feels so unfair!  They can send an email that says, “the columns aren’t in the right order, please fix it” and the programmers snap to attention like good little soldiers.  Meanwhile, you saw the same problem but were still investigating it and confirming your findings with multiple oracles.

Well, this is not a bug race.  There is no “my bug”.  If someone else on your team is reporting problems, this helps you.  And it certainly helps the team.  You may want to observe the types of things these non-testers report and adjust your testing to target other areas.

But try to convert your frustration to admiration.  Tell them “nice catch” and “thanks for the help”.  Encourage more of the same.

I’m not asking if they *can* run unattended.  I’m asking if they do run unattended…consistently…without ever failing to start, hanging, or requiring any human intervention whatsoever…EVER.

Automators, be careful.  If you tell too many stories about unattended check-suite runs, the non-automators just might start believing you.  And guess what will happen if they start running your checks?  You know that sound when Pac-Man dies?  That’s what they’ll think of your automated checks.

I remember hearing a QA Director attempt to encourage “test automation” by telling fantastical stories of his tester past:

“We used to kick off our automated tests at 2PM and then go home for the day.  The next day, we would just look at the execution results and be done.”

Years later, I’ve learned to be cynical about said stories.  In fact, I have yet to see an automated test suite (including my own) that consistently runs without ever requiring the slightest intervention from humans, who unknowingly may:

  • Prep the test environment “just right” before clicking “Run”.
  • Restart the suite when it hangs and hope the anomaly goes away.
  • Re-run the failed checks because they normally pass on the next attempt (a small sketch of how this gets scripted follows below).
  • Realize the suite works better when kicked off in smaller chunks.
  • Recognize that sweet spot, between server maintenance windows, where the checks have a history of happily running without hardware interruptions.
IMO, it’s not a problem if the automator has to periodically do one or more of the above.  It’s only a problem if we, as automators, spread untruths about the real effort behind our automated checks.
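
For what it’s worth, the “just re-run the failures” intervention above often ends up scripted rather than done by hand.  Here is a minimal sketch, assuming pytest and its --lf option (which re-runs only the previous run’s failures); wiring this in does not make a suite unattended, it just moves the attendance into a script.

    import subprocess
    import sys

    MAX_ATTEMPTS = 3   # how many times we quietly indulge the flakiness

    def run_suite():
        # First full run of the checks.
        result = subprocess.run([sys.executable, "-m", "pytest"])
        attempt = 1
        while result.returncode != 0 and attempt < MAX_ATTEMPTS:
            attempt += 1
            print(f"Re-running failed checks (attempt {attempt})...")
            # --lf ("last failed") re-runs only the checks that just failed.
            result = subprocess.run([sys.executable, "-m", "pytest", "--lf"])
        return result.returncode

    if __name__ == "__main__":
        sys.exit(run_suite())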


