- Measuring your automation might be easy. Using those measurements well is not. Examples of measurements:
- # of times a test ran
- how long tests take to run
- how much human effort was involved to execute and analyze results
- how much human effort was involved to automate the test
- number of automated tests
- EMTE (Equivalent Manual Test Effort) – the effort it would have taken humans to manually execute the same test being executed by a machine. Example: if it would take a human 2 hours, the EMTE is 2 hours.
- How can this measure be useful? It is an easy way to show management the benefits of automation (in a way managers can easily understand).
- How can this measure be abused? If we inflate EMTE by re-running automated tests just for the sake of increasing EMTE, we are misleading people. Sure, we can run our automated tests every day, but unless the build is changing every day, we are not adding much value.
- How else can this measure be abused? If you hide the fact that humans are capable of noticing and capturing much more than machines.
- How else can this measure be abused? If your automated tests could not realistically be executed by humans (or your manual tests by a machine), the EMTE comparison is meaningless.
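The EMTE arithmetic, and the inflation abuse above, can be sketched in a few lines (the 2-hour figure comes from the example; the 30-run inflation is a hypothetical):

```python
def emte_hours(manual_hours_per_run: float, runs: int) -> float:
    """Equivalent Manual Test Effort: the human hours the automated
    runs would have cost if a person had executed them instead."""
    return manual_hours_per_run * runs

# A test a human would need 2 hours to run, executed once: EMTE = 2 hours.
once = emte_hours(2.0, 1)

# The inflation abuse: rerunning the same test 30 times against an
# unchanged build reports 60 hours of EMTE while adding little value.
inflated = emte_hours(2.0, 30)
```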
- ROI (Return On Investment) – Dorothy asked the students what ROI they had achieved with the automation they created. All 6 students who answered got it wrong; they described various benefits of their automation, but none expressed them as ROI. ROI should be a number, hopefully a positive one.
- The trick is to convert tester time effort to money.
- ROI does not measure things like “faster execution”, “quicker time to market”, “test coverage”
- How can this measure be useful? Managers may think there is no benefit to automation until you tell them there is. ROI may be the only measure they want to hear.
- How is this measure not useful? ROI may not be important. It may not measure your success. “Automation is an enabler for success, not a cost reduction tool” – Yoram Mizrachi. Your company probably hires lawyers without calculating their ROI.
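Since ROI should come out as a single number, the "convert tester time to money" trick can be sketched like this (the hours, hourly rate, and automation cost below are hypothetical figures, not from the talk):

```python
def automation_roi(manual_hours_saved: float, hourly_rate: float,
                   automation_cost: float) -> float:
    """ROI as a single number: (benefit - cost) / cost.
    The trick is converting tester hours into money first."""
    benefit = manual_hours_saved * hourly_rate
    return (benefit - automation_cost) / automation_cost

# Hypothetical: 200 tester-hours saved at $50/hour, against $8,000
# spent building the automation -> (10000 - 8000) / 8000 = 0.25
roi = automation_roi(200, 50, 8000)
```

A 0.25 result reads as a 25% return, which is the kind of number a manager can compare against other investments.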
- She did the usual tour of poor-to-better automation approaches (e.g., capture/playback up to an advanced keyword-driven framework). I’m bored by this so I have a gap in my notes.
- Testware architecture – consider separating your automation code from your tool, so you are not tied to the tool.
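One common way to get that separation is a thin adapter layer: tests call your own interface, and only the adapter knows about the vendor tool. A minimal sketch, with an entirely hypothetical `TestwareBrowser` wrapper (not any specific tool's API):

```python
class TestwareBrowser:
    """Our own testware API. Tests depend on this class only, so
    switching tools means rewriting one adapter, not every test."""

    def __init__(self, tool):
        self._tool = tool  # the vendor tool object, whatever it is

    def click(self, locator):
        # translate our locator convention into the tool's call
        self._tool.click(locator)

    def read_text(self, locator):
        return self._tool.get_text(locator)
```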
- Use pre and post processing to automate test setup, not just the tests. Everything should be automated except selecting which tests to run and analyzing the results.
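The pre/post-processing idea can be sketched as a runner that wraps every test in automated setup and teardown, leaving only test selection and result analysis to a human (the `restore_test_data` and `clean_up_environment` helpers are hypothetical stand-ins):

```python
def restore_test_data():
    pass  # e.g., reload a known database snapshot (placeholder)

def clean_up_environment():
    pass  # e.g., delete temp files, reset services (placeholder)

def run_suite(tests):
    """Run (name, callable) pairs; pre/post processing is automated."""
    results = {}
    for name, test in tests:
        restore_test_data()       # pre-processing: automated setup
        results[name] = "Pass" if test() else "Fail"
        clean_up_environment()    # post-processing: automated teardown
    return results
```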
- If you expect a test to fail, use the execution status “Expected Fail”, not “Fail”.
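A minimal sketch of a result model that makes that distinction explicit (pytest's `@pytest.mark.xfail` marker is a real-world version of the same idea):

```python
def execution_status(passed: bool, expected_to_fail: bool) -> str:
    """Distinguish a known, expected failure from a real one."""
    if passed:
        # a known-bad test that suddenly passes deserves attention too
        return "Unexpected Pass" if expected_to_fail else "Pass"
    return "Expected Fail" if expected_to_fail else "Fail"
```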
- Comparisons (i.e., asserts, verifications) can be “specific” or “sensitive”.
- Specific Comparison – an automated test only checks one thing.
- Sensitive Comparison – an automated test checks several things.
- I wrote “awesome” in my notes next to this: If your sensitive comparisons overlap, 4 tests might fail instead of 3 passing and 1 failing. IMO, this is one of the most interesting decisions an automator must make. I think it really separates the amateurs from the experts. Nicely explained, Dorothy!
Who am I?
- Eric Jacobson
- Atlanta, Georgia, United States