I often hear skeptics question the value of test automation.  Their questioning is healthy for the test industry and it might flush out bad test automation.  I hope it continues.

But shouldn’t these same questions be raised about human testing (a.k.a. manual testing)?  If these same skeptics judged human testing with the same level of scrutiny, might it improve human testing?

First, the common criticisms of test automation:

  • Sure, you have a lot of automated checks in your automated regression check suite, but how many actually find bugs?
  • It would take hours to write an automated check for that.  A human could test it in a few seconds.
  • Automated checks can’t adapt to minor changes in the system under test.  Therefore, the automated checks break all the time.
  • We never get the ROI we expect with test automation.  Plus, it’s difficult to measure ROI for test automation.
  • We don’t need test automation.  Our manual testers appear to be doing just fine.

Now let’s turn them around to question manual testing:

  • Sure, you have a lot of manual tests in your manual regression test suite, but how many actually find bugs?
  • It would take hours for a human to test that.  A machine could test it in a few seconds.
  • Manual testers are good at adapting to minor changes in the system under test.  Sometimes, they aren’t even aware of their adaptations.  Therefore, manual testers often miss important problems.
  • We never get the ROI we expect with manual testing.  Plus, it’s difficult to measure ROI for manual testing.
  • We don’t need manual testers.  Our programmers appear to be doing just fine with testing.
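For readers unfamiliar with the jargon, an "automated check" in these lists is typically a scripted assertion a machine can re-run on every build. A minimal sketch in Python (the function under test, `apply_discount`, is hypothetical and exists only for illustration):

```python
# Minimal sketch of an automated regression check, pytest style.
# apply_discount is a made-up system-under-test for this example.

def apply_discount(price, percent):
    """Toy system-under-test: reduce a price by a percentage."""
    return round(price * (1 - percent / 100), 2)

def test_ten_percent_discount():
    # The check encodes one expected behavior; a machine can re-run
    # it on every build to detect regressions in seconds.
    assert apply_discount(100.00, 10) == 90.00
```

The check itself is cheap to execute but, as the criticisms above note, it only catches regressions in the one behavior it encodes.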


  1. Rikard Edgren said...

    Good point!

    I don't think the either/or mentality is very common, but "of course we need both" is quite un-elaborated.
    Discussions about manual versus automated testing tend to be very generic, and not especially fruitful.
    In my experience, it is not very difficult to choose what to do, once you know the details about the project, and have done some testing.
    But explaining all the details takes too much time to write and read...

    There is also a risk that we separate these too much, a lot of good stuff happens when you combine machines and people, when you try many ways of testing in order to realize where you provide good information value.

  2. Gary said...

    The criticisms you posted, while common, indicate that the questioner has an underlying lack of understanding regarding automated testing and its proper place.

    The best use of automated testing is as a regression test tool. Manual or automated, regression tests typically do not find many bugs. They simply ensure that new bugs have not been introduced. Robust regression tests are a project manager's dream, as they can reliably document that new builds have not introduced unwanted changes in behavior.

    It is exploratory testing that tends to find the most bugs: testers trying to break the system, or exploring less-used features or options.

    In our environment, automated regression testing is used for two reasons: to ensure we don't introduce new bugs and, maybe even more importantly, to free up our testers' time so that they can focus on more exploratory testing.

    If you make the argument that automated testing frees up time for more exploratory testing, which is what finds the most bugs, you may have more success with automation adoption.

  3. Eric Jacobson said...

    Gary, I'm glad you stumbled on my blog. You're preaching to the choir.

    Those are good responses to the automation questions on their own. However, what I was trying to do in this post is give you another way of responding: turning the tables back on manual testing.

    Why should a test automation skeptic be obsessed with ROI from automation if they are not obsessed with ROI from manual testing? Get it? Why does automation have different standards than manual testing?

  4. Matthew said...

    Okay, Eric, I'll bite.

    Earlier, you asked:

    "Why should a test automation skeptic be obsessed with ROI from automation if they are not obsessed with ROI from manual testing? Get it? Why does automation have different standards than manual testing?"

    This is likely because hands-on-keyboard-and-mouse testing is the status quo. Adding some amount of automation to the process is the change.

    We don't need to know the ROI of manual testing because we know the economic value add of the work. That is to say -

    Economic Value Add = Sellable Value after the process - Value without the process.
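    A toy calculation makes the formula concrete (all dollar figures below are hypothetical, chosen only to show the arithmetic):

    ```python
    # Toy illustration of Matthew's EVA formula.
    # Both values are made-up numbers for demonstration.

    def economic_value_add(sellable_value_with_process, value_without_process):
        """EVA = sellable value after the process - value without the process."""
        return sellable_value_with_process - value_without_process

    # Suppose a release is worth $500,000 to customers after test/fix,
    # but they would accept only $50,000 worth of it untested.
    eva = economic_value_add(500_000, 50_000)
    print(eva)  # 450000
    ```

    When the EVA is this large relative to the cost of the work, Matthew's point follows: nobody bothers asking for an ROI figure.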

    In the case of test/fix, we know it is valuable because our customers would not accept the software without test/fix. (I am speaking of the greater test/fix process, not just providing information to decision makers). You can realize this by just suggesting a project skip testing to make a deadline.

    If the business does it, and likes the result, then worry about the EVA of manual testing. If the business laughs at the suggestion, or doesn't like the results, then the EVA of testing is doing just fine.

    hmm ... maybe I should do a blog post, stop talking about ROI, start talking about EVA. :-)

    In theory, though, most of the time when people talk about adding tools, the goal is to replace testing - not to reduce cycle time or improve quality - so the EVA would be the same. In that case, you can't really explain the value with EVA; you need something else, so people cling to ROI.

    And that is why people seem obsessed with the ROI of automation and not the ROI of manual testing - you don't need ROI when EVA vastly exceeds the cost to do the work.

    There, asked and answered - at least ONE of your questions, which, yes, I realize were rhetorical. :-)

  5. Sudhir Patil said...

    Eric has raised a valid point about ROI, as the term is generally applied to automation and rarely to manual testing. At the end of the day, both have their respective objectives/goals, and hence there should not be a double standard.

    IMO, manual testing is considered an integral part while automation is considered optional, which could be the reason.

  6. Michael Bolton http://www.developsense.com said...

    The discussion has a misbegotten premise: that there is one kind of testing that involves tools doing things without people involved, and that there is another kind of testing that involves people doing things without tools. To see how the argument collapses, substitute "management" or "programming" in statements like those in the post above. "We never get the ROI we expect with management automation." "It would take hours for a human to program that."

    This is precisely why I reject the notions of "manual testing" and "automated testing" (http://www.developsense.com/blog/2013/02/manual-and-automated-testing/): there is no such thing as automated testing. There is automated checking, and there is tool- or automation-assisted testing. Both of these are valuable, but it's unhelpful to consider either one "automated testing", in the same sense that it's not helpful to think of cruise control as "automated driving". To do so is to ignore the social context of why we do testing, which is to help people understand the product they've got, so they can decide if it's the product they want.

    And this is precisely why we make the distinction between testing and checking: to emphasize the fact that the execution of a check can be delegated to a machine, but the testing activity that surrounds a check cannot. At the moment it's performed, a check doesn't need a social context, but the motivation behind all testing (including the design, programming, and interpretation of checks) is social, oriented towards people. This cannot be mechanized, but we can use tools in service of it. http://www.satisfice.com/blog/archives/856

    Also, as long as we keep talking about ROI in testing, we're going to look like dweebs to the financial people. In this blog post, replace the concept of "social media marketing" with "testing": http://www.copyblogger.com/social-media-marketing-roi/

  7. Eric Jacobson said...

    Fun for me to get some hits on this one from Heusser and Bolton.

    Okay, I never actually used the term "automated testing". Instead, I used "test automation" and "automated checking". I thought "test automation" was Bolton/Bach approved to include the social part of building the checks. I guess I got that wrong.

    Yes, let's not use ROI as a means of measuring the success of checking or testing. I said that (poorly) in my post.

    The inspiration for the post really came from bullet #1; I listened to a "checking" skeptic claim: "checks that don't find bugs are worthless". By reversing the claim: "tests that don't find bugs are worthless", I amused myself. I'm amused because the reversal trick helps me question (manual) testing. And it seems to me, most in our industry are quick to question checking, but not so quick to question testing.
