Mar 22, 2012

The Test Automation Experience Paradox

A tester made an interesting observation yesterday: all the testing job postings she came across required test automation experience.

She then stated that every company she has worked for has attempted test automation and failed. And even though she was involved in those automation efforts, she has yet to accumulate a test automation success story, the kind she might tell in a job interview... unless she lies, of course (which I suspect is common).

This paradox may not exist in software companies (i.e., companies whose main product is software), because they probably throw enough money at test automation to make it successful.  But those of us working in the IT basements of non-software companies, on shoestring budgets, find the test automation experience paradox all too real.

18 comments:

  1. I've worked as a tester both in software companies and in the "IT basement" type of company. In my experience, you're right - test automation efforts seem to have a higher failure rate outside of software shops.

    One of the factors that I've seen contribute to this is that in the "IT basement" type of shop, we often end up working with off-the-shelf, 3rd-party applications, where we have no control over the "testability" of the app. In a software shop, if the app isn't built in such a way as to enable test automation, we can ask the developer to change it.

  2. I can imagine the job posting: "Wanted, failed automation engineer".

  3. Hi Eric,

    What would you call "successful test automation experience"?
    What is your definition of test automation, by the way?

  4. Ilya, I would call test automation successful when your automated tests (I should be calling them "checks") tell you something important about your product that you would otherwise have attempted to collect manually, and tell it to you much more quickly than you could have determined it manually.

    What do you think?

    As far as my definition of an automated test goes, I like Michael Bolton's definition of a check: an observation linked to a decision rule that results in a bit (e.g., pass/fail).
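
    [Ed.: for illustration, here is a minimal Python sketch of a check in that sense; the function name and the two-second threshold are invented for the example.]

        # A check: an observation (a measured response time) linked to a
        # decision rule (a threshold), producing a single bit (True/False).
        def login_latency_check(measured_seconds, threshold_seconds=2.0):
            return measured_seconds <= threshold_seconds

        # login_latency_check(1.4) -> True (pass)
        # login_latency_check(3.1) -> False (fail)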

  5. I have often said that test automation rarely equates to "automated testing". By this I mean that, as a tester, there is much I have automated or scripted to help me with my job that had little to do with testing a specific application.

    For example, I've written scripts that parse through log files looking for one tiny needle in GB of logs. I've written scripts that helped generate test data. You start to see my point - the only thing that makes these "test" automation is the fact that they automate tasks that I, as a tester, need to complete.
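
    [Ed.: a minimal Python sketch of that kind of log-scanning helper; the file name and error pattern are hypothetical. It reads line by line, so it copes with logs far larger than memory.]

        import re

        # Stream a large log file and report every line matching a pattern,
        # without loading the whole file into memory.
        def find_needle(log_path, pattern):
            needle = re.compile(pattern)
            with open(log_path, errors="replace") as log:
                for line_number, line in enumerate(log, start=1):
                    if needle.search(line):
                        yield line_number, line.rstrip()

        # Example: for hit in find_needle("app.log", r"ORA-\d{5}"): print(hit)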

    Just my two cents, anyway.

  6. At my company, we have "functional" testers and we have "automation" testers. I am a "functional" tester with a decade of manual testing experience. I've never been given the opportunity to learn automation because my company doesn't want to pay to train me. I find this odd, because our testing director is a champion of automation; she approaches every project with the question "can we automate this?" Trying to explain to her that time and energy are often wasted on useless automation is an exercise in futility. Here's a case in point:

    I was tasked with testing an ancient Oracle process that had long been in production but had never been officially tested. I created all the necessary test cases, data, SQL, etc. Because this process would need to be tested annually for the remaining life of the system, the director stepped in and asked the usual "can we automate this?" question. I thought it was a bit of wheel-reinvention at that point, but demurred and provided the necessary information on performing the test to the automation team.

    The automator then took 5 weeks to create a massive script, claiming that this would alleviate the pain of testing the process annually at system rollover. I found this claim dubious, so I decided to see for myself which was the better (e.g., faster, more accurate) way. After all, having created a script with over a thousand lines of code to test a process that can be completed in 5 minutes, I expected this baby not only to perform the task at hand, but to give my car an oil change and walk my dog to boot.

    So, like John Henry with his sledgehammer, I went head-to-head with the machine. I kicked off the script on a lab box and went about manually testing the process on my own PC. Guess what? I completed my manual testing (and found 7 true bugs) before the script was halfway finished. And even then, when I got the output log generated from the automation script, I found it had been scoped so narrowly that it didn't achieve the intended result at all, produced a huge list of "bugs" that were actually intended/expected results, and caught only 1 of the 7 bugs I had found using my old-school exploratory technique.

    Thank you for raising this paradox. It's hard out there for a "functional" tester!

  7. I second your thoughts, Eric and Ken! Test automation should assist testers to do their jobs better, or test better, and not replace the testers themselves. Now this prompts a question: is the role of a "Test Automation Engineer" a full-time one?

    Unfortunately it has become a trend that many companies look for Test Automation Engineers without even looking at manual testing skills.
    I hardly see any job openings looking for testers who have experience in Exploratory Testing, Context-Driven Testing, Rapid Software Testing, etc.

    I only hope this paradox is removed from people's minds soon.

  8. SSS, nice points! IMO a big problem is that once people get the coding skills, they lose interest in the human skills.

  9. Since everyone is looking for cost-cutting methods, I believe they are now going for automation testing skills.

  10. The false assumption is that automation is more cost-effective (aka cheaper) than manual testing - because you can replay it over and over and over.

    But as Eric and James Bach and others have said - that kind of automation is not TESTING but CHECKING.

    My experience, working in a large enterprise testing group, has been that automation always takes on the effort of a major development project and ends up costing $$$$.

    To keep things on the more cost-effective side, I tend to rely on scripts that help me with the mundane and highly error-prone-for-humans tasks - scripts I don't get upset about if I have to trash or replace them. In other words, I take much the same attitude with my "automation" that I do with my manual tests: if it isn't finding issues and being productive, trash it and write something new.

  11. Re: #4

    Eric,

    Sure, I'm happy to use the "automated checks" term. The more we use it, the faster it spreads, I guess.

    My pitch, though, was on automated test/ing/, not tests. In a vein similar to the one outlined by Ken in reply #5. Going to the roots, my question was, by and large, inspired by James Bach's "Agile Test Automation" presentation: http://www.satisfice.com/presentations/agileauto.pdf

    Given your definition of successful test automation (tell me something important I'd like to know, and tell it much quicker), I'm tempted to say that I have seen a lot of such automation. Really, it's hard to imagine the opposite (leaving aside the situation where one automates something one doesn't need). Yet why not consider another dimension: automation collects (or helps to collect) information I wouldn't be able to collect without it at all? My introduction to test automation was, actually, through load testing. What do you think about said automation in this context?

    I believe most disappointments associated with test automation come from the fact that certain people try to sell it as a silver bullet (like promising to replace human testers). The moment you stop treating it this way, the dawn breaks :-)

    PS. After several years of work at software companies, I have yet to see any considerable amount of money being thrown in my direction. Maybe it's just my bad luck, of course.

  12. Ilya, yes, absolutely! Using automation to find info that would be difficult to find manually is an excellent use. That may still fit into my definition though, right? It would be possible to perform a load test manually, if I had enough resources. But it would be quicker to use automation.

  13. Eric, your definition's good; that's why it can include a lot of (or all) useful examples :-)

    Still, "if I have enough resources" is rarely the case (it's hard for me to associate "shoestring budget" with plenty of resources, anyway). For instance, suppose your system should handle, say, 10 simultaneously working users. How would you test this? Sure, one way is to have 10 testers. I witnessed a uTest project where about a hundred people simultaneously logged in to the system under test, used it for some predefined timeframe, and then described their experiences. This is good; indeed it is good, as one collects the judgments of a hundred humans. Still, I believe most of us can't afford to throw people at such problems every time.
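
    [Ed.: by way of illustration, a minimal Python sketch of simulating those 10 simultaneous users with threads; the URL, the timeout, and the "HTTP 200 means success" criterion are all placeholder assumptions.]

        import threading
        import urllib.request

        # Each simulated "user" makes one request and records success/failure.
        def one_user(url, results, i):
            try:
                with urllib.request.urlopen(url, timeout=10) as response:
                    results[i] = (response.getcode() == 200)
            except Exception:
                results[i] = False

        def simulate_users(url, n=10):
            results = [None] * n
            threads = [threading.Thread(target=one_user, args=(url, results, i))
                       for i in range(n)]
            for t in threads:
                t.start()
            for t in threads:
                t.join()
            return results  # e.g., [True, True, False, ...]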

    Another example is when I'm interested in how some server application behaves after, say, a month of continuous normal usage. Again, one way is to put human beings to work with the system for a month. Sometimes that may be the best way. But even when it is, I imagine such an idea would be a hard sell.

    Also, several types of hard-to-catch problems (like race conditions, weird memory corruption, etc.) come to mind. In my experience, people often let such problems go, especially when time is tight.

    You may argue (maybe rightfully) that all this is actually the "make it quicker" approach, and can be done without automation given reasonable, and generally obtainable, resources. Bear with me for one more moment :-) The thing I'm obsessed with these days is high-volume test automation. Its promise is to "expose problems that can't be found in less *expensive* ways". See, for instance, Cem Kaner's http://www.kaner.com/pdfs/highvolCSTER.pdf.
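
    [Ed.: a minimal Python sketch of the high-volume idea, assuming a function under test and a slower but trusted reference implementation to act as the oracle; both names are hypothetical.]

        import random

        # High-volume automation: push huge numbers of generated inputs
        # through the function under test and compare each result against
        # a trusted (possibly much slower) reference implementation.
        def high_volume_run(f_under_test, f_reference, iterations=1_000_000):
            failures = []
            for _ in range(iterations):
                x = random.randint(-10**9, 10**9)
                if f_under_test(x) != f_reference(x):
                    failures.append(x)  # keep failing inputs for analysis
            return failures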

  14. Test automation failures should be considered valuable lessons learned, and should count as test automation experience.

    And I would also like to point out that this paradox DOES exist in software companies too, mainly for the same reasons as in other companies: unrealistic expectations, difficulty in understanding the real ROI it provides, and therefore slim investment in this area (in all kinds of resources).

    I would like to take the chance to point to this book, where many experiences around this subject are shared:
    http://www.amazon.com/Experiences-Test-Automation-Studies-Software/dp/0321754069

  15. Hi,

    I am an auto-tester; I use Selenium and QTP. How do you decide whether automation testing has failed? That's difficult. It depends on what you want from the auto-testing tool.

    I feel I can help the testers by covering functionality that is simple but needs many "boring" repeated steps.

    Other functionality is very complex, or involves a third-party API, etc., and sometimes cannot be tested by an auto-tool; but if you have a sandbox for it, it is still possible.
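
    [Ed.: for instance, a minimal Python sketch of sandboxing a third-party dependency behind a stub; the payment-gateway interface and the test card number are invented for illustration.]

        # A stand-in ("sandbox") for a third-party payment API, so the
        # surrounding functionality can be automated without the real service.
        class PaymentGatewayStub:
            def charge(self, card_number, amount):
                # Decline one designated test card; approve everything else.
                if card_number == "4000000000000002":
                    return {"status": "declined"}
                return {"status": "approved", "amount": amount}

        # In test configuration, the application is pointed at
        # PaymentGatewayStub() instead of the real gateway client.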


    I love auto testing.

  16. Thank you for sharing this post. Anything on automated software testing is always worth reading.

  17. I would also state that automation is not a one-time investment. Many companies buy the package, possibly spend on some training, give the QA personnel a couple of weeks to get "ramped up", and then want to know what you have automated and how much your test effectiveness has increased for their ROI. It obviously does not happen that quickly.

    Of course, the usual record-playback is what most newbies get comfortable with, and teams start spending huge amounts of time investing in that methodology, only to find that the next build/release changes things and all of a sudden that same huge effort has to be rerecorded.

    Only proper investment in a tool, in its framework development, and in the overall scripting knowledge of the testers can reduce the failure risk of automation. We have addressed this to a degree by having professional, experienced automation engineers/developers build the framework and application interfaces, while the "manual" testers put together the actual test cases and high-level automation scripts. We enable our "manual" QA personnel to do this by using a Domain Specific Language (DSL) that abstracts away the core element identifiers and keeps the testers focused on the business requirements and the testing knowledge they have experience in.
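
    [Ed.: a minimal Python sketch of that keyword/DSL idea; the keyword names and the app object are invented for illustration.]

        # Keyword-style DSL: testers write high-level steps; the framework
        # maps each keyword onto element-level automation code.
        KEYWORDS = {
            "open_login_page": lambda app: app.goto("/login"),
            "log_in_as":       lambda app, user: app.login(user),
            "verify_balance":  lambda app, expected: app.check_balance(expected),
        }

        def run_script(app, steps):
            for keyword, *args in steps:
                KEYWORDS[keyword](app, *args)

        # A tester-authored script, free of locators and element IDs:
        # run_script(app, [("open_login_page",),
        #                  ("log_in_as", "clerk01"),
        #                  ("verify_balance", "100.00")])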

    Automation should make the QA personnel's jobs easier, more efficient, or more practically applied. Also, just because you can automate something does not mean you should, once you consider the opportunity cost of automation compared with manual efforts and the other critical functions that could be addressed - as detailed by the "anonymous functional tester" above.

  18. This article made me chuckle and try to remember a test automation project I have been involved in that I could honestly call truly successful. I struggled. I've seen several projects that enjoyed a very brief period of success, normally early in the project, before there were too many test cases. Once the number of tests builds up and they start failing, and no-one can find enough time to maintain, manage and analyse them, that's when the problem starts. The resolution is normally to decide the original testing tool was rubbish, bin all the tests and start from scratch. I've blogged around this subject here: http://www.hoinick.com/wordpress/putting-our-trust-in-the-hands-of-machines/
