I’ve got a test automation engineer and a bunch of manual testers. The plan, of course, is to teach the manual testers to write automated tests where appropriate; a plan I have seen fail several times. I figured I would take a crack at it anyway. In the meantime, I’m trying to keep the automation engineer busy writing valuable tests.
Everybody knows choosing the right tests to automate is tricky. But what I’ve really been struggling with is finding the right skillset to design each test automation candidate, that is, to determine its detailed steps.
It turns out, the test automation engineers suck at it, and so do the manual testers. Who knew?
You see, manual testers live in a dream world, where test setup can be customized on the fly, validation can be performed with whatever info the tester decides is important at that moment, and teardown can be skipped altogether. Try asking those manual testers to stub out some repeatable tests to automate. I suspect you will run into a problem.
The test community loves to poke fun at traditional, script-happy, manual testers. Some testers (myself included) talk about exploratory testing or test case fragments as being the cool way to test. They scoff at testers who use rigorous step-by-step test cases. Have you ever encountered said traditional testers? I certainly haven’t. I’m sure they exist somewhere. But I think most of them only exist in a myth we’ve created in order to feel like test innovators.
Why am I such a skeptic about these folks? The truth is, writing repeatable, detailed, step-by-step test cases is really really really hard. If you’ve attempted to automate end-to-end business facing test cases for a complex system, you’ll know exactly what I mean.
On the other side, the test automation engineers are bored with manual testing and obsessed with trying to achieve the near-impossible ROI goals imposed by management. They want the manual testers to write them detailed tests because they don’t have time to learn the AUT (the application under test).
A tester who makes time for manual testing and test automation has the magic skillset necessary to be an effective test automation author. Each of us should strive to achieve this skillset.

5 comments:

  1. Jim Grey said...

    My experience with automation tools is that they just don't work smoothly enough for a manual tester to want to futz with them. They record a test, and the next time they run it, it fails; they sink lots of time into debugging, only to finally throw in the towel and decide automation isn't worth it.

    What I'm trying where I work is to have a small, tight team of automation engineers who build a framework on which tests can be built. We've built a keyword-driven framework on WatiN; tests are basically spreadsheets, with the test steps spelled out in order, that the framework essentially interprets at runtime. Some of our testers have learned how to build tests on our framework. We haven't rolled this out to all testers yet, as we're finding that some testers can grapple with this and others can't. In Indianapolis, where we are, it's hard to find testers with coding skills -- building tests in our framework doesn't demand hard coding, but it does require a basic grasp of linear, step-by-step program logic. Some of our testers pick that up more easily than others.
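
    To make the idea concrete, here's a minimal sketch of what such a keyword-driven interpreter boils down to (Python used purely for illustration -- our actual framework is C# on WatiN, and the keywords and steps here are invented):

        # Minimal keyword-driven test interpreter (illustrative sketch).
        # Each test is a list of rows -- keyword plus arguments -- just like
        # the spreadsheet rows the framework reads; the "browser" is stubbed.

        def open_page(url):
            print(f"opening {url}")        # real framework: drive the browser

        def type_text(field, text):
            print(f"typing '{text}' into {field}")

        def verify_title(expected):
            actual = "Welcome"             # real framework: read from the page
            assert actual == expected, f"expected '{expected}', got '{actual}'"

        # Keyword table: maps spreadsheet keywords to handler functions.
        KEYWORDS = {
            "open": open_page,
            "type": type_text,
            "verify_title": verify_title,
        }

        def run_test(rows):
            """Interpret one test: each row is a keyword plus its arguments."""
            for keyword, *args in rows:
                KEYWORDS[keyword](*args)

        # One "spreadsheet" test, spelled out step by step:
        run_test([
            ("open", "http://example.com/login"),
            ("type", "username", "jgrey"),
            ("verify_title", "Welcome"),
        ])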

    The other thing we're doing is using a record-and-playback tool built into Microsoft Team Foundation Server 2010 (we're a C# .NET shop) to record the clickstream of their testing as they execute it. They can do that way faster than they can write out the test by hand. Our strategy right now is to build out a set of smoke tests across our sprawling application. The testers can execute those tests from memory for the most part. So they do, while recording, and they hand us the resulting clickstream file (with some notes about validation points, as the recording functionality isn't smart enough to figure those out), and my automation engineers convert those into spreadsheets. That goes pretty quickly for both parties, and it avoids the tester having to write a detailed step-by-step test script.
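
    The conversion step itself is mostly mechanical. Roughly, it looks like this (a simplified sketch: the real TFS action recording is a much richer XML file, and this flat action/target/value format is invented just to show the shape of the work):

        # Turn a recorded clickstream into rows for the keyword framework.
        # (Hypothetical flat format; the real recording is richer XML.)
        import csv

        # One recorded entry per UI action: action, target control, value.
        RECORDING = [
            ("click",  "LoginButton",  ""),
            ("type",   "SearchBox",    "blue widgets"),
            ("verify", "ResultsLabel", "12 results"),  # from the tester's notes
        ]

        with open("smoke_test.csv", "w", newline="") as out:
            writer = csv.writer(out)
            writer.writerow(["keyword", "target", "value"])  # framework header
            for action, target, value in RECORDING:
                writer.writerow([action, target, value])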

    We still have some hitches to work out in all of this, but this is the most workable situation I've been involved with, and I've been managing test automation for 10 years now.

  2. Albert Gareev said...

    A few excellent observations, but a conclusion I didn't expect.

    My 2 cents.

    * If test analysts behave as manual testers, tell them to be sapient testers.

    * If test automation engineers are bored with manual testing, either make them understand the superiority of sapient testers or let them go [out of your team].

    * Let the right test automation engineers do exploratory testing and learn about the app. As a bonus, they'll find some bugs.

    * Don't convert all testers into automators. Have one toolsmith, owner of the keyword-driven (at least!) automation framework, and 2-3 test automation analysts as an automation team serving the needs of 3-6 testing teams.

    Thank you,
    Albert Gareev
    http://automation-beyond.com/

  3. Anonymous said...

    Great read and very true... Manual testers should aspire to grow, and the only place to grow is into automated testing, which makes you more in demand and also takes your bug-finding ability to the next level...

  4. Rasmus said...

    I would like the Automated Test Engineer "profession" to just die already. It is just somebody who can work around the suck in one (usually expensive) functional test automation tool. They don't work with the rest of the team, know nothing about the AUT, and expect detailed test cases so they can "automate" them - detailed "manual" test cases look pretty much the same as automated tests in a keyword-driven framework. What value does the automation specialist provide?
    And you cannot get away from the fact that if you design your application without testability features in mind, then automating end-to-end business-facing tests is going to be hard. Dependencies in the app make test setup and teardown a mess that will break every time something changes. UI frameworks without testability features don't help either - clicking on X,Y coordinates and comparing bitmaps make tests even more brittle.
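    To make the brittleness concrete, compare the two clicks below (Selenium's Python bindings used purely as a stand-in for whatever tool you're stuck with; the URL and element ID are made up):

        # A layout tweak breaks the coordinate click; the ID-based click
        # survives anything short of renaming the control.
        from selenium import webdriver
        from selenium.webdriver.common.action_chains import ActionChains
        from selenium.webdriver.common.by import By

        driver = webdriver.Firefox()
        driver.get("http://example.com/orders")

        # Brittle: click whatever happens to sit at pixel (431, 212) today.
        ActionChains(driver).move_by_offset(431, 212).click().perform()

        # Robust: click the control by a stable ID exposed for testability.
        driver.find_element(By.ID, "save-order").click()
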
    I think the acceptance-test-driven development concept has promise; designing tests/examples early makes testability a first-class requirement. But again, current acceptance test tools (FitNesse, Cucumber) leave a lot to be desired.

  5. Brian said...

    I'm surprised by the sentiment of the post and the comments. Having been a functional/manual tester, I was moved into automation, and it was the best thing for me. I think there's a lot of fear in the comments here about automation tools not working "smoothly enough", which is pretty vague. Tools like Cucumber, using Ruby as the language, work amazingly well. I've been able to smoothly automate JSON schemas, JSON data, front-end UI, bug fixes, cross-browser compatibility, etc., and have it all pulled into Jenkins for reporting purposes.
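
    As one concrete example, a schema check is only a few lines once the tooling supports it (sketched here in Python with the jsonschema package rather than my Ruby setup -- the schema and response are made up, but the idea is identical):

        # Validate an API response against a JSON schema; raises on mismatch.
        import jsonschema

        schema = {
            "type": "object",
            "properties": {
                "id":   {"type": "integer"},
                "name": {"type": "string"},
            },
            "required": ["id", "name"],
        }

        response = {"id": 42, "name": "blue widget"}  # e.g. a parsed API reply
        jsonschema.validate(instance=response, schema=schema)
        print("schema check passed")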

    Regarding the free market economy - if you have a manual tester skill set, you'll probably get 20% less money than in an automation gig (all things being equal).

    Automation testers that don't want to manually test... well, I've seen a few, but not many. I know SDETs who manually test and don't complain.

    I think you're picking outliers (like some automation guy who refuses to do manual testing) and making them the "norm" for your analysis.

    It boils down to this - manual testing can be done by anyone, regardless of education. So there's more competition and ultimately less pay. The more skills you pick up to differentiate yourself from the "norm", the better... so you can pick up SQL/jQuery/HiveDB and a variety of other technologies... you can also pick up automation. It can't hurt.

    One thing learning to use an automation tool like Cucumber will impart is programming practice. You'll get a glimpse of what the developers are doing and be able to report bugs at a lower level, which better enables them to fix them. You can participate in code reviews and have a working knowledge of how the services and code work together in your gig.

    Where I work, they use Geb (Groovy-based), but on the side I use Cucumber (Ruby-based). Both have taught me a lot and got me to look under the hood - a place most (if not all) manual testers never look. How many manual testers are out there looking at Java code in an IDE? Participating in code reviews? Does it help to do that? Of course it does.


