Here’s another failure story, following up on the post where I complained that people don’t tell enough test failure stories.
Years ago, after learning about Keyword-Driven Automation, I wrote an automation framework called OKRA (Object Keyword-Driven Repository for Automation). @Wiggly came up with the name. Each automated check was written as a separate Excel worksheet, using dynamic dropdowns to select from the available Action and Object keywords. The driver was written in VBScript via QTP. It worked for a little while. However:
- One Automator (me) could not keep up with 16 programmers. The checks quickly became too old to matter. FAIL!
- An Automator with little formal programming training, writing half-assed VBScript code, could not get help from a team of C#-focused programmers. FAIL!
- The product under test was a .NET WinForms app full of important drag-and-drop functionality, sitting on top of constantly changing, time-sensitive data. Testability was never considered. FAIL!
- OKRA was completely UI-based automation. FAIL!
Later, a product programmer took an interest in developing his own automation framework, one that would let manual testers write automated checks by building visual workflows. It was built on a Microsoft technology called MS Workflow or something like that. He worked on it in his spare time over the course of about a year. It eventually faded into oblivion and was never introduced to testers. FAIL!
Finally, I hired a real automator with solid programming skills and gave it another try. This time we picked Microsoft’s recently launched CodedUI framework and wrote the checks in C# so the product programmers could collaborate. I stood in front of my SVP and project team and declared,
“This automation will shave 2 days off our regression test effort each iteration!”
- The automator was often responsible for writing automated checks for a product they barely understood. FAIL!
- Despite being marketed by Microsoft as the best automation framework for .NET WinForms apps, CodedUI failed to quickly identify most UI objects, especially third-party controls. FAIL!
- Although I initially pushed for significant amounts of automation below the presentation layer, the automator focused more energy on UI automation, and I eventually gave in too. The checks were slow at best, and human testers could not afford to wait. FAIL! Note: this was not the automator’s failure; it was my poor direction.
At this point, I’ve given up all efforts to automate this beast of an application.
Can you relate?
Have you ever been to a restaurant with a kitchen window? Well, sometimes it may be best not to show the customers what the chicken looks like until it is served.
A tester on my team has something similar to a kitchen window for his automated checks; the results are available to the project team.
Here’s the rub:
His new batches of automated check scenarios are likely to come back with, say, a 10% failure rate (e.g., 17 failed checks). These failures are typically bugs in the automated checks themselves, not in the product under test. Note: this project has only one environment at this point.
When a good, curious product owner looks through the kitchen window and sees 17 failures, it can be scary! Are these product bugs? Are these temporary failures?
Here’s how we solved this little problem:
- Most of the time, we close the curtains. The tester writes new automated checks in a sandbox, debugs them, then merges them to a public list.
- When the curtains are open, we are careful to explain, “this chicken is not yet ready to eat”. We added an “Ignore” attribute to the checks so they can be filtered from sight.
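In MSTest-style C#, the stack this team was using, that “Ignore” curtain can be as simple as the framework’s built-in attribute. A minimal sketch; the class and method names here are hypothetical, not the team’s actual checks:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class FeedbackChecks  // hypothetical example class
{
    [TestMethod]
    public void SubmitFeedback_SavesRecord()
    {
        // A stable, "ready to eat" check runs normally
        // and shows up in team-facing results.
    }

    [TestMethod]
    [Ignore]  // still being debugged in the sandbox; filtered from sight
    public void SubmitFeedback_HandlesDragAndDrop()
    {
        // A new, not-yet-trustworthy check stays behind the curtain
        // until it earns its way onto the public list.
    }
}
```

Ignored checks are skipped by the test runner rather than reported as failures, which keeps the public results honest without deleting the work in progress.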
BDD/ATDD is all the rage these days. The cynic in me took a cheap shot at it here. But the optimist in me really REALLY thinks it sounds cool. So I set off to try it… and failed twice.
I’m not involved in many greenfield projects, so I attempted to convince my fellow colleagues to try BDD with their greenfield project. I started with the usual emails, chock-full of persuasive BDD links to videos and white papers. Weeks went by with no response. Next, we scheduled a meeting so I could pitch the idea to said project team. To prepare, I read Markus Gärtner’s “ATDD By Example” book, took my tester buddy, Alex Kell, out to lunch for an ATDD Q&A, and read a bunch of blog posts.
I opened my big meeting by saying, “You guys have an opportunity to do something extraordinary, something that has not been done in this company. You can be leaders.” (It played out nicely in my head beforehand.) I asked the project team to try BDD, proposed it as a 4- to 6-month pilot, attempted to explain the value it would bring to the team, and suggested roles and responsibilities to start with.
Throughout the meeting I encountered reserved reluctance. At its low point, the discussion morphed into whether the team wanted to bother writing any unit tests at all (regardless of BDD). At its high point, the team agreed to do their own research and try BDD on their prototype product. The team’s tester walked away with my “ATDD By Example” book and I walked away with my fingers crossed.
Weeks later, I was matter-of-factly told by someone loosely connected to said project team, “Oh, they decided not to try BDD because the team is too new and the project is too important”. It’s that second part that always makes me shake my head.
By golly I’m going to try it myself!
One of my project teams just started a small web-based spin-off product, a feedback form. I don’t normally have the luxury of testing web products, and it seemed simple enough, so I set out to try BDD on my own. I chose SpecFlow and spent several hours setting up all the extensions and NuGet packages I needed for BDD. I got the sample Gherkin test written and executing, and then my test manager job took over, flinging me all kinds of higher-priority work. Three weeks later, the feedback form product is approaching code complete and I realize it has passed me by.
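For flavor, a SpecFlow check starts from a Gherkin feature file, which is what made the approach feel approachable for a feedback form. The wording below is a hypothetical sketch, not the actual scenario I wrote:

```gherkin
Feature: Feedback form
  As a user, I want to submit feedback
  so that the product team can hear from me

  Scenario: Submitting valid feedback
    Given I am on the feedback form
    When I enter "Great product!" and press Submit
    Then I should see a confirmation message
```

SpecFlow then binds each Given/When/Then line to a C# step-definition method, which is where the several hours of extension and NuGet setup pay off.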