When Poor Test Documentation Hurts
5 comments Posted by Eric Jacobson at Thursday, December 19, 2013
I would much rather test than create test documents. Adam White once told me, “If you’re not operating the product, you’re not testing”.
It’s soooooo easy to skip all documentation and dive right into the testing. It normally results in productive testing, and nobody misses the documents. Until…three years later, when the prog makes a little change to a module that hasn’t been tested since. The team says the change is high risk and asks you which tests you executed three years ago and how long it took.
Fair questions. I think we, as testers, should be able to answer. Even the most minimal test documentation (e.g., test fragments written in notepad) should be able to answer those questions.
If we can’t answer relatively quickly, we may want to consider recording better test documentation.
Failure Story #3 – Failed Conference Proposal
3 comments Posted by Eric Jacobson at Friday, December 06, 2013
Warning: This is mostly a narcissistic post that will add little value to the testing community.
I’ve been pretty depressed about my proposal not getting picked for Let’s Test 2014. Each of my proposals has been picked for STPCon and STAR over the past three years; I guess I was getting cocky. I put all my eggs in one basket and only proposed to Let’s Test. My wife and I were planning to make a vacation out of it…our first trip to Scandinavia together.
Despite my rejection, my VP graciously offered to send me as an attendee, but I wallowed in my own self-pity and turned her down. In fact, I decided not to attend any test conferences in 2014. Pretty bitter, huh?
I know I could have pulled off a kick-ass talk with the fairly original and edgy topic I submitted. I dropped names. I got referrals from the right people. My topic fit the conference theme perfectly, IMO. So why didn’t I make the cut?
The Let’s Test program chairs have not responded to my request for “what I could have done differently to get picked”. Lee Copeland, the STAR program chair, was always helpful in that respect. But I don’t blame the Let’s Test program chairs. Apparently program chairs have an exhausting job, and they get requests for feedback from hundreds of rejected speakers.
Fortunately, my mentor and friend, Michael Bolton read my proposal and gave me some good honest feedback on why I didn’t get picked. He summarized his feedback into three points which I’ll paraphrase:
- A successful pitch to Let’s Test involves positioning your talk right in the strike zone of an experience report. You seemed to leave out the teensy, weensy little detail that you’re an N-year test manager at Turner, and that you’re telling a story about that here.
- Apropos of that, tell us about the story that you’re going to tell. You’ve got a bunch of points listed out, but they seem disjointed and the through line isn’t clear to me. For example, what does the second point have to do with the first? The fourth with the third?
- Drop the dopey idea of “learning objectives”, which is far less important at Let’s Test than it may be at other conferences.
Bolton also directed me to his tips on writing a killer conference proposal, which make my How To Speak At a Testing Conference look amateur at best.
So there it is. One of my big testing-related failure stories. Wish me luck next year when I give it another go, for Let’s Test 2015…man, that seems a long way off.
Failure Story #2 – This App Will Never Be Automated
6 comments Posted by Eric Jacobson at Tuesday, November 26, 2013
Here’s another failure story, per the post where I complained about people not telling enough test failure stories.
Years ago, after learning about Keyword-Driven Automation, I wrote an automation framework called OKRA (Object Keyword-Driven Repository for Automation). @Wiggly came up with the name. Each automated check was written as a separate Excel worksheet, using dynamic dropdowns to select from available Action and Object keywords in Excel. The driver was written in VBScript via QTP. It worked, for a little while. However:
- One Automator (me) could not keep up with 16 programmers. The checks quickly became too old to matter. FAIL!
- An Automator with little formal programming training, writing half-ass code in VBScript, could not get help from a team of C#-focused programmers. FAIL!
- The product under test was a .Net Winforms app full of important drag-n-drop functionality, sitting on top of constantly changing, time-sensitive, data. Testability was never considered. FAIL!
- OKRA was completely UI-based automation. FAIL!
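For readers who haven’t seen keyword-driven automation, the core idea is small: checks are tables of keyword rows, and a driver dispatches each row to an action. Here’s a minimal sketch in Python rather than OKRA’s real VBScript/QTP, with invented keywords and a fake app object standing in for the UI:

```python
# Minimal keyword-driven driver sketch. The real OKRA framework read rows
# from Excel worksheets and drove a UI via QTP; everything here is invented
# for illustration.

# Keyword table: each tuple is (action, object, argument), like a worksheet row.
CHECK_ROWS = [
    ("type", "username_box", "ejacobson"),
    ("type", "password_box", "secret"),
    ("click", "login_button", None),
    ("verify_text", "banner", "Welcome"),
]

class FakeApp:
    """Stands in for the application under test."""
    def __init__(self):
        self.fields = {"banner": ""}

    def type(self, obj, text):
        self.fields[obj] = text

    def click(self, obj):
        if obj == "login_button":
            self.fields["banner"] = "Welcome"

    def text_of(self, obj):
        return self.fields.get(obj, "")

def run_check(app, rows):
    """Dispatch each keyword row to the matching action; assert on verifies."""
    for action, obj, arg in rows:
        if action == "type":
            app.type(obj, arg)
        elif action == "click":
            app.click(obj)
        elif action == "verify_text":
            assert arg in app.text_of(obj), f"{obj}: expected {arg!r}"
        else:
            raise ValueError(f"unknown keyword: {action}")
    return True

if __name__ == "__main__":
    print(run_check(FakeApp(), CHECK_ROWS))
```

The appeal is that non-programmers only touch the keyword table; the trap, as the list above shows, is that somebody still has to maintain the driver and keep the table current.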
Later, a product programmer took an interest in developing his own automation framework. It would allow manual testers to write automated checks by building visual workflows. This was a Microsoft technology called MS Workflow or something like that. The programmer worked in his spare time over the course of about a year. It eventually faded into oblivion and was never introduced to testers. FAIL!
Finally, I hired a real automator, with solid programming skills, and attempted to give it another try. This time we picked Microsoft’s recently launched CodedUI framework and wrote the tests in C# so the product programmers could collaborate. I stood in front of my SVP and project team and declared,
“This automation will shave 2 days off our regression test effort each iteration!”
However:
- The automator was often responsible for writing automated checks for a product they barely understood. FAIL!
- Despite the fact that CodedUI was marketed by Microsoft as being the best automation framework for .Net Winform apps, it failed to quickly identify most UI objects, especially for 3rd party controls.
- Although, at first, I pushed for significant amounts of automation below the presentation layer, the automator focused more energy on UI automation. I eventually gave in too. The tests were slow at best, and human testers could not afford to wait. FAIL! Note: this was not the automator’s failure; it was my poor direction.
At this point, I’ve given up all efforts to automate this beast of an application.
Can you relate?
Be Careful When Testing With Kitchen Windows
1 comment Posted by Eric Jacobson at Tuesday, November 19, 2013
Have you ever been to a restaurant with a kitchen window? Well, sometimes it may be best not to show the customers what the chicken looks like until it is served.
A tester on my team has something similar to a kitchen window for his automated checks; the results are available to the project team.
Here’s the rub:
His new automated check scenario batches are likely to result in…say, a 10% failure rate (e.g., 17 failed checks). These failures are typically bugs in the automated checks, not the product under test. Note: this project only has one environment at this point.
When a good curious product owner looks through the kitchen window and sees 17 Failures, it can be scary! Are these product bugs? Are these temporary failures?
Here’s how we solved this little problem:
- Most of the time, we close the curtains. The tester writes new automated checks in a sandbox, debugs them, then merges them to a public list.
- When the curtains are open, we are careful to explain, “this chicken is not yet ready to eat”. We added an “Ignore” attribute to the checks so they can be filtered from sight.
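The “Ignore” idea can be sketched in a few lines. This is a Python stand-in, not the team’s real framework; the decorator name and check names are made up:

```python
# Sketch: tag not-yet-ready checks so public reporting can filter them out.
# The "ignore" decorator and the check names are illustrative only.

def ignore(check):
    """Mark a check as sandbox-only so reporting skips it."""
    check.ignored = True
    return check

def check_login():
    return "pass"

@ignore
def check_new_report():
    # Still being debugged, so it stays hidden from the project team.
    return "fail"

ALL_CHECKS = [check_login, check_new_report]

def public_results(checks):
    """Run only the checks that are ready for the kitchen window."""
    return {c.__name__: c() for c in checks if not getattr(c, "ignored", False)}

if __name__ == "__main__":
    print(public_results(ALL_CHECKS))  # only check_login appears
```

Test frameworks like NUnit and MSTest ship a similar attribute out of the box; the point is that filtering happens at the reporting layer, so curious stakeholders never see the half-cooked chicken.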
Failure Story #1 – BDD/ATDD Failure To Launch
3 comments Posted by Eric Jacobson at Wednesday, November 13, 2013
BDD/ATDD is all the rage these days. The cynic in me took a cheap shot at it here. But the optimist in me really REALLY thinks it sounds cool. So I set off to try it….and failed twice.
First Fail:
I’m not involved in many greenfield projects, so I attempted to convince my fellow colleagues to try BDD with their greenfield project. I started with the usual emails, chock-full of persuasive BDD links to videos and white papers. Weeks went by with no response. Next, we scheduled a meeting so I could pitch the idea to said project team. To prepare, I read Markus Gärtner’s “ATDD By Example” book, took my tester buddy, Alex Kell, out to lunch for an ATDD Q & A, and read a bunch of blog posts.
I opened my big meeting by saying, “You guys have an opportunity to do something extraordinary, something that has not been done in this company. You can be leaders.” (It played out nicely in my head beforehand.) I asked the project team to try BDD, I proposed it as a 4 to 6 month pilot, attempted to explain the value it would bring to the team, and suggested roles and responsibilities to start with.
Throughout the meeting I encountered reserved reluctance. At its low point, the discussion morphed into whether or not the team wanted to bother writing any unit tests (regardless of BDD). At its high point, the team agreed to do their own research and try BDD on their prototype product. The team’s tester walked away with my “ATDD By Example” book and I walked away with my fingers crossed.
Weeks later, I was matter-of-factly told by someone loosely connected to said project team, “Oh, they decided not to try BDD because the team is too new and the project is too important”. It’s that second part that always makes me shake my head.
Second Fail:
By golly I’m going to try it myself!
One of my project teams just started a small web-based spin-off product, a feedback form. I don’t normally have the luxury of testing web products, and it seemed simple enough, so I set out to try BDD on my own. I chose SpecFlow and spent several hours setting up all the extensions and NuGet packages I needed for BDD. I got the sample Gherkin test written and executing, and then my test manager job took over, flinging me all kinds of higher priority work. Three weeks later, the feedback form product is approaching code complete and I realize it just passed me by.
…Sigh.
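For the curious, the kind of scenario I had in mind looked roughly like this. It’s a hand-rolled Python sketch of the Given/When/Then flow; the real attempt used SpecFlow binding Gherkin text to C# step methods, and the feedback-form behavior here is invented:

```python
# Given/When/Then sketch of one feedback-form scenario. SpecFlow wires
# Gherkin sentences to step methods; this fakes the same structure by hand.

class FeedbackForm:
    """Toy model of the feedback form under test."""
    def __init__(self):
        self.comment = ""
        self.submitted = False

    def enter_comment(self, text):
        self.comment = text

    def submit(self):
        # Only non-blank comments are accepted.
        if self.comment.strip():
            self.submitted = True

def scenario_submit_feedback():
    # Given a visitor on the feedback form
    form = FeedbackForm()
    # When they enter a comment and submit
    form.enter_comment("Great product")
    form.submit()
    # Then the feedback is recorded
    assert form.submitted
    return form

if __name__ == "__main__":
    scenario_submit_feedback()
    print("scenario passed")
```

The value of the Gherkin layer isn’t the syntax; it’s that the team agrees on these sentences before the code exists, which is exactly the conversation I never got to have.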
…are not always the full truth. Is that hurting our craft?
Last week, I attended the first Software Testing Club Atlanta Meetup. It was organized by Claire Moss and graciously hosted by VersionOne. The format was Lean Coffee, which was perfect for this meeting.
Photo by Claire Moss
I’m not going to blog about the discussion topics themselves. Instead, I would like to blog about a familiar Testing Story pattern I noticed:
During the first 2 hours, it seemed to me, we were telling each other the testing stories we wanted to believe, the stories we wanted each other to believe. We had to make first impressions and establish our personal expertise, I guess. But during the 3rd hour, we started to tell more candid stories, about our testing struggles and dysfunctions. I started hearing things like, “we know what we should be doing, we just can’t pull it off”. People who, at first impression, seemed to have it all together, seemed a little less intimidating now.
When we attend conference talks, read blog posts, and socialize professionally, I think we are in a bubble of exaggerated success. The same thing happens on Facebook, right? And people fall into a trap: The more one uses Facebook, the more miserable one feels. I’m probably guilty of spreading exaggerated success on this blog. I’m sure it’s easier, certainly safer, to leave out the embarrassing bits.
That being said, I am going to post some of my recent testing failure stories on this blog in the near future. See you soon.
Testing Against Live Read-Only Production Data
3 comments Posted by Eric Jacobson at Tuesday, October 08, 2013
My data warehouse project team is configuring one of our QA environments to be a dynamic read-only copy of production. I’m salivating as I try to wrap my head around the testing possibilities.
We are taking about 10 transactional databases from one of our QA environments, and replacing them with 10 databases replicated from their production counterparts. This means, when any of our users perform a transaction in production, said data change will be reflected in our QA environment instantly.
Expected Advantages:
- Excellent Soak Testing – We’ll be able to deploy a pre-production build of our product to our Prod-replicated-QA-environment and see how it handles actual production data updates. This is huge because we have been unable to find some bugs until our product builds experience real live usage.
- Use real live user scenarios to drive tests – We have a suite of automated checks that invoke fake updates in our transactional databases, then expect data warehouse updates within certain time spans. Until now, the checks have relied on fake updates. With the Prod-replicated-QA-environment, we are attempting to programmatically detect real live data updates via logging, and measure those against expected results.
- Comparing reports – A new flavor of automated checks is now possible. With the Prod-replicated-QA-environment, we are attempting to use production report results as a golden master to compare to QA report results sitting on the pre-production QA build data warehouse. Since the data warehouse data to support the reports should be the same, we can expect the report results to match.
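The report-comparison idea above reduces to a golden-master diff. Here’s a sketch with invented report rows; the real checks would pull the production and QA report outputs from their respective environments:

```python
# Golden-master sketch: production report results vs. QA report results
# built from the same replicated data. Any mismatch is worth investigating.
# The row shapes and values here are made up for illustration.

def diff_reports(prod_rows, qa_rows):
    """Return (key, prod_row, qa_row) triples for rows that differ."""
    prod = {r["id"]: r for r in prod_rows}
    qa = {r["id"]: r for r in qa_rows}
    mismatches = []
    for key in sorted(set(prod) | set(qa)):
        if prod.get(key) != qa.get(key):
            mismatches.append((key, prod.get(key), qa.get(key)))
    return mismatches

PROD = [{"id": 1, "total": 100}, {"id": 2, "total": 250}]
QA = [{"id": 1, "total": 100}, {"id": 2, "total": 240}]

if __name__ == "__main__":
    for key, p, q in diff_reports(PROD, QA):
        print(f"report row {key}: prod={p} qa={q}")
```

Because production is the golden master, a clean diff is strong evidence the pre-production build loads the warehouse correctly; a dirty diff doesn’t say who’s wrong, only where to look.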
Expected Challenges:
- The Prod-replicated-QA-environment will be read-only. This means instead of creating fake user actions whenever we want, we will need to wait until they occur. What if some don’t occur…within the soak test window?
- No more data comparing? – Comparing transactional data to data warehouse data has always been a bread-and-butter automated check for us. These checks verify data integrity and data loading. Comparing a real live, quickly changing source to a slowly updating target will be difficult at best.
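One mitigation for that last challenge is to turn the comparison into a latency-window check: snapshot a source row once, then poll the warehouse until it matches or a deadline passes. This is a speculative sketch with invented function names, not our actual check:

```python
# Latency-window sketch for comparing a fast-changing source to a slowly
# updating target. read_source/read_target are caller-supplied callables;
# all names and the toy stand-ins below are hypothetical.
import time

def wait_for_warehouse(read_source, read_target, timeout_s=60.0, poll_s=1.0):
    """Snapshot the source once, then poll the target until it catches up.

    Returns the observed lag in seconds, or None if the target never matched
    within the timeout (a finding worth a human look, not automatically a bug).
    """
    snapshot = read_source()
    start = time.monotonic()
    while time.monotonic() - start < timeout_s:
        if read_target() == snapshot:
            return time.monotonic() - start
        time.sleep(poll_s)
    return None

if __name__ == "__main__":
    # Toy stand-ins: the "warehouse" catches up on the third poll.
    state = {"polls": 0}

    def src():
        return 42

    def tgt():
        state["polls"] += 1
        return 42 if state["polls"] >= 3 else 0

    lag = wait_for_warehouse(src, tgt, timeout_s=5.0, poll_s=0.05)
    print(lag is not None)
```

The snapshot is the key move: the source can keep changing after it’s taken, and the check still has a fixed value to wait for in the target.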