It’s true what they say: writing automated tests is waaaay more fun than manual testing. Unfortunately, fun does not always translate into value for your team.

After attempting to automate an AUT for several years, I eventually came to the conclusion that it was not the best use of my time. My test team’s resources and skills, the available tools, and the design and complexity of that UI-heavy WinForm AUT were a poor mix for automated testing. In the end, I had developed a decent framework, but it consisted of only 28 tests that never found bugs and broke every other week.

Recent problems with one of my new AUTs have motivated me to write a custom automated test framework and give the whole automated test thing another whirl.

This new AUT has about 50 reports, each with various filters. I’m seeing a trend where the devs break various reports with every release. Regression testing is as tedious as it gets (completely brainless; perfect to automate), and the devs are gearing up to release another 70 reports! …Gulp.

In this case, several aspects are pointing towards automated test potential.

  • The UI is web-based (easier to hook into)
  • The basic test is ripe for a data-driven automation framework: crawl through all 120 reports and perform nearly the same actions and verifications on each (see the sketch after this list).
  • Most of the broken-report errors I’m targeting are easy to identify objectively; a big old nasty error displays.
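
To make the idea concrete, here is a minimal sketch of the kind of data-driven loop I have in mind. This is not JART itself; the report URLs and the error-banner locator are placeholders I made up, and it assumes Selenium with Chrome purely for illustration.

```python
# Minimal data-driven sketch (not JART itself): visit each report and
# flag any page that renders the big nasty error banner.
from selenium import webdriver
from selenium.webdriver.common.by import By

# Hypothetical inputs -- the real framework will build these differently.
REPORT_URLS = [
    "http://aut.example.com/reports/1",
    "http://aut.example.com/reports/2",
    # ...one entry per report, eventually ~120 of them
]
ERROR_BANNER = (By.CSS_SELECTOR, ".error-banner")  # assumed locator

def run():
    driver = webdriver.Chrome()
    failures = []
    try:
        for url in REPORT_URLS:
            driver.get(url)
            # Same verification on every report: no error banner rendered.
            if driver.find_elements(*ERROR_BANNER):
                failures.append(url)
    finally:
        driver.quit()
    print(f"{len(failures)} of {len(REPORT_URLS)} reports showed errors")
    for url in failures:
        print("  BROKEN:", url)

if __name__ == "__main__":
    run()
```

The point of the data-driven shape is that adding the next 70 reports should mean adding rows of data, not writing 70 new tests.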

I wrote the proof of concept framework last week and am trying to nail down some key decisions (e.g., passing in report parameters vs. programmatically determining them). My team needs me to keep testing, so I can only work on automation during my own time…so it’s slow going.
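On the parameter question, the simplest option I’m weighing is keeping each report’s filters in a data file that the framework reads, rather than discovering them from the UI. A hedged sketch of that option, with invented file and column names:

```python
# Sketch of the "pass parameters in" option: filters live in a data file,
# one row per report. The file name and columns here are invented.
import csv

def load_report_params(path="report_params.csv"):
    """Return a list of dicts like {'report': 'Daily Sales', 'filters': {...}}."""
    rows = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            report = row.pop("report")
            # Remaining non-empty columns are treated as filter name/value pairs.
            filters = {name: value for name, value in row.items() if value}
            rows.append({"report": report, "filters": filters})
    return rows
```

The alternative (programmatically reading the filter controls off each report page) needs no data file but ties the framework to the filter UI’s markup; that’s the trade-off I still have to settle.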

This is my kick-off post. I’ll explain more details in future posts. More importantly, I’ll tell you if it actually adds enough value to justify the time and maintenance it will take. And I promise not to sugar coat my answer the way some test automation folks do, IMO.

Oh, I’m calling it JART (Jacobson’s Automated Report Tester). Apparently JART is also an acronym for "Just a Real Tragedy." We’ll see.

During last fall’s STPCon, I attended a session about showing your team the value of testing. It was presented by a guy from Keen Consultants. He showed us countless graphs and charts we could use to communicate the value of testing to the rest of our team. Boring…zzzzzzzz.

In the spirit of my previous post, Can You Judge a Tester by Their Bug List Size?, here is a more creative approach that is way simpler and, IMO, more effective at communicating your value as a tester…wear it!

(I blurred out my AUT name)

You could change it up with the number of tests you executed, if that sounds more impressive to you. Be sure to wear your shirt on a day the users are learning your AUT. That way, you can pop into the training room and introduce yourself to your users. Most of them didn’t even know you existed. They will love you!

Now I just need to come up with an easy way to increase the bug count on my shirts (e.g., velcro numbers). Because, as all good testers know, the shirt is outdated within an hour or so.

Users change their minds. They save new items in your app, then want to delete those saved items and try again. Does your AUT support this behavior? Did you test for it?

My devs recently added a dropdown control with two values (i.e., DVS or ESP). Once I verified I could save and change the values, I stopped testing. Later, a user pointed out there is no way to remove the value from that (optional) field if you change your mind. Now, many of our dropdowns look like this:


While testing, I often look for data-saving triggers (e.g., a button that says “Save”). Then I ask myself, "Okay, I saved it by mistake; now what?"
Devs and BAs are good at designing the positive paths through your AUTs. But they often overlook the paths needed to support users who change their minds or make mistakes. Your AUT should allow users to correct their mistakes. If not, your team will get stuck writing DB scripts to correct production data for scenarios users could not fix on their own. It’s your job to show your team where these weaknesses are.

In one of James Whittaker’s recent webinars, he mentioned his disappointment when tester folks brag about bug quantities. It has been popular, lately, to not judge tester skills based on bug count. I disagree.

Last Monday night I had a rare sense of tester accomplishment. After putting in a 14-hour day, I had logged 32 bugs, mostly showstoppers; a personal record. I felt great! I couldn’t wait to hear the team’s reaction the next day. Am I allowed to feel awesome? Did I really accomplish anything? Damn right I did. I found many of those bugs by refining my techniques throughout the day as I became familiar with that dev’s mistakes. I earned my pride and felt like I made a difference.

But is it fair to compare the logged bug list of two testers to determine which tester is better? I think it is...over time. Poor testers can hide on a team because there are so few metrics to determine their effectiveness. Bug counts are the simplest metric and I think it’s okay to use them sometimes.

I work with testers of varying skill, and I see a direct correlation between skill and bug count. When a tester completes an entire day of work without having logged a single bug, I see a problem. The fact is, one logged bug proves at least some testing took place. No logged bugs could mean the AUT is rock solid. But it could also mean the tester was more interested in their Facebook account that day.

“If it ain’t broke, you’re not trying hard enough.”

This silly little cliché actually has some truth. When I test something new, I start out gentle, running the happy tests and following the scripted paths…the scenarios everybody discussed. If the AUT holds up, I bump it up a notch. And so on, until the bugs start to shake loose. That’s when testing gets fun. Logging all the stupid brainless happy path bugs is just busy work to get to the fun stuff. (Sorry, a little off subject there)

Anyway, from one tester to another, don’t be afraid to celebrate your bug counts and flaunt them in front of your fellow testers…especially if it makes you feel good.

BTW - Does anyone else keep a record of most bugs logged in a day? Can you beat mine? Oh, and none of my 32 got rejected. :)

Most bug report readers will agree that a simple “Expected Results” vs. “Actual Results” statement in the bug description will remove all ambiguity. But what about the bug title? Is a bug title supposed to include the expected results, the actual results, or both? Every time I log a bug, I pause to consider the various title possibilities. I want a unique but concise title that summarizes the bug, making the description as unnecessary as possible. My head swims with choices…

  • If user does X after midnight, user gets Y.
  • If user does X after midnight, user should not get Y.
  • If user does X after midnight, user should get Z.
  • If user does X after midnight, user gets Y instead of Z.
  • User should get Z when they do X after midnight.
  • etc…

Here is what I think.

There is no “best” bug title. Any bug title is good enough as long as it:

  • is unique within my bug tracking system
  • describes the problem to some extent
  • includes key words (someone will someday search for this bug by words in its title)

So unless someone convinces me otherwise, with a comment on this post, I have decided to just use the first distinct bug title popping into my mind and stop worrying about bug title perfection.
