I've been hearing this question around here lately, probably because one of last year's performance appraisal goals for the QA department was test automation. I think it went something like this...

Managers noticed testers were complaining about being too busy. So they handed out Mercury QuickTest Pro licenses to us (most of us had few automation skills, if any) and told us to start automating our tests because it would free up our time. Some of the managers made many of the "reckless assumptions" James Bach lists in his classic Test Automation Snake Oil article; I think that's why the plan fell flat.

Purchasing an application like QuickTest Pro can get most testers a handful of record-playback scripts that can provide superficial value. Anything beyond that requires a significant investment in learning, time, and creativity. For starters, one must select and understand an automation framework. But even after building the coolest automation framework in the world, one is still faced with the same damn question, "What should I automate?"

I have only been working with test automation seriously for about two years, but along the way my small team and I have learned enough to throw out a few suggestions about which tests are worth automating.
  • Automate tests that cannot be performed by a human. An easy place to start is performance testing from the user's perspective: how long does a given user-triggered action take? This test cannot be performed manually (unless the tester is willing to do the same thing over and over with a stopwatch, and the time spans are at least several seconds). This type of test is not designed to find bugs; it is designed to collect performance information. See the first sketch just after this list.
  • Another practical way to use a test automation tool is to write scripts that simply get your AUT into the precondition state necessary to perform other tests. Again, this type of automated test is not designed to find bugs. It's not even really a test; it's more like a macro. Yet a good tester can use it to their advantage.
  • Sanity or Smoke Testing – If your AUT undergoes frequent builds with new bits, or frequent builds of the same bits deployed to different environments, automated tests can catch configuration and regression problems. Building this type of automation library is more ambitious, and getting it to run unattended requires a seemingly infinite amount of error handling. A good place to start is getting a few positive tests to navigate through some critical areas of the AUT, performing validations along the way; see the second sketch after this list.
  • A final answer to the question of what to automate is "nothing". Don't automate anything. On smaller, in-house or custom apps that generally run on the same environment and seldom update, it would be silly to invest any significant effort into writing automated tests.
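Here's a minimal sketch of the timing idea from the first bullet, in Python rather than the VBScript QuickTest Pro uses, just to keep it short. The trigger_search() helper is hypothetical; it stands in for whatever call your automation tool makes to fire the user-triggered action being measured.

```python
import time
import statistics

def trigger_search():
    """Hypothetical stand-in for the user-triggered action under measurement,
    driven through whatever UI-automation tool you actually use."""
    ...

# Repeat the same action far more times than any human with a stopwatch
# would tolerate, then summarize the timings. There is no pass/fail verdict:
# the goal is to collect performance information, not to find bugs.
samples = []
for _ in range(50):
    start = time.perf_counter()
    trigger_search()
    samples.append(time.perf_counter() - start)

print(f"median: {statistics.median(samples):.3f}s   max: {max(samples):.3f}s")
```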
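And a sketch of the smoke-test bullet, using Python's unittest purely to show the shape. FakeAppDriver is a made-up stub (as are the credentials and account data); in practice its methods would be backed by your automation tool.

```python
import unittest

class FakeAppDriver:
    """Made-up stub standing in for a wrapper around your automation tool;
    every method and return value here is an assumption for illustration."""
    def launch(self): pass
    def close(self): pass
    def login(self, user, password): return True
    def open_account(self, account_id): return {"id": account_id, "status": "Active"}

class SmokeTest(unittest.TestCase):
    """A few positive paths through critical areas of the AUT, validating
    along the way, so a broken build or configuration surfaces quickly."""

    def setUp(self):
        self.app = FakeAppDriver()
        self.app.launch()          # start every test from a known state

    def tearDown(self):
        self.app.close()           # ...and end in one, too

    def test_login(self):
        self.assertTrue(self.app.login("smoke_user", "smoke_pass"))

    def test_open_account(self):
        self.app.login("smoke_user", "smoke_pass")
        account = self.app.open_account("ACME-001")
        self.assertEqual(account["status"], "Active")

if __name__ == "__main__":
    unittest.main()
```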
Bj Rollison made a great post on the kinds of tests he automates. I like this post because it reminds me of the test complexities of shrink-wrapped software and humbles my own test efforts, which often feel quite overwhelming.


Unless your team gets together for drinks after work, you need a better way to get to know each other than arguing over bugs. Bring in some board games or card games and invite your devs to play over lunch and watch the magic begin. Here we are playing R-Eco.


I'm lucky enough to work with Rob, a dev lead who happens to be a board game enthusiast with a collection of some 600+ games. Like most people, I felt a little awkward when Rob showed up at my cube two years ago and asked if I wanted to play a board game with some of the devs. We've been playing ever since, and we've gotten to know each other far better than when we used to stay at our desks shooting each other in Unreal Tournament death matches.

The trick is to find the right game: one that supports the right number of players, hits a sweet spot between luck and decision making, is easy to learn, and lasts about 30 to 45 minutes. Rob introduced us to Euro-games, which tend to be shorter than American games and usually involve more decision making. My favorites are the ones that scale from about six players down to two, so I can also play them at home with my wife.

Here are some of our favorites that fit into lunch breaks:
The more comfortable you and your team are with each other, the less time you will waste misunderstanding each other. Have fun!

I recently encountered a UI bug that was difficult to describe with words. Then I remembered Adam White's recent blog post, "Test Technique report - BB Test Assistant," and realized this was the perfect bug to attach a repro video to.

If your company doesn't have a BB Test Assistant license, Microsoft's free Windows Media Encoder 9 Series download includes an awesome screen-capture-to-video tool and a wizard that does all the setup for you. I've been having fun attaching videos to my bug reports, and since they include even more info than still screen captures, they'll hopefully improve bug turnaround.

Here's a sample video of a little MS Word bug James Whittaker describes in his book "How to Break Software". (The message indicating the index columns must be between 0 and 4 displays twice.)

As with manual tests, for each automated test we write, we must make a decision about where the test begins/ends and how much stuff it attempts to verify.

My previous post uses an example test of logging into an app and then editing an account. My perfect manual test would stick to the main test goal and verify only the account edit. But if I were to automate it, I would verify both the log-in page and the account edit. In fact, I would repeat the log-in verification even if that meant it appeared in every test. I do this for two reasons.

1. Automated verification is more efficient than manual verification.
2. Automated tests force extra steps and, unlike a human, can't improvise workarounds for upstream bugs.

In most cases, automated verification is more efficient than manual verification. Once the verification rules are programmed into the automated test, one no longer has to mentally determine whether said verification passes or fails. Sure, one can write just as many rules into a manual test, but it still takes a significant amount of time to visually check the UI. Worse yet, it takes a great deal of administrative work to record the results of manual verifications. So much time that I often get lazy and assume I will remember what I observed.
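As a rough illustration of what programming the verification rule buys you (everything here is invented for the sketch: read_displayed_total() stands in for whatever your automation tool would read from the UI, and the numbers are arbitrary):

```python
# The expected result is computed and compared in code, and the outcome is
# recorded automatically -- no eyeballing the UI, and no separate administrative
# work to write down what was observed.

def read_displayed_total():
    """Hypothetical helper: would read the total shown in the AUT's UI."""
    return 54.0   # hard-coded so this sketch runs on its own

def verify_invoice_total(line_items, tax_rate=0.08):
    expected = round(sum(line_items) * (1 + tax_rate), 2)
    actual = read_displayed_total()
    verdict = "PASS" if actual == expected else f"FAIL: expected {expected}, got {actual}"
    print(f"invoice total verification: {verdict}")
    return actual == expected

verify_invoice_total([10.00, 15.00, 25.00])   # expected 54.0 -> PASS
```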

With manual tests, we can think on the fly and use our AUT knowledge to get to the correct precondition state for each test. Automated tests, however, do not think on the fly, so we have to ensure each one begins and ends in some known state (e.g., the AUT is closed). This forces our automated tests to have many more steps than our manual tests. An upstream bug may not have much impact on a manual test if a human finds a workaround, but that same upstream bug will easily break an automated test if the test author did not plan for it. Thus, multiple verifications per test help us debug our automated tests and quickly spot upstream bugs (in both our AUT and our automated test library).
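Here's a sketch of how those extra verifications pay off when something upstream breaks; the driver is again a made-up stub, and the assertion messages do the "spot the upstream bug" work:

```python
import unittest

class FakeAppDriver:
    """Made-up stub; in practice these calls would drive the real AUT."""
    def launch(self): pass
    def close(self): pass
    def login(self, user, password): return True          # imagine this step breaking upstream
    def edit_account(self, account_id, **changes): return True

class EditAccountTest(unittest.TestCase):
    def setUp(self):
        self.app = FakeAppDriver()
        self.app.launch()                 # every test begins in a known state

    def tearDown(self):
        self.app.close()                  # ...and ends in one

    def test_edit_account(self):
        # Verify the extra upstream step even though the edit is the real goal:
        # if the log-in breaks, the failure message points there instead of
        # leaving us to puzzle over a mysterious account-edit failure.
        self.assertTrue(self.app.login("qa_user", "secret"),
                        "upstream failure: log-in broke before we reached the edit")
        self.assertTrue(self.app.edit_account("ACME-001", name="Acme Corp."),
                        "the actual goal of this test failed: account edit")

if __name__ == "__main__":
    unittest.main()
```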

Most testers/devs agree that multiple problems should not be documented in the same bug report. Should similar logic apply to tests? Maybe multiple verifications that could fail should not be in the same test.

Let’s say each of our tests includes multiple verifications. If Test#1 gets an overall PASS result, it tells us everything verified in Test#1 worked as expected. I’m okay with this. However, if it gets a FAIL result, it only tells us that at least one thing did not work as expected. Anyone who sees the failed test does not really understand the problem unless they drill down into some type of documented result details. And how do we know when we can execute this test again?

The simpler our tests, the easier they are to write, and the fewer working things they depend on in order to execute. I'll use an exaggerated example. The following test verifies a user can log in and edit an account.

1. Log in. Expected: Log in page works.
2. Edit an account. Expected: Account is edited.

What is this test really interested in? What if the log-in page is broken but you know a workaround to get to the same logged-in state? Can you test editing an account? And if it works, should this test still pass?
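For what it's worth, if a test like this were automated, one way to keep it focused on its real interest is to treat the log-in as a precondition rather than a verification. A rough sketch with a made-up driver stub: a broken log-in then shows up as a blocked (skipped) test, not as a failure of the account-edit check.

```python
import unittest

class FakeAppDriver:
    """Made-up stub standing in for your automation tool."""
    def login(self, user, password): return True
    def edit_account(self, account_id, **changes): return True

class EditAccountTest(unittest.TestCase):
    def setUp(self):
        # Precondition, not a verification: reach the logged-in state however
        # we can. If we can't, the test is reported as blocked (skipped) rather
        # than failing the thing we actually care about.
        self.app = FakeAppDriver()
        if not self.app.login("qa_user", "secret"):
            self.skipTest("blocked: could not reach the logged-in state")

    def test_edit_account(self):
        # The one action and one expected result this test is interested in.
        self.assertTrue(self.app.edit_account("ACME-001", name="Acme Corp."))

if __name__ == "__main__":
    unittest.main()
```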

My ideal manual test is structured as follows (I’ll discuss automated tests next week).

Do A. Expect B.

It has one action and one expected result. Everything else I need to know prior to my action is documented in my test’s Preconditions. This helps me focus on what I am actually trying to test. If I discover upstream bugs along the way, good, I’ll log them. But they need not force this specific test to fail. Get it? Have you thought about this? Am I missing anything?


