During a recent phone call with Adam White, he said something I can’t stop thinking about. Adam recently took his test team through an exercise to track how much of their day was actually spent testing. The results were scary. Then Adam said it: “If you’re not operating the product, you’re not testing.” I can’t get that out of my head.

Each day I find myself falling behind on tests I wanted to execute. Instead, I typically end up fulfilling one of the following obligations:

  • Requirement walkthrough meetings
  • System design meetings
  • Writing test cases
  • Test case review meetings
  • Creating test data and preparing for a test
  • Troubleshooting build issues
  • Writing detailed bug reports
  • Bug review meetings
  • Meetings with devs b/c the tester doesn’t understand the implementation
  • Meetings with devs b/c the developer doesn’t understand the bug
  • Meetings with business b/c requirement gaps are discovered
  • Collecting and reporting quality metrics
  • Managing official tickets to push bits between various environments and satisfy SOX compliance
  • Updating the status and other values of tested requirements, test cases, and bugs
  • Attempting to capture executed exploratory tests
  • Responding to important emails (which arrive several per minute)

Nope, I don’t see "testing" anywhere in that list. Testing is what I attempt to squeeze in every day between all this other stuff. I want to change this. Any suggestions? Can anyone relate?


If your test automation doesn’t verify anything useful, it is essentially worthless. So first, there are some basic checks I decided to verify programmatically with JART; these are the checks that often fail during manual testing.

  • Can I access the report app for a given environment?
  • Does a working link to each report exist?
  • Does each report’s filter page display?
  • Does each report’s filter page display the expected filter controls I care about?

The above can be verified without even executing any reports. Piece of cake!
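
For the curious, here is roughly what those checks look like in QuickTest Pro’s VBScript. This is just a sketch; the object descriptions, control names, and report names are placeholders, not my AUT’s real identifiers.

  ' Sketch only: the object descriptions, control names, and report
  ' names below are placeholders, not my AUT's real identifiers.
  Dim reportName

  ' Can I access the report app for this environment?
  If Browser("title:=Report App").Page("title:=Report App").Exist(10) Then
      Reporter.ReportEvent micPass, "Access report app", "Report app loaded"
  Else
      Reporter.ReportEvent micFail, "Access report app", "Report app did not load"
  End If

  For Each reportName In Array("Report A", "Report B")   ' stand-in names
      ' Does a working link to the report exist?
      If Browser("title:=Report App").Page("title:=Report App") _
              .Link("text:=" & reportName).Exist(5) Then
          Browser("title:=Report App").Page("title:=Report App") _
              .Link("text:=" & reportName).Click
          ' Does the filter page display, with the controls I care about?
          If Browser("title:=Report App").Page("title:=Filters") _
                  .WebEdit("name:=StartDate").Exist(5) Then
              Reporter.ReportEvent micPass, reportName, "Filter page and controls displayed"
          Else
              Reporter.ReportEvent micFail, reportName, "Filter page or expected control missing"
          End If
          Browser("title:=Report App").Back
      Else
          Reporter.ReportEvent micFail, reportName, "Report link missing"
      End If
  Next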

Next, I need to verify each report executes with some flavor of expected results. Now I’m bumping it up a notch. There is an unlimited number of results I can expect for each report, and they all require knowledge or control of complex reportable business data. This also means I have to examine the report results, right? My AUT uses MS ActiveReports and displays results in an object not recognized by QuickTest Pro. According to the good folks at SQA Forums, the standard way to extract info from the results is to use the AcrobatReaderSDK, which I don’t have. The workaround, which I use, is to install a free app that converts PDF files to text files. I wrote a little procedure to save my report results as PDF files, then convert them to text files, which I can examine programmatically via QuickTest Pro. So far, it works great. The only disadvantage is the extra 5 seconds or so each conversion adds per report.
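
Here is a rough sketch of that detour. I’m assuming a command-line converter (something like the free pdftotext.exe) and made-up paths; swap in whatever converter you actually install.

  ' Sketch of the PDF-to-text detour. The converter and the paths are
  ' placeholders for whatever you actually install.
  Function ConvertReportPdfToText(pdfPath, txtPath)
      Dim fso
      Set fso = CreateObject("Scripting.FileSystemObject")

      ' Kick off the conversion and give it a few seconds to finish.
      SystemUtil.Run "C:\Tools\pdftotext.exe", """" & pdfPath & """ """ & txtPath & """"
      Wait 5   ' the extra ~5 seconds per report mentioned above

      ' Hand the text back so the caller can examine it programmatically.
      If fso.FileExists(txtPath) Then
          ConvertReportPdfToText = fso.OpenTextFile(txtPath, 1).ReadAll   ' 1 = ForReading
      Else
          ConvertReportPdfToText = ""
      End If
  End Function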

So what am I examining in the report results for my verifications? So far, I am just looking at each report’s cover page, which displays each filter value that was passed in along with its filter name (e.g., “Start Date = 3/20/2006”). If the cover page comes back as expected, I have verified that the AUT’s UI is passing the correct filter parameters to the report services. This has been a significant failure point in the past, which is no surprise because the UI devs and the service devs communicate poorly with each other.
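
A minimal sketch of that cover page check, assuming the text returned by the conversion routine above. The filter name/value pairs come from my Excel matrix; nothing is hardcoded in the real script.

  ' Check that a filter I passed in shows up on the report's cover page.
  Function VerifyCoverPage(coverText, filterName, filterValue)
      Dim expected
      expected = filterName & " = " & filterValue      ' e.g. "Start Date = 3/20/2006"
      If InStr(coverText, expected) > 0 Then
          Reporter.ReportEvent micPass, "Cover page", "Found: " & expected
          VerifyCoverPage = True
      Else
          Reporter.ReportEvent micFail, "Cover page", "Missing: " & expected
          VerifyCoverPage = False
      End If
  End Function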

Currently, JART verifies 59 reports with up to 9 filters on each, and a full run takes about 1 hour to complete. JART is ready to perform my next sanity test when we go live. So far I have put in about 24 hours of JART development.

I’ll discuss the simple error handling JART uses in a future post.

Note: The failures in the test run result summary above were the result of QuickTest Pro not finding the text file containing the converted report results. I couldn’t repro this JART error, but now I may have to invest time researching the fluke and determining how to handle it. That is time not spent testing my AUT.
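
If I do end up handling it, the simplest guard would probably be to poll for the converted text file rather than assume it appears instantly. A rough sketch (the timeout is a guess):

  ' Poll for the converted text file before giving up (sketch only).
  Function WaitForFile(filePath, timeoutSeconds)
      Dim fso, elapsed
      Set fso = CreateObject("Scripting.FileSystemObject")
      elapsed = 0
      Do While Not fso.FileExists(filePath) And elapsed < timeoutSeconds
          Wait 1
          elapsed = elapsed + 1
      Loop
      WaitForFile = fso.FileExists(filePath)
  End Function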

An early decision I had to make was whether I should programmatically determine which reports were available to test and which of their parameters were required, etc., or whether I should track my own expectations for the reports and the parameters owned by each. I went with the latter because I don’t trust the devs to keep the former stable. JART needs the ability to determine when the wrong reports get released, which is a common dev mistake.

Since I have about 150 distinct reports, each with its own combination of shared filter controls and possible filter values, I made a matrix in MS Excel. The rows represent the reports, the columns represent the filter controls, and the intersections hold the filter values JART passes in for each report’s filter criteria controls. This single spreadsheet controls all the tests JART will execute.
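
In QuickTest Pro terms, the matrix just gets pulled into the run-time DataTable at the start of the run. A sketch, with made-up file, sheet, and column names:

  ' Pull the Excel matrix into QTP's run-time DataTable (sketch; the
  ' file, sheet, and column names are made up).
  '
  '   ReportName | StartDate | EndDate   | Region | ...
  '   Report A   | 3/20/2006 | 3/27/2006 | West   |
  '   Report B   | 1/1/2006  |           | East   |
  '
  DataTable.AddSheet "ReportMatrix"
  DataTable.ImportSheet "C:\JART\ReportMatrix.xls", "Reports", "ReportMatrix"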

Another advantage, for me, of controlling the tests via an Excel spreadsheet is that my BL already maintains an Excel spreadsheet that specifies which of the 150 reports should be available in each build. The BL’s list can now control which reports JART tests, just as it controlled which reports I tested manually.

JART simply loops through each report in said matrix and runs the standard verifications against each. Verifications are important, and tricky for report AUTs, so I’ll save those for the next post.
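
At the top level, the driver loop looks roughly like this. The helper names (SetFilters, RunReport, RunStandardVerifications) and the InBuild flag column are hypothetical stand-ins for JART routines, not real QuickTest Pro calls.

  Dim row, reportName
  For row = 1 To DataTable.GetSheet("ReportMatrix").GetRowCount
      DataTable.GetSheet("ReportMatrix").SetCurrentRow row
      reportName = DataTable.Value("ReportName", "ReportMatrix")

      ' Honor the BL's list: skip reports not expected in this build
      ' (assumed "InBuild" flag column).
      If UCase(DataTable.Value("InBuild", "ReportMatrix")) = "Y" Then
          SetFilters reportName                ' fill filter controls from this row's values
          RunReport reportName                 ' execute the report
          RunStandardVerifications reportName  ' the verifications I'll cover next post
      End If
  Next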


