'Twas the night before prod release, when all through the build,
Not a unit test was failing, the developers were thrilled;

The release notes were emailed to users with care,
In hopes that new features soon would be there;

The BA was nestled all snug in her chair,
With visions of magnitude ready to share;

And I on my QA box, trying not to be stressed,
Had just settled down for a last minute’s test;

When during my test there arose such a clatter,
I opened the error log to see what was the matter;

And what to my wondering eyes should appear,
But an unhandled fault, with its wording unclear;

When I showed it to dev, and he gave me a shrug,
I knew in a moment it must be a bug;

More rapid than eagles, dev’s cursing it came,
And he shouted at testers, and called us by name;

“Now, Jacobson! Now, Zacek! Now, Whiteside and Surapaneni!
On, Cagle! On, Addepalli, on Chang and Damidi!

Stop finding bugs in my web service call!
Now dash away! dash away! dash away all!"

And then, in a twinkling, I heard from the hall,
The tester who showed me, scripts can’t test it all;

As I rejected the build, and was turning around,
Into my cube, James Bach came with a bound;

He was dressed really plain, in a baseball-like cap,
And he patted my back for exploring my app;

He had a big white board and a little round belly,
That shook when he diagrammed like a bowlful of jelly;

He was chubby and plump, a right jolly old elf,
And he laughed when he saw RST on my shelf;

Then he spoke about testing, going straight to his work,
And attempted transpection, though he seemed like a jerk;

His eyes -- how they twinkled! his dice games, how merry!
He questioned and quizzed me and that part was scary!

He told me of lessons he taught at STARWEST,
Made an SBT charter and then told me to test;

Then I heard him exclaim, ere he walked out of sight
“Happy testing to all! ...just remember I’m right!”

Yesterday we had to roll back some production code to its previous build. It was essentially a performance problem. The new bits kept timing out for the types of requests users were making from the system.

Our poor, battered support team had to send out a user notice that said something like “Sorry users, for all the timeouts. We have just rolled back the code until we can fix the problems. Oh and by the way, that means you’ll no longer have these new features we just gave you.”

Shortly thereafter, the head of my department indirectly asked me why these issues were not caught in testing. …This is the stuff nightmares are made of. I felt like throwing up, resigning, or crying. Once my heart started beating again, I realized I had logged a bug for these problems. Exhale now.

The bug was fairly accurate, stating that users would get timeout errors if they attempted anything beyond a specific threshold. During a bug meeting, it was downgraded in priority and the team decided to move forward with the release. In hindsight, there was a bit of groupthink going on. We were so anxious to get the release out, we figured the users would put up with the performance problems. Boy, were we wrong.

Being a tester is scary. At any given moment, all hell may break loose in production. And when it does, asking why the tester didn’t find it is, of course, the fairest of questions.

I brainstormed with my report dev and we came up with two valuable JART expansions.

The first is coded and has already found two bugs (and saved me from days of boredom). Comprehensive data checking in the report output is complicated, but what about checking for data that is missing? My reports display the string “[Not Found]” when data fields get mapped incorrectly or other datasource problems occur. It only took about an hour to write a function that scans all report results looking for the string. Since JART already exported each report result to a text file, I simply automated Notepad’s Find functionality to look for specific strings passed into my SearchReportResultsForString function. This was way easy. Now JART checks about 2240 distinct data columns across 113 reports for this issue.

I use said function to also look for the “No Data Found” string. If I find this string, it means the report results returned but, other than the cover page, no data matching the search criteria exists. This check gets reported as a “Warning” or inconclusive result. I use it to help me determine when I need to adjust my filter criteria parameters on each report.
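
For what it’s worth, here is a minimal sketch of the idea in Python (purely my own illustration; JART itself automates Notepad’s Find from QuickTest Pro). The results folder and file naming are hypothetical stand-ins; the two sentinel strings are the ones described above.

    from pathlib import Path

    RESULTS_DIR = Path(r"C:\JART\ReportResults")  # hypothetical export folder

    def search_report_results_for_string(report_text, needle):
        # True if the sentinel string appears anywhere in the exported report text.
        return needle in report_text

    for result_file in sorted(RESULTS_DIR.glob("*.txt")):
        text = result_file.read_text(errors="ignore")

        if search_report_results_for_string(text, "[Not Found]"):
            print(f"FAIL    {result_file.name}: field mapping or datasource problem")
        elif search_report_results_for_string(text, "No Data Found"):
            # Report executed, but nothing matched the filter criteria -- inconclusive.
            print(f"WARNING {result_file.name}: no data matched the filter criteria")
        else:
            print(f"PASS    {result_file.name}")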

The second expansion is to build a routine that saves off the report results of each iteration as “Iteration N Baselines”. Then, using the same report filter criteria and same data store, save off the next iteration’s report results as “Iteration N+1 Baselines”. Once JART has at least two iterations of baselines, JART will compare the files from each baseline to ensure nothing has changed. If I can pull this off, it will be HUGE. I’m expecting it to only support about 75% of my reports. The other 25% is time-sensitive data that may not have a way to remain frozen in time.

Yesterday, a dev pointed out that the reports display the report execution date (i.e., the current date) in the header and footer. Thus, they should never match. Ooops! I think I can still work around it, but it does complicate things a bit.
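
Here is a minimal sketch (Python, purely my own illustration; JART itself is QuickTest Pro) of how that baseline comparison and date workaround might look. The folder names are hypothetical, and masking anything date-like is just one possible way to keep the execution date in the header and footer from causing false mismatches.

    import re
    from pathlib import Path

    BASELINE_N  = Path(r"C:\JART\Baselines\Iteration7")   # hypothetical "Iteration N" folder
    BASELINE_N1 = Path(r"C:\JART\Baselines\Iteration8")   # hypothetical "Iteration N+1" folder
    DATE = re.compile(r"\d{1,2}/\d{1,2}/\d{4}")           # matches report execution dates

    def comparable_lines(path):
        # Return the report's lines with anything date-like masked, so the execution
        # date in the header/footer cannot cause a false mismatch.
        return [DATE.sub("<DATE>", line) for line in path.read_text(errors="ignore").splitlines()]

    for old_file in sorted(BASELINE_N.glob("*.txt")):
        new_file = BASELINE_N1 / old_file.name
        if not new_file.exists():
            print(f"MISSING {old_file.name}: report absent from the newer baseline")
        elif comparable_lines(old_file) != comparable_lines(new_file):
            print(f"CHANGED {old_file.name}: results differ between iterations")
        else:
            print(f"SAME    {old_file.name}")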

My former manager and esteemed colleague asked me to teach a two hour class about Session Based Testing (SBT). We had tried SBT a couple years ago, when I was fresh out of Michael Bolton’s excellent Rapid Software Testing course.

I was nervous as hell about the class because most of the testers I work with were signed up and I knew this was an opportunity to either inspire some great testing or look like a fool. So I spent several weeks researching and practicing what I would teach. I decided an Exploratory Testing (ET) primer was necessary for this audience before SBT could be explained properly.

ET proved to be the most intimidating subject to explain. Most of what I found was explained by members of the Context-Driven School (e.g., James and Jon Bach). Nearly everything I found NOT explained by members of the Context-Driven School was heavily criticized (by members of the Context-Driven School) for not being true ET. With all this confusion over what ET actually is, one wonders how well the Context-Driven School has explained what they mean. I found various statements from videos, blogs, papers, and my RST courseware that ranged from...

  • It’s a technique…no it’s a method…no it’s a “way of testing”.
  • It’s the opposite of scripting…no, it can be used with scripting too, even while automating.
  • All testers use ET to some extent…no wait, most testers aren’t using it because they don’t understand it.
After (hopefully) explaining ET, I was easily able to transition into SBT, making the case that SBT solves so many of the problems introduced by poorly conducted ET (e.g., lack of artifacts and organization). I explained the essential ingredients of SBT:
  • Time Boxing
  • Missions
  • Capturing Notes, Bugs, and Issues
  • Debriefing
Then I demonstrated my favorite SBT tools:
In the end, about half the audience appeared lukewarm while the other half appeared skeptical to confused. I blame it on my own delivery. I think more light bulbs went off during the ET section. SBT takes a bit more investment in thought to get.

For myself, however, the class was a success. Ever since my research, I’ve actually been using SBT and I love it! I also have some better ideas on how to teach it if I ever get a second chance. Special thanks to Michael Bolton and James Bach, who continue to influence my testing thoughts in more ways than anyone (other than myself).

“Yikes! The AUT just shut itself down… maybe I clicked the close button without realizing it. Oh well.”

“Rad! Awesome error! …I’ll worry about it later because I promised the devs I would finish testing the feature changes by close of business today.”

“Man, that’s a nasty error…I think I overheard some testers talking about a data refresh. I’m sure the data refresh is causing this error. I’ve got other things to worry about.”

“Dude, I keep seeing this annoying bug. Fortunately, I know the workaround. I’m sure another tester has already logged it. Moving on...”


Note to self,

Stop assuming system errors and strange behavior have already been logged or are due to some QA environment maintenance. What if everyone else is assuming the same thing? Problems get annoying, even for testers (shhhh, don't tell anyone). It feels great to ignore seemingly lesser problems in order to focus on seemingly greater problems. But I am being paid to keep track of all problems. Nobody said it was easy.

(That's a picture of a stale hamburger bun. My wife hates when I store bread on top of the refrigerator)


Think about your AUT. Is it possible for a user to see something on their screen that has been altered since the last time their screen refreshed? If so, you’re in luck. You can execute some stale data tests that may be fruitful.


Think of an item that can be edited. Let's call it "ItemA".

  1. UserA opens the UI, and sees ItemA.
  2. UserB opens the UI, and modifies ItemA. (now UserA is looking at a stale item.)
  3. UserA attempts to modify ItemA.
Expected Results: UserA is not able to modify the stale version of ItemA. A user message explains this to the user and helps them figure out what to do next.
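
To make the expected result concrete, here is a minimal sketch (Python, purely my own illustration) of one common way an AUT might catch the stale edit: an optimistic version check. The Item class and version numbers are hypothetical; a real AUT might use timestamps, row versions, or ETags instead, and much depends on how locking is handled (see below).

    class StaleItemError(Exception):
        # Raised when a user tries to save over a version they never saw.
        pass

    class Item:
        def __init__(self, value):
            self.value = value
            self.version = 1

        def save(self, new_value, version_seen_by_user):
            if version_seen_by_user != self.version:
                # Reject the stale edit and explain, rather than silently overwriting.
                raise StaleItemError(
                    f"This item changed since you opened it (you saw v{version_seen_by_user}, "
                    f"current is v{self.version}). Refresh and try again.")
            self.value = new_value
            self.version += 1

    item_a = Item("original")
    version_user_a_saw = item_a.version                  # 1. UserA opens the UI and sees ItemA
    item_a.save("UserB's edit", item_a.version)          # 2. UserB modifies ItemA
    try:
        item_a.save("UserA's edit", version_user_a_saw)  # 3. UserA attempts to modify stale ItemA
    except StaleItemError as err:
        print(f"Expected result: {err}")                 # helpful message, not a silent overwrite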


Much of this will depend on how locking is handled by your AUT. However, if you’re creative enough with various types of updates to ItemA, you’ll probably find some unhandled scenarios. For example: can you drag and drop ItemA? Can you delete it? Does ItemB reference a stale version of ItemA (mess with ItemB then)?

If you are able to find a bug with this technique, please share your test as a comment.

Happy testing!

"Agile"? Ohhhh, what is this "agile" stuff you speak of?

Look at the topics of most testing/dev conferences, webinars, blogs or tweets. Can you find the word “Agile” in there? I’ll bet you can. I was excited about it five years ago and I thought it would have a huge impact on my software testing challenges. It has not.

Testers still have to look at a chunk of software and figure out how to test it. This is still the most challenging activity we face every day. When we find a problem, it doesn’t matter if you want us to log a bug, not close a story, stick it on the wall with a Post-It Note, or whisper it in a developer’s ear. The same testing must occur to find the problem. Everything else is what you do before or after the test.

The grass always looks greener on the other side of the fence. But once you hop the fence you’ll realize it is just as brown.

After reading Adam Goucher’s review of Malcolm Gladwell’s book, Blink, and hearing it recommended by other testers, I finally read it.

Some people (like Adam) are good at learning about testing by drawing parallels from non-testing material (like dirt bike magazines). I guess I’m not as good at this. Although I did enjoy Blink, it certainly did not provide me with as many “aha!” testing moments as I’ve heard other testers suggest. I learned a bit about marketing, racism, and health care, but not too much about testing. And I felt like many of the stories and studies were things I already knew (sorry, I'm not being very humble).

In addition to Adam's test-related discoveries, here are a couple additional ones I scraped up:

  • Although it was an awesome breakthrough in office chairs, and completely functional, people hated the Herman Miller Aeron chairs. At first, the chairs didn’t sell. What did people hate? They hated the way the chairs looked. People thought they looked flimsy and not very executive-like. After several cosmetic changes, people began accepting the chairs and now the chairs are hugely popular. Sadly, this is how users approach new software. No matter how efficient it is, they want the UI to look and feel a way they are familiar with. As testers, we may want to point out areas we think users will dislike. We can determine these by staying in touch with our own first-time reactions.

  • Blink describes an experiment where in one case, customers at a grocery store were offered two samples of jam. In a second case, customers were offered about 10 samples of jam. Which case do you think sold more jam? The first case. When people are given too much information, it takes too much work for them to make decisions. What does this have to do with testing? From a usability standpoint, testers can identify functionality that may overload users with too many decisions at one time. The iPhone got this one right.
We always hear the complaint that testers who don't read books must not be any good at testing. For fear of falling into this category, I've recently read some other books that are actually about software testing. These books have not been as useful as the ideas I stumble upon myself while I'm in the trenches. But perhaps knowing these books are unsatisfying is helpful, because I know there are no easy answers out there for the problems I face every day.

A day went by without finding bugs in my AUT.

When I got home, as if desperate for bugs, I noticed one in the kitchen. I wanted to squash it but I know better. I controlled myself. I stood back and observed the bug (…an ant). I wondered how it got there. If one bug got in, there would probably be more. I noticed four more bugs over by the window. Ah hah! I’ll focus my efforts near the window. Perhaps I could draw out more bugs. I’ll give these bugs a reason to show up: several drops of tasty ant poison.



Ah yes, here they come, from under the window. Now that I know where they came from, I can patch the hole in the window to prevent further infestations from that oversight. In the meantime, these bugs will happily bring the poison back to their nest and will probably not return for awhile. Nevertheless, every so often, I will check.

Successful test automation is the elephant in the room for many testers. We all want to do it because manual testing is hard, our manager and devs would think we were bad-ass, and…oh yeah, some of us believe it would improve our AUT quality. We fantasize about triggering our automated test stack and going home, while the manual testers toil away. We would even let them kiss the tips of our fingers as we walked out the door.

…sounds good.

So we (testers) make an attempt at automation, exaggerate the success, then eventually feel like losers. We spend more time trying to get the darn thing to run unattended and stop flagging false bugs, while the quality of our tests takes a back seat and our available test time shrinks.

We were testing one product. Now we are testing two.

The two obvious problems are: 1.) most of us are not developers, and 2.) writing a program to test another program is more difficult than writing the original program. ...Ah yes, a match made in heaven!

I watched an automated testing webinar last week. It was more honest than I expected. The claim was that, to be successful at test automation, the team should not expect existing testers to start automating tests. Instead, a new team of developers should be added to automate tests that the testers write. This new team would have their own requirement reviews, manage their own code base, and have their own testers to test their test automation stack. This does not sound cheap!

While watching this webinar, something occurred to me. Maybe we don’t need test automation. Why do I think this? Simple. Because somehow my team is managing to release successful software to the company without it. There is no test automation team on our payroll. Regression testing is spotty at best, yet somehow our team is considered a model of success within the company. How is this possible when every other test tool spam email or blog post I read makes some reference to test automation?

In my case, I believe a few things have made this possible:

  • The devs are talented and organized enough to minimize the amount of stuff they break with new builds. This makes regression testing less important for us testers.
  • The BAs are talented enough to understand how new features impact existing features.
  • The testers are talented enough to know where to look. And they work closely with devs and BAs to determine how stuff should work.
  • The user support team is highly accessible to users, knowledgeable about the AUT and the business, and works closely with the BAs/devs/testers to get the right prod bugs patched quickly. The entire team is committed to serving the users.
  • The users are sophisticated enough to communicate bug details and use workarounds when waiting on fixes. The users like us because we make their jobs easier. The users want us to succeed so we can keep making their jobs easier.
  • The possibility of prod bugs resulting in death, loss of customers, or other massive financial loss is slim to none.
I suspect a great deal of software teams are similar to mine. I'm interested in hearing from other software teams that do not depend on tester-driven test automation.

I do use test automation to help me with one of my simple AUTs which happens to lend itself to automated testing (see JART). However, from my experiences, there are few apps that are easy to automate with simple checks.

(these are taken from my real experiences over the past week or so)

You know you’re in trouble when…

  • Your dev says “I copied these statements from another developer. They’re too complex to explain.”
  • As you begin demoing strange AUT behavior to your dev, your dev drops a sharp F-bomb followed by a sigh.
  • You ask your dev what needs to be regression tested on account of their bug fix. They say “everything”.
  • After a migration you see an email from dev to DBA. The DBA responds “What are these delta scripts you speak of?”.
  • Your devs drop a prod patch at 5PM on a Friday as they all head home.
  • Dev says “Please try to repro the bug again, I didn’t do anything to fix it…I’m just hoping it got indirectly fixed”
  • Dev says “I marked the bug fixed but I have no way to test it.”
  • After a week of chasing and logging nasty intermittent bugs, you start seeing emails from your devs to your config managers saying stuff like “Why are these QA service endpoints still pointing to the old QA server?”
  • Your Config Manager says “Did you sanity test that patch I rolled out to prod when you were at lunch?”.
  • Your dev says “we don’t really care if the code we write is testable or not”.
  • Your bug gets rejected with the comment “It works on my box”.
What's on your list?

Test Manager: Remember, we're Software Testers, not some sorry-ass QA Analysts. We're elite. Let's act like it out there. Hoo-ah?

Testers: Hoo-ah!

You arm yourself with TestA and prepare to battle your AUT.

Take a deep breath, head into the AUT, and begin executing TestA. Prior to observing your expected results, you determine TestA is blocked from further execution (call it BlockageA). You make a note to investigate BlockageA later. You modify TestA slightly to give it a workaround in an attempt to avoid BlockageA. TestA encounters BlockageB. Now you decide to deal with BlockageB because you are out of workarounds. Is BlockageB a bug? You can’t find the specs related to BlockageB. After an hour, your BA finds the specs and you determine BlockageB is a bug (BugB). You check the bug DB to see if this bug has already been logged. You search the bug DB and find BugC, which is eerily similar but has different repro steps than your BugB. Not wanting to log dupe bugs you perform tests related to BugB and BugC to determine if they are the same. Finally you decide to log your new bug, BugB. One week later BugB gets rejected because it was “by design”; the BA forgot to update the feature but verbally discussed it with dev. Meanwhile, you log a bug for BlockageA and notice four other potential problems while doing so. These four potential problems are lost because you forgot to write a follow-up reminder note to yourself. Weeks later BlockageA is fixed. You somehow stayed organized enough to know TestA can finally be executed. You execute TestA and it fails. You log BugD. BugD is rejected because TestA’s feature got moved to a future build but dev forgot to tell you. Months later, TestA is up for execution again. TestA fails and you log BugE. The dev can’t repro BugE because their dev environment is inadequate for testing. Dev asks tester to repro BugE. BugE does not repro because you missed an important repro step. Now you are back at the beginning.

You’ve just experienced the "fog of test".

The "fog of test" is a term used to describe the level of ambiguity in situational awareness experienced by participants in testing operations. The term seeks to capture the uncertainty regarding own capability, AUT capability and stakeholder intent during an engagement or test cycle. (A little twist on the “Fog of war” Wikipedia entry)

Many (if not most) test teams claim to perform test case reviews. The value seems obvious, right? Make sure the tester does not miss anything important. I think this is the conventional wisdom. On my team, the review is performed by a Stakeholder, BA, or Dev.

Valuable? Sure. But how valuable compared to testing itself? Here are the problems I have with Test Case Reviews:

  • In order to have a test case review in the first place, one must have test cases. Sometimes I don’t have test cases…
  • In order for a non-tester to review my test cases, the test cases must contain extra detail meant to make the test meaningful to non-testers. IMO, detailed test cases are a huge waste of time, and often of little value or even misleading in the end.
  • From my experiences, the tests often suggested by non-testers are poorly designed tests or tests already covered by existing tests. This becomes incredibly awkward. If I argue or refuse to add said tests, I look bad. Thus, I often just go through the motions and pretend I executed the poorly conceived tests. This is bad too. Developers are the exception, here. In most cases, they get it.
  • Forcing me to formally review my test cases with others is demeaning. Aren’t I getting paid to know how to test something? When I execute or plan my tests, I question the oracles on my own. For the most part, I’m smart enough to know when I don’t understand how to test something. In those cases, I ask. Isn’t that what I’m being paid for?
  • Stakeholders, BAs, or Devs hate reading test cases. Zzzzzzzzzz. And I hate asking them to take time out of their busy days to read mine.
  • Test Case Reviews subtract from my available test time. If you’ve been reading my blog, you know my strong feelings on this. There are countless activities expected of testers that do not involve operating the product. This, I believe, is partly because testing effectiveness is so difficult to quantify. People would rather track something simple like, was the test case review completed? Yes or No.
I’m interested in knowing how many of you (testers) actually perform Test Case Reviews on a regular basis, and how you conduct the review itself.

Think of a bug…any bug. Call it BugA. Now try to think of other bugs that could be caused by BugA. Those other bugs are what I call “Follow-On Bugs”. Now forget about those other bugs. Instead, go find BugB.

I first heard Michael Hunter (AKA “Micahel”, “The Braidy Tester”) use the similar term, “Follow-on Failures”, in a blog post. Ever since, I’ve used the term “Follow-On Bugs”, though I never hear other testers discuss these. If I’m missing a better term for these, let me know. “Down-stream bugs” is not a bad term either.

Whatever we call these, I firmly believe a key to knowing which tests to execute in the current build, is to be aware of follow-on bugs. Don’t log them. The more knowledgeable you become about your AUT, the better you will identify follow-on bugs. If you’re not sure, ask your devs.
Good testers have more tests than time to execute them. Follow-on bugs may waste time. I share more detail about this in my testing new features faster post.

I’ve seen testers get into a zone where they keep logging follow-on bugs into the bug tracking system. This is fine if there are no other important tests left. However, I’ll bet there are. Bugs that will indirectly get fixed by other bugs mostly just create administrative work, which subtracts from our available time to code and test.

The Fantasy of Attending all Design and Feature Review Meetings

Most testers find themselves outnumbered by devs. In my case it’s about 10 to 1. (The preferred ratio is a tired discussion I’d like to avoid in this post.)

Instead, I would like to gripe about a problem I’ve noticed as I accumulate more projects to test. Assuming my ten devs are spread between five projects (or app modules), each dev must attend only the Feature Review/Design meetings for the project they are responsible for. However, the tester must attend all five. Do you see a problem here?

Let’s do the math for a 40 hour work week.

If each project’s Feature Review/Design meetings consume eight hours per week, each dev will have 32 hours left to write code. Each tester is left with ZERO hours to test code!

The above scenario is not that much of an exaggeration for my team. The tester has no choice but to skip some of these meetings just to squeeze in a little testing. The tester is expected to "stay in the know" about all projects (and how those projects integrate with each other), while the dev can often focus on a single project.

I think the above problem is an oversight of many managers. I doubt it gets noticed because the testers' time is being nickel and dimed away. Yet most testers and managers will tell you, “It’s a no-brainer! The tester should attend all design reviews and feature walkthroughs…testing should start as early as possible”. I agree. But it is an irrational expectation if you staff your team like this.

In a future post, I'll share my techniques for being a successful tester in the above environment. Feel free to share yours.

“Showstopper”. This means the show cannot go on. I guess the full metaphor is something like… the star of the show has lost her voice! The show must be stopped until her understudy has been located…or in our case, until the bug is fixed.

I’ve always hated the label “Showstopper”. I tried to convince my previous manager not to use it. I half-seriously argued it was a theater metaphor and theater metaphors don’t get used for other bug priorities. Well, if some insist upon using this theater metaphor, perhaps we should incorporate other theater metaphors into software testing and development.

  • Maybe we should classify our second priority bugs as “Technical Difficulties” (i.e., the light board blew a fuse but the stage crew determines they can use the house lights to keep the performance going…a workaround.)
  • The third priority bugs would be called “Missed Lines” (i.e., an actor forgot a line but the other actors easily improvise and no critical story essentials are missing.)
  • And finally, “Mediocre Set Design” (i.e., the set is barebones and unconvincing but with a little imagination, the audience can still enjoy the story.)
And why stop with just bug priorities...
  • Instead of the User Acceptance Test phase we should call it “Dress Rehearsal”.
  • “Opening Night” is the night we deploy a new release to production.
  • When our users open Task Manager to force quit our app, they are “Cutting Out At Intermission”.
  • When the tester gets a perpetual hour-glass, the devs can say the feature got “Stage Fright”.
  • We can make our open bug list public and call it “Breaking the fourth wall”.
  • As CM kicks off the build, we’ll remind them to “Break a Leg”.
  • If our users ask for more features, we’ll bow and consider it a “Standing Ovation”.
  • And our dev who is always throwing in those extra features nobody asked for, they can be the team “Prima Donna” or “Divo”.
  • And finally, if our load testing was done poorly, we may end up with long lines of people waiting to use theater bathrooms. Some of these queues may get quite long. Eventually, people may wait so long they time out….er…"Pee Their Pants”.

So you’ve got 10 new features to test in about 25% of the time you asked for…just another day in the life of a tester. How do you approach this effort?

Here is how I approach it.

  • First, I sift through the 10 features and pick out the one that will have the most critical bugs (call it FeatureA). I test FeatureA and log two or three critical bugs.
  • Next, I drop FeatureA and repeat the above for the feature that will have the next most critical bugs (call it FeatureB). I know FeatureA has undiscovered bugs. But I also know FeatureA’s critical bug fixes will trigger FeatureA testing all over again. I also assume some undiscovered FeatureA bugs will be indirectly fixed by the first batch of bug fixes. I am careful not to waste time logging “follow-on bugs”.
  • When bug fixes are released, I ignore them. I repeat the above until I have tested all 10 new Features with the first pass.
  • At this point something important has occurred. The devs and BAs know the general state of what they are most interested in.
  • Finally, I repeat the above with additional passes, verifying bug fixes with each feature. As the features gradually become verified I communicate this to the team by giving the features a status of “Verified”. I use my remaining time to dig deeper on the weak features.

Okay, nothing breakthrough here, but there are two tricks that should stand out in the above.

Trick 1 – Don’t spend too much time on individual features in the first pass. You want to provide the best info to your devs as early as possible for all 10 Features. It’s way too easy to run out of time by picking one Feature clean.

Trick 2 – Ignore those bug fixes until you get through your first pass with all 10 Features. I know it’s hard. You’re so anxious to see the damn thing fixed. However, IMO, the unknowns of untested Features are more valuable to chase down than the unknowns of whether bugs are fixed. In my experiences, when I log bugs well, verifying them is a brainless rubber-stamping activity.

How do you get the job done?

An important bug got rejected by dev today. It was my fault.

I included an incorrect solution to the problem. Rather than describing the bug and calling it quits, I went further and described (what I believed to be) the right solution. The dev rejected it because my solution was flawed. The dev was correct…a bit lazy perhaps, but correct.

The main purpose of a bug is to identify a problem, not to specify a solution. I think it’s okay for testers to offer suggested solutions but they should be careful how they word the bug.

For example, if tester logs this…

Expected Results: A
Actual Results: B because D needs to be used to determine E, F, and G. Please modify the operation to use D when determining E, F, and G.


Dev may read it and think, modifying the operation to use D will not work…I’ll have to reject this bug. ….um, what about the problem?

A better bug would have been the following:

Expected Results: A
Actual Results: B


Let the dev figure out how to get to A. If you have repro steps and other suitable details, the dev will probably know how to fix it. If they don’t, they know who to ask for assistance. It may even be the tester!

Am I right?

After reading Tobias Mayer’s Test(osterone)-infected Developers, I noticed my test team has 3 men and 8 women, while my dev team has 30 men and 2 women. This is a small sample but I agree with Tobias that it is the norm.

Are she-testers better testers or just more interested in testing? This is a tired blogosphere discussion but a more interesting question is:


Do she-testers have unique skills worth harnessing?


My answer is, yes. I think women have at least one powerful advantage over men when it comes to testing. They are arguably better at observing subtle clues.

Most differences between men and women can be understood by noting their strongest biological roles. Women have babies! Thus, women are wired to pay attention to their babies and identify problems based on subtle expressions or behavior changes (e.g., baby is sick). I've heard women are better than men at determining if someone is lying, based on the same biological reasons.

Yesterday, while observing a premature field population UI bug, a she-tester on my team noticed the larger problem (that I missed). Previously populated data was getting erased. Of course, this may have just been a case of “two heads are better than one”, but my she-testers always impress with their subtle observations.

What differences have you observed between men and women testers? Can we use these differences to build a better test team?

Tuesday night I had the pleasure of dining with famed Canadian tester Adam Goucher at Figo Pasta in the Atlanta suburb of Vinings. Adam was in town for training and looking for other testers to meet. Joining us was soon-to-be-famed Marlena Compton, another Atlanta-based tester like myself (and long time caver friend of mine).

Like other testers from Toronto I have met (e.g., Michael Bolton, Adam White), Adam Goucher was inspirational, full of good ideas, fond of debate, and a real pleasure to talk to. I kick myself for not taking notes but I didn’t want to look like an A-hole across the table from him.

Here are some of last night’s discussions I enjoyed… (most of these are Adam's opinions or advice)


  • Determine what type of testing you are an expert on and teach it. He claims to be an expert on testing for international language compatibility (or something like that). He made me squirm attempting to tell him what I was an expert on...I'll have to work on this.

  • All testers should be able to read code.

  • Kanban flavor of Agile.

  • When asked about software testing career paths, he says to think hard and decide which you prefer: helping other testers to test, or executing tests on your own. He prefers the former.

  • A good test team lead should learn a little bit about everything that needs to be tested. This will help the team lead stay in touch with the team and provide backup support when a tester is out of the office.

  • Start a local tester club that meets every month over dinner and beer to discuss testing.

  • Pick some themes for your test blog (Adam’s are learning about testing through sports, and poor leadership as an impediment to better quality).

  • Join AST. Take the free training. Talk at CAST and embrace the arguments against your talk.

  • Tester politics. They exist. Adam experienced them first hand while working on his book.

  • Four schools of testing, who fits where? What do these schools tell us?

  • The latest happenings with James Bach and James Whittaker.

  • Rapid Software Testing training and how much it costs (I remember it being inexpensive and worth every penny).

  • Folklore-ish release to prod success stories (Flickr having some kind of record for releasing 56 regression tested builds to prod in one day).

  • He nearly convinced me that my theory (that successful, continuous, sustained regression testing is impossible with fixed software additions) was flawed. I’ll have to post about it later.

  • Horses are expensive pets. (you’ll have to ask Adam about this)

  • He informed me that half of all doctors are less qualified than the top 50%.

  • Read test-related books (e.g., Blink, Practical Unit Testing or something…I should have taken notes. Sheesh, I guess I wasn't interested in reading the books. Shame on me. Maybe Adam will respond with his favorite test-related books).
  • The fastest way to renew your passport. Surely there were some missed test scenarios in Adam's all-night struggle to get to Atlanta.

I'm sure I forgot lots of juicy stuff, but that's what I remember now. Adam inspired me and I have several ideas to experiment with. I'll be posting on these in the future. Thanks, Adam!

To contrast my last post, I thought it would be fun to list when I feel good as a tester. See if you relate.

  • I ask my devs so many questions they begin discovering their own bugs during the interrogation.
  • A BA spends hours executing redundant tests to verify code I know is reused. I verify it with a simple test and move on to more important tests.
  • A persistent whiteboard discussion finally sifts out the worthless tests and the worthwhile tests present themselves.
  • A dev cannot test their own code after integration because they don’t understand how the application works in its entirety.
  • The project manager looks at me and asks which parts of the release are ready for prod. I have an answer prepared with tests and bugs to back me up.
  • I write tests in my head for non-AUTs. When I can’t resist, I execute one and find a “bug in the wild”.
  • A BA sends me an email with about 20 repro steps. I determine 90% of their observations to be irrelevant and reduce it to two repro steps; the dev is grateful.
  • I write a program that provides valuable feedback on the quality state of an AUT after running unattended.
  • Another tester asks me for advice on how to test something complex.
  • During bug review meetings, the stakeholders bump most of my bugs up to “Showstoppers”.
  • I find a bug that leads me to accurately predict several more.
  • I execute tests that find problems, before the UI is developed.

What makes you feel good as a tester?

As a tester, while striving for the impossible goal of perfect software, I sometimes feel stupid. How valuable am I to the team? Do I really have any hard skills different than the next guy? Am I a testing failure?

I feel stupid when…

  • production bugs have to be patched (the kind I should have caught).
  • devs talk about code or architecture I don’t understand.
  • non-testers log bugs.
  • I have to execute brainless tests that the guy on the street could execute.
  • I can’t remember if I tested a certain scenario and my executed test documentation is incomplete.
  • the team celebrates individual dev accomplishments for feature sets and QA is not recognized.
  • my bug is rejected by dev for a legitimate reason.
  • I read a software testing blog post about some tester with 95% of her tests automated.

As a fellow tester, maybe you have felt stupid at times too. Feeling stupid is not fun and eventually will lead to disliking your job. I guess there are two solutions: 1.) find a new job, or 2.) try not to feel stupid.

I talk my way out of feeling stupid as a tester the same way I do outside of work during conversations with doctors, physicists, CEOs or other potentially intimidating experts of some field. I remember that everyone is an expert at something…just something different. In the examination room, the doctor may be the expert at prescribing the treatment, but put the doctor and me at the bottom of a 300-foot-deep pit in a wet cave, and suddenly the doctor is asking me for help (I’m a caver).

When it comes to testing, we don’t know the same things the developers or BAs know, but we shouldn’t feel stupid about it. It doesn’t mean we should stop learning; we just need to put things in perspective instead of feeling inadequate. Faking your knowledge is way worse than saying “I don’t know”.

Don’t second guess your skills as a tester.

In a future post, I'll tell you when I feel awesome as a tester.


“Added a validation in the service codes to make sure either "DSP" or "Already in GE" is true for Accession.”


Do your devs write bug resolution comments? If not, ask for them. All bug tracking systems have a spot for devs to write notes about what they did to fix the bug. It’s tough to get devs to write their resolution but it’s well worth the pain.

When you verify fixed bugs you should be asking yourself what regression testing is necessary; what did this bug fix break? If you don’t know how the dev fixed it, you don’t really know what could have broken as a result. Sometimes, devs are cool enough to provide tips of where other bugs may be lurking…


“This problem is bigger than the shoot value hiding. In fact, the entire preCat is not reloading after a save which is hiding other bugs (UI refresh and service layer issues.)”


Some of my devs are terrible at writing resolution comments. Sometimes I have no idea what they did to fix it. My emotions tell me to Reopen the bug. But after listening to my brain, and asking the dev what they did, I usually discover an unexpected fix that still solves the problem. Dev comments would have been nice.

You may also discover your devs share some of your angst when dealing with scatterbrained stakeholders, as is evident in comments I’ve recently seen, such as the following:


“UI now allows multiple preCats and DSP check box, even though earlier requirements conflict with both of these.”

“Yet another requirements change. The need dsp check box has always been there. It was expressed that it should only be visible when there IS and accession, but I guess we are changing that.”


Poor dev guy. I feel your pain. I miss the good old pre-agile days too.

One of my new AUTs has stakeholders who are obsessed with cosmetics. Despite having an AUT full of business process gaps and showstopper bugs, during stakeholder meetings their first priority is to rattle off a list of all the cosmetic things they want changed. For example:

  • titles should left align
  • read-only fields should be borderless
  • certain fields should be bigger/smaller
  • less white space
  • no scroll bars
  • don’t like text color or font
  • buttons should be same width regardless of their names

Theoretically, Agile is supposed to address this kind of perpetual scope creep. But I hate it because even after listening to the stakeholders, it still becomes awkward for the dev to code and the tester to verify.

Something truly lacking in custom in-house (not shrink-wrap) apps is the ability for users to customize UIs until they’ve bored themselves to death. I’ve never been one to bother to “change skins” on my apps or even to change my desktop background. But cosmetics is a major concern for some users. Forcing me to test someone’s notion of what they think looks good on a UI is not interesting to me, as a tester. Let’s write software that lets users make their own cosmetic changes on their own time. I’ll test its engine. That sounds interesting.

JART found a bug this morning. But it wasn't in my AUT.

JART had been happily smoke testing our pre-production environment this morning for an hour. I was eagerly awaiting the results for a group of anxious managers. After seeing QTP’s auto-generated test results consistently for 144 previous test runs, QTP suddenly decided to give me this instead of the results:



I didn’t change any versions of any software on this box, of course. After waiting another hour while JART repeated all the tests, the next results file was fine. …Annoying.

We’ve been spinning our wheels investigating a prod bug that corrupted some data yesterday. Once we cracked it, we realized the bug had been found and fixed more than a year ago. …Depressing. My first thought? Why didn’t I catch this when it broke?

Perfecting regression testing is a seemingly impossible task. Some of you are thinking, “just use test automation...that's what they said in that Agile webinar I just attended”. If my team had the bandwidth to automate every test case and bug we conceived of, the automation stack would require an even larger team to maintain. And it would need its own dedicated test team to ensure it properly executed all tests.

It’s even more frustrating if you remove the option of automated regression testing. Each test cycle would need to increase by the same amount of time it took to test the new features in the last build, right? So if iteration 4 is a two-week iteration and I spend a week testing new features, then iteration 5 needs to be a three-week iteration; I’ll need that extra week so I can run all the iteration 4 tests again. They’ll give me eight weeks to test iteration 10, right?

Wrong? You mean I have the same amount of test time each iteration, even though the number of tests I have to execute is significantly increasing? This is a reality that somehow we all deal with.

Obviously, none of us have "perfect" regression testing. The goal is probably "good enough" but the notion of improving it is probably driving you crazy, as it is me. This topic is glossed over so much, I wonder how many testers have an effective strategy.

What is your regression test strategy?

During a recent phone call with Adam White, he said something I can’t stop thinking about. Adam recently took his test team through an exercise to track how much of their day was actually spent testing. The results were scary. Then Adam said it, “If you’re not operating the product, you’re not testing”…I can’t get that out of my head.

Each day I find myself falling behind on tests I wanted to execute. Then I typically fulfill one of the following obligations:

  • Requirement walkthrough meetings
  • System design meetings
  • Writing test cases
  • Test case review meetings
  • Creating test data and preparing for a test
  • Troubleshooting build issues
  • Writing detailed bug reports
  • Bug review meetings
  • Meetings with devs b/c tester doesn’t understand implementation
  • Meetings with devs b/c developer doesn’t understand bug
  • Meetings with business b/c requirement gaps are discovered
  • Collecting and reporting quality metrics
  • Managing official tickets to push bits between various environments and satisfy SOX compliance
  • Updating status and other values of tested requirement, test case, and bug entities
  • Attempting to capture executed exploratory tests
  • Responding to important emails (which arrive several per minute)

Nope, I don’t see "testing" anywhere in that list. Testing is what I attempt to squeeze in everyday between this other stuff. I want to change this. Any suggestions? Can anyone relate?


If your test automation doesn’t verify anything useful, it is essentially worthless. First, there are some basic tests I decided to programmatically verify with JART. These are tests that often fail during manual testing.

  • Can I access the report app for a given environment?
  • Does a working link to each report exist?
  • Does each report’s filter page display?
  • Does each report’s filter page display the expected filter controls I care about?

The above can be verified without even executing any reports. Piece of cake!
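
For illustration only (JART itself is written in QuickTest Pro, not Python), here is a minimal Selenium-flavored sketch of those four checks. The URL, report names, and filter control IDs are hypothetical stand-ins.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    REPORT_APP_URL = "http://qa-server/reports"   # hypothetical QA environment URL
    EXPECTED_REPORTS = {                          # hypothetical report names and filter control IDs
        "Daily Accession Summary": ["StartDate", "EndDate", "Facility"],
        "Open Items By User": ["StartDate", "User"],
    }

    driver = webdriver.Chrome()
    failures = []

    driver.get(REPORT_APP_URL)                    # 1. Can I access the report app?

    for report_name, expected_filters in EXPECTED_REPORTS.items():
        links = driver.find_elements(By.LINK_TEXT, report_name)
        if not links:                             # 2. Does a working link to the report exist?
            failures.append(f"Missing link: {report_name}")
            continue

        links[0].click()                          # 3. Does the report's filter page display?

        for control_id in expected_filters:       # 4. Are the filter controls I care about present?
            if not driver.find_elements(By.ID, control_id):
                failures.append(f"{report_name}: missing filter control '{control_id}'")

        driver.back()                             # back to the report list for the next report

    driver.quit()
    print("\n".join(failures) if failures else "All basic checks passed")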

Next, I need to verify each report executes with some flavor of expected results. Now I’m bumping it up a notch. There is an unlimited number of results I can expect for each report, and these all require knowledge or control of complex reportable business data. This also means I have to examine the report results, right? My AUT uses MS ActiveReports and displays results in an object not recognized by QuickTest Pro. According to the good folks at SQA Forums, the standard way to extract info from the results is to use the AcrobatReaderSDK, which I don’t have. The workaround, which I use, is to install a free app that converts PDF files to text files. I wrote a little procedure to save my report results as PDF files, then convert them to text files, which I can examine programmatically via QuickTest Pro. So far, it works great. The only disadvantage is the extra 5 seconds per report conversion.

So what am I examining in the report results for my verifications? So far, I am just looking at each report’s cover page, which displays each specified filter value returned, along with its filter name (e.g., “Start Date = 3/20/2006”). If it returns as expected, I have verified the AUT’s UI is passing the correct filter parameters to the report services. This has been a significant failure point in the past, which is no surprise because the UI devs and service devs are poor communicators with each other.
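
Here is a minimal sketch of that cover page check (Python, purely my own illustration; JART does the equivalent from QuickTest Pro). The converter command line, file paths, and filter values are stand-ins; substitute whichever free PDF-to-text tool you actually installed.

    import subprocess
    from pathlib import Path

    def verify_cover_page(pdf_path, filter_criteria):
        # Convert the saved report PDF to text, then confirm the cover page echoes
        # every "FilterName = value" pair the UI was asked to pass to the report service.
        txt_path = pdf_path.with_suffix(".txt")
        subprocess.run(["pdftotext", str(pdf_path), str(txt_path)], check=True)  # stand-in converter
        cover_text = txt_path.read_text(errors="ignore")
        return [f"{name} = {value}" for name, value in filter_criteria.items()
                if f"{name} = {value}" not in cover_text]   # anything missing is a failure

    missing = verify_cover_page(
        Path(r"C:\JART\ReportResults\SomeReport.pdf"),      # hypothetical saved report result
        {"Start Date": "3/20/2006", "End Date": "3/27/2006"})
    print("PASS" if not missing else f"FAIL, cover page missing: {missing}")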

Currently, JART verifies 59 reports and up to 9 filters on each. It takes about 1 hour to complete. JART is ready to perform my next sanity test when we go live. So far I have put in about 24 hours of JART development.

I’ll discuss the simple error handling JART uses in a future post.

Note: The failures from the test run result summary above were the results of QuickTest Pro not finding the text file containing the converted report results. I couldn’t repro this JART error but now I may have to invest time researching the fluke and determining how to handle it. This is time not spent testing my AUT.

An early decision I had to make was whether I should programmatically determine which reports were available to test and programmatically determine which of their parameters were required, etc… or if I should track my own expectations for the reports I expected and the parameters owned by each. I went with the latter because I don’t trust the devs to keep the former stable. JART needs the ability to determine when the wrong reports get released; a common dev mistake.

Since I have about 150 distinct reports, each with their own combinations of shared filter controls and possible filter values, I made a matrix in MS Excel. The matrix rows represent each report, the columns represent each filter control, and the intersections are the filter values I use to pass into JART for each report’s filter criteria controls. This single spreadsheet controls all the tests JART will execute.

Another advantage, for me, to controlling the tests via an Excel spreadsheet is that my BL already maintains an Excel spreadsheet that specifies which of the 150 reports should be available in each build. The BL’s list can control which reports JART tests, just like the BL's list controlled which reports I tested.

JART simply loops through each report in said matrix and provides standard verifications for each. Verifications are important, and tricky for report AUTs, so I’ll save those for the next post.
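
Here is a minimal sketch (Python with openpyxl, purely my own illustration; JART reads the matrix from QuickTest Pro) of that loop. The workbook path, sheet name, and layout assumptions (report names in column A, filter control names in row 1, filter values at the intersections) are hypothetical.

    from openpyxl import load_workbook

    wb = load_workbook(r"C:\JART\ReportTestMatrix.xlsx", data_only=True)   # hypothetical workbook
    sheet = wb["ReportMatrix"]                                             # hypothetical sheet name

    filter_names = [cell.value for cell in sheet[1]][1:]    # row 1, columns B onward: filter controls

    for row in sheet.iter_rows(min_row=2, values_only=True):
        report_name, *filter_values = row
        if report_name is None:
            continue                                        # skip blank rows
        # Only pass the filters that have a value in this report's row of the matrix.
        filter_criteria = {name: value for name, value in zip(filter_names, filter_values)
                           if value is not None}
        print(f"Testing '{report_name}' with {filter_criteria}")
        # run_report_and_verify(report_name, filter_criteria)  # standard verifications go here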

It’s true what they say; writing automated tests is waaaay more fun than manual testing. Unfortunately, fun does not always translate into value for your team.

After attempting to automate an AUT for several years, I eventually came to the conclusion that it was not the best use of my time. My test team resources, skills, AUT design and complexity, available tools, and UI-heavy WinForm AUT were a poor mix for automated testing. In the end, I had developed a decent framework, but it consisted of only 28 tests that never found bugs and broke every other week.

Recent problems with one of my new AUTs have motivated me to write a custom automated test framework and give the whole automated test thing another whirl.

This new AUT has about 50 reports, each with various filters. I’m seeing a trend where the devs break various reports with every release. Regression testing is as tedious as it gets (completely brainless; perfect to automate) and the devs are gearing up to release another 70 additional reports! …Gulp.

In this case, several aspects are pointing towards automated test potential.

  • The UI is web-based (easier to hook into)
  • The basic executed test is ripe for a data-driven automation framework; crawl through 120 reports and perform nearly the same actions and verifications on each.
  • Most broken report errors (I’m targeting) are objectively easy to identify; a big old nasty error displays.

I wrote the proof of concept framework last week and am trying to nail down some key decisions (e.g., passing in report parameters vs. programmatically determining them). My team needs me to keep testing, so I can only work on automation during my own time…so it’s slow going.

This is my kick-off post. I’ll explain more details in future posts. More importantly, I’ll tell you if it actually adds enough value to justify the time and maintenance it will take. And I promise not to sugar coat my answer, unlike some test automation folks do, IMO.

Oh, I’m calling it JART (Jacobson’s Automated Report Tester). Apparently JART is also an acronym for "Just a Real Tragedy." We’ll see.

During last fall’s STPCon, I attended a session about showing your team the value of testing. It was presented by a guy from Keen Consultants. He showed us countless graphs and charts we could use to communicate the value of testing to the rest of our team. Boring…zzzzzzzz.

In the spirit of my previous post, Can You Judge a Tester by Their Bug List Size?, here is a more creative approach that is way simpler and, IMO, more effective at communicating your value as a tester….wear it!

(I blurred out my AUT name)

You could change it up with the number of tests you executed, if that sounds more impressive to you. Be sure to wear your shirt on a day the users are learning your AUT. That way, you can pop into the training room and introduce yourself to your users. Most of them didn’t even know you existed. They will love you!

Now I just need to come up with an easy way to increase the bug count on my shirts (e.g., velcro numbers). Because, like all good testers know, the shirt is outdated within an hour or so.

Users change their minds. They save new items in your app, then want to delete those saved items and try again. Does your AUT support this behavior? Did you test for it?

My devs recently added a dropdown control with two values (i.e., DVS or ESP). After confirming I could save and change the values, I stopped testing. Later, a user pointed out there is no way to remove the values from that (optional) field if you change your mind. Now, many of our dropdowns look like this:


While testing, I often look for data saving triggers (e.g., a button that says “Save”). Then I ask myself, "Okay, I saved it by mistake, now what?".
Devs and BAs are good at designing the positive paths through your AUTs. But they often overlook the paths needed to support users who change their minds or make mistakes. Your AUT should allow users to correct their mistakes. If not, your team will get stuck writing DB scripts to correct production data for scenarios users could not fix on their own. It’s your job to show your team where these weaknesses are.

In one of James Whittaker’s recent webinars, he mentioned his disappointment when tester folks brag about bug quantities. It has been popular, lately, to not judge tester skills based on bug count. I disagree.

Last Monday night I had a rare sense of tester accomplishment. After putting in a 14-hour day, I had logged 32 (mostly showstopper) bugs; a personal record. I felt great! I couldn’t wait to hear the team’s reaction the next day. Am I allowed to feel awesome? Did I really accomplish anything? Damn right I did. I found many of those bugs by refining my techniques throughout the day, as I became familiar with that dev’s mistakes. I earned my pride and felt like I made a difference.

But is it fair to compare the logged bug list of two testers to determine which tester is better? I think it is...over time. Poor testers can hide on a team because there are so few metrics to determine their effectiveness. Bug counts are the simplest metric and I think it’s okay to use them sometimes.

I work with testers with varying skills and I see a direct correlation. When a tester completes an entire day of work without having logged a single bug, I see a problem. The fact is, one logged bug proves at least some testing took place. No logged bugs could mean the AUT is rock solid. But it could also mean the tester was more interested in their Facebook account that day.

“If it ain’t broke, you’re not trying hard enough.”

This silly little cliché actually has some truth. When I test something new, I start out gentle, running the happy tests and following the scripted paths…the scenarios everybody discussed. If the AUT holds up, I bump it up a notch. And so on, until the bugs start to shake loose. That’s when testing gets fun. Logging all the stupid brainless happy path bugs is just busy work to get to the fun stuff. (Sorry, a little off subject there)

Anyway, from one tester to another, don’t be afraid to celebrate your bug counts and flaunt them in front of your fellow testers…especially if it makes you feel good.

BTW - Does anyone else keep a record of most bugs logged in a day? Can you beat mine? Oh, and none of my 32 got rejected. :)

Most bug readers will agree, a simple “Expected Results” vs. “Actual Results” statement in the bug description will remove all ambiguity. But what about the bug title? Is a bug title supposed to include the expected results, actual results, or both? Every time I log a bug, I pause to consider the various title possibilities. I want a unique but concise title that will summarize the bug, making the description as unnecessary as possible. My head swims with choices…

  • If user does X after midnight, user gets Y.
  • If user does X after midnight, user should not get Y.
  • If user does X after midnight, user should get Z.
  • If user does X after midnight, user gets Y instead of Z.
  • User should get Z when they do X after midnight.
  • etc…

Here is what I think.

There is no “best” bug title. Any bug title is good enough as long as it:

  • is unique within my bug tracking system
  • describes the problem to some extent
  • includes key words (someone will someday search for this bug by words in the title)

So unless someone convinces me otherwise, with a comment on this post, I have decided to just use the first distinct bug title popping into my mind and stop worrying about bug title perfection.

Roshni Prince asks,

Can you suggest some tips to deal with testers that attempt to dig into code and fix their own bugs?

I like the question, so I’ll tell you what I think.

Let’s call this kind of tester SmartyPantsTester. If we had unlimited test time, SmartyPantsTester would rock! However, when we have to provide as much information as possible about the current AUT quality in a limited time, SmartyPantsTester gets in our way.

So how do we deal with SmartyPantsTester?

The Carrot Approach:

Convince SmartyPantsTester that the real hero is the tester who can tell us something meaningful about the AUT quality. Can anyone help us…please? We need someone smart enough to find the weak points in our AUT. We need someone familiar enough with the business to tell us whether FeatureA will solve the users’ problems. Is anyone creative enough to figure out how to test the feature the devs said was impossible to test? Is anyone methodical enough to determine the repro steps for this intermittent problem? We need someone brave enough to QA Certify this AUT for production. Get it? Appeal to the ego.

The Stick Approach:

Ask SmartyPantsTester to work extra hours until she can answer questions like the following:

  • Cool, you found an incorrect join statement, how does the rest of the AUT look?
  • Do the new features work properly?
  • How much of the AUT have you tested?
  • How many tests have you executed?
  • How many showstopper bugs have you logged?
  • In your opinion, is the AUT ready to ship?

And once again, I find myself with the same conclusion; there is simply too much to test in the available time. Testers reduce their chances of success by trying to do the devs’ job too.

Should you log a bug without knowing the expected results?

During a bug review meeting, someone chuckled at the “?” I had entered for the expected results. The bug (like many of my bugs) looked like this:

Repro Steps:
1. do something
2. do something else
3. do something else

Expected Results: ?
Actual Results: an unhandled error is thrown

Good testers determine scenarios that nobody thought of. That is one skill that makes us good testers. Some time ago, I didn’t let myself log bugs until I tracked down the proper oracles and determined the expected results; a practice that sounds good…until you try it. Unfortunately, the oracles are never quick to respond. Often they pose questions to the users, which open additional discussions, meetings, etc…until the tester forgets the details of their wonderful test.

So these days, if I encounter a bug and I don’t know the expected results, I log the bug immediately and let someone else worry about expected results…if they even care. It’s not because I’m too lazy to seek out my oracles. It’s because my time is better spent logging more bugs! When in doubt, remember, the tester’s primary job is to provide information about the AUT. It’s not the responsibility of the tester to determine expected results. If the tester identifies a scenario that will crash the AUT, they should log the bug now.

I’m surprised by how many people still send around BMP files of their entire desktop when they only want to show a small error message displayed in a little window. They are using the [Print Screen] key. Some, at least, know they can use [Alt]+[Print Screen] to capture only the active window.

Others prefer to capture only the area the audience needs to understand; they may use a screen capture app. I’ve been using Wisdom-soft’s free ScreenHunter. I’ve got it customized to capture the area within a rectangle I draw after pressing F6. After I draw the rectangle, its contents are captured as an auto-named GIF file and copied to the clipboard.
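
If you would rather script the rectangle capture than install a tool, here is a rough equivalent sketched in Python with Pillow (not the ScreenHunter setup described above). The coordinates and file naming are placeholders.

# Rough sketch: capture a rectangular region of the screen and auto-name the file.
# The bbox coordinates are placeholders; Pillow's ImageGrab works on Windows and macOS.
from datetime import datetime
from PIL import ImageGrab

# bbox is (left, top, right, bottom) in screen pixels -- the "rectangle" to capture.
region = ImageGrab.grab(bbox=(100, 200, 700, 500))

# Auto-name the capture with a timestamp, similar to the auto-named files mentioned above.
filename = datetime.now().strftime("capture_%Y%m%d_%H%M%S.png")
region.save(filename)
print(f"Saved {filename}")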

Screen-capture-type stuff I think about:

  • I try to avoid screen capturing error messages, opting instead to capture the error message in text format, from an error log. That way the dev can see the whole message and copy the text if they want to search the code or something. If the devs don't log the error, they're stuck with a screen capture.
  • If the screen capture needs other context (e.g., which programs are running in my tray, what time it is), I still capture the entire desktop.
  • Occasionally I mark up the screen capture (in Paint.NET) to circle something or add other annotations.
  • If capturing action is better, I capture video.
  • Sometimes I save time by using a screen capture to support repro steps. Example: Capture a filter page for a report and write a repro step that says "specify all filter criteria as depicted in the screen capture".

What (free) screen capture program do you use? What screen capture tips did I miss?

Chances are, your AUT has buttons on the UI somewhere. If so, those buttons trigger actions. A common oversight is failing to handle multiple actions triggered at nearly the same time.

Testers and devs are familiar with standard UI controls. We know buttons don’t generally require double-clicks. However, many users don’t have this instinct. These users double-click everything, even buttons. Become one of them!

My AUT had a bug that actually allowed users to rapid-fire-click a generate-invoice button and get duplicate invoices. Ah…yikes.

Here is the bread and butter test:
Get your mouse over a button that triggers an action. Get your finger ready. Click that button multiple times as quickly as you can. Now go look in the DB, error logs, or wherever you need to look to determine whether multiple actions were triggered inappropriately. No bug? Try it a few more times. Or try putting focus on the button and using a rapid-fire [Enter] key.
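
To automate the rapid-fire part and the follow-up check, something like this rough Python sketch can work: Selenium for the clicks, sqlite3 standing in for whatever DB your AUT uses. The URL, button ID, database file, and invoice table are all hypothetical.

# Rough sketch: rapid-fire click an action button, then check the DB for duplicates.
# URL, element ID, DB file, and table are hypothetical stand-ins for your AUT.
import sqlite3
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import WebDriverException

driver = webdriver.Chrome()
driver.get("https://aut.example/invoices/new")           # hypothetical page

button = driver.find_element(By.ID, "generateInvoice")   # hypothetical button
for _ in range(5):
    try:
        button.click()                                   # click as fast as the driver allows
    except WebDriverException:
        break                                            # page may have navigated away
driver.quit()

# Now look in the DB (or error logs) to see whether multiple actions were triggered.
conn = sqlite3.connect("aut.db")                         # hypothetical database
count = conn.execute(
    "SELECT COUNT(*) FROM invoices WHERE customer_id = ?", (42,)
).fetchone()[0]
conn.close()

assert count == 1, f"Expected 1 invoice, found {count} -- duplicate actions were triggered"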

Got any variations on this?

The more one learns about the inner workings of an AUT, the more one may get distracted from logging important bugs.

I saw this happen last week with a fellow teammate. She discovered a search criteria control that didn’t allow the user to search by certain values (e.g., they could search by A, B, D, but not C). Instead of logging a bug, she explained to me why it didn’t work. She was thrilled with her knowledge of countless dev discussions trying to fix the data structure beneath said search control. It was more exciting for her to explain the complex data than to worry about the little users who would not be able to search per their expectations. It was like she was suddenly on the side of the developers…and the users would never understand the complex challenges of making software work. “It’s not a bug, nobody can figure out how to deal with the complex data”.

Huh?

Dumb testers don’t have this problem. If it doesn’t follow the spec, they log it. And that’s the good thing about dumb testers. Be careful with your knowledge.

Alex and Joe,

I agree with both your comments to my last post.

Joe blames the dev…

“No bug report - [dev] corrected it on their own”

Obviously devs catch defects in their code (logic errors, missed specs, inefficiencies). If devs find and resolve defects in previously released code, should a bug be logged? On one hand, if the dev corrects it before anyone else catches it, why should the dev have to tell anyone? If a tree falls in the forest and nobody is around, does it make a sound? On the other hand, we can’t assume that an unnoticed defect in the field means said defect’s resolution will also go unnoticed. Thus, IMO it’s best for devs to let the team know about all changes. Typical ways to do this:

  • Dev logs bug

  • Dev tells tester to log bug

  • Dev logs a work item representing work they did (e.g., refactored code for FeatureA). The work item should be visible to the team. The dev gets credit for their wonderful coding, and the tester gets the tap on the shoulder to attempt some testing.

Alex blames the tester…

“That test should have failed a long time ago.”

In this case, the feature did not match its spec. You’re damn right, the tester should have caught this! The tester should thank their dev for doing the tester’s job. (Unfortunately, if the tester relied on the spec, we would be right back where we started.)

So a third member must also take the blame: the spec author (e.g., the Business Analyst), who did not capture the true business need of the feature. However, we can’t put all the blame on them. “The requirements sucked!” …this is a tired argument, one I decided never to waste my time on. Anyone who disagrees probably has not attempted to write requirements.

The next time you feel compelled to point the finger at a teammate, don’t. Responsibilities blur across the software development trades and the whole team can share success or failure.

(sorry, this post is a little dull. I’ll try for something juicier next week)

My team just released a new build to prod that included a freebie from a dev. I say "freebie" because the dev noticed code not complying with a requirement and corrected it on their own. No bug was logged.

It turns out, our users preferred the feature prior to the dev's fix. We are now in the process of rushing through a “showstopper” production patch to insert the previous code back into prod for business critical reasons.

What went wrong?

Testers are weird. When they find a bug in their AUT they feel good. But when it comes to integration testing, testers feel a sense of defeat when a bug is found to be in the AUT they are responsible for, rather than the AUT the other tester is responsible for. Can you relate? Stay with me.

When discussing integration testing observations, devs, testers, and business people say things like…

“we do this”
“then we do that”

…to describe the application they associate themselves with. And if the discussion includes an external application, folks start saying things like:

“when they send us this, we send them that”
“they create the XML file and we import it”

I hate it when people use subjective personal pronouns instead of proper names. For one thing, a sentence like “Application_A sends the file to Application_B” is easier to understand than a sentence like “They send us the file”. After all, they are called subjective personal pronouns. But the other reason I hate this way of communicating is that it reinforces an unhealthy sense of pride. People connect themselves too intimately with the application they refer to as “we”. People are biased toward the group they belong to. They start to build a bubble around their app: “It’s not our bug, it’s their bug”.

My little language tip may seem trivial, but think about it the next time you discuss system integration. If you resist the urge to use subjective personal pronouns, I think better communication will occur and your ego will be less likely to distract from effective teamwork.

“Test early” has been banged into my head so much, it has ruined me.

I just started testing a new app with business language/processes that I am unfamiliar with. While waiting on a UI, I began testing the services. I also selected an automation framework and tried to write some functions to leverage later via an automation library. At first, I was proud of myself for testing early. I did not have to make decisions about what to test because there was so little to test, I could just test it all! How nice. Things would soon change however.

As I sat through the domain walkthroughs, I realized I was learning very little about the complex functionality that was coming. I didn’t know which questions to ask because each business process was an enigma. The more I hid my confusion, the less valuable I felt, and the less I knew which tests to execute.

Finally, I broke out of my bubble and set up a meeting with the primary business oracle. Knowing close to nothing about the business side of the app, I asked the oracle one simple question:

“Can you walk me through the most typical workflow?”

She did. And even if only 10% of what she explained made sense, it became my knowledge base. Later, I could ask a question related to the 10% I understood. If I understood the answer, now I understood 11% of the app. And so on. Knowledge leads to confidence. Confidence leads to testing the right stuff.

So don’t get wrapped up in all the fancy "test early" stuff that makes for impressive hallway discussions. Start with the simple, low-tech approach of learning what your AUT is supposed to do.

Being technical <> being valuable.

Chances are, your AUT has some items that can be deactivated somewhere; probably in an admin screen. This is a great place to catch some serious bugs before they go to prod. Here are a few tests you should execute.

Start with the easy ones:

1. Make ItemA “in use” (something in your AUT depends on itemA).
2. Attempt to deactivate ItemA.
Expected Results: ItemA cannot be deactivated. User communication indicates ItemA is in use.

1. Deactivate an unused item (call it ItemA).
2. Attempt to use ItemA somewhere (e.g., does ItemA display in a dropdown menu?).
Expected Results: ItemA cannot be used because it is unavailable.

Then try something more aggressive:

1. UserA opens a UI control that displays ItemA as a potential selection.
2. UserB deactivates ItemA (e.g., from an admin screen).
3. UserA selects ItemA from the UI control.
Expected Results: ItemA cannot be used by UserA because it is unavailable. Communication to UserA explains ItemA is inactive.
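
If your AUT exposes these actions through a service layer, the scenarios above translate into automated checks fairly directly. Here is a rough Python sketch against a hypothetical HTTP API; the endpoints, item IDs, status codes, and response shapes are all made up for illustration.

# Rough sketch of the three deactivation scenarios as pytest-style checks.
# Everything about the API (endpoints, IDs, responses) is hypothetical.
import requests

BASE = "https://aut.example/api"   # hypothetical service

def test_cannot_deactivate_item_in_use():
    # ItemA (id=1) is referenced by something in the AUT, so deactivation should be refused.
    resp = requests.post(f"{BASE}/items/1/deactivate")
    assert resp.status_code == 409
    assert "in use" in resp.json()["message"].lower()

def test_deactivated_item_is_not_offered():
    # Deactivate an unused item, then confirm it no longer appears as a selectable option.
    requests.post(f"{BASE}/items/2/deactivate").raise_for_status()
    options = requests.get(f"{BASE}/orders/options").json()["items"]
    assert 2 not in [o["id"] for o in options]

def test_selecting_item_deactivated_mid_session():
    # UserA fetched the options while ItemA (id=3) was still active...
    options = requests.get(f"{BASE}/orders/options").json()["items"]
    assert 3 in [o["id"] for o in options]

    # ...UserB deactivates it from the admin screen...
    requests.post(f"{BASE}/items/3/deactivate").raise_for_status()

    # ...and UserA's stale selection should be rejected with a clear explanation.
    resp = requests.post(f"{BASE}/orders", json={"item_id": 3})
    assert resp.status_code == 409
    assert "inactive" in resp.json()["message"].lower()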

Got any good variations?

Just before the holidays, we went live with another relatively huge chunk of users who require slightly different features than our previous users. The bug DB is quickly filling up with bugs discovered in production. These bugs are logged by the business/support arm of our team because the testers can’t keep up. Many of the bugs don’t have repro steps and appear to be related to multiple users, performance, deadlocking, or misunderstood features. Other bugs are straightforward: oversights uncovered after users take the app through new paths for the first time.

My team is struggling to patch critical production issues to keep the users working through their deadlines. I want to investigate every new bug to determine the repro steps and prepare for verifying their fixes. Instead, I’m jumping from one patch to the next, attempting to certify the patches for production. Keeping up is difficult. New emails arrive every few seconds.

This is an awkward phase, but the team is reacting well, maintaining a good reputation for quick fixes. Nevertheless, I’m stressed.

Am I doing something wrong?
Do I suck for letting these bugs get to prod in the first place?
Should I be working late every night to clean up the bug DB?
Am I the bottleneck, too slow at getting patches to users?
Should I certify patches that are only partially fixed?
Should I be writing new tests to verify these bugs and prevent them from returning?
Should I be out in the trenches, watching the user behavior?

Can you relate?


