(These are taken from my real experiences over the past week or so.)

You know you’re in trouble when…

  • Your dev says “I copied these statements from another developer. They’re too complex to explain.”
  • As you begin demoing strange AUT behavior to your dev, your dev drops a sharp F-bomb followed by a sigh.
  • You ask your dev what needs to be regression tested on account of their bug fix. They say “everything”.
  • After a migration, you see an email from dev to DBA. The DBA responds, “What are these delta scripts you speak of?”
  • Your devs drop a prod patch at 5PM on a Friday as they all head home.
  • Dev says “Please try to repro the bug again, I didn’t do anything to fix it…I’m just hoping it got indirectly fixed.”
  • Dev says “I marked the bug fixed but I have no way to test it.”
  • After a week of chasing and logging nasty intermittent bugs, you start seeing emails from your devs to your config managers saying stuff like “Why are these QA service endpoints still pointing to the old QA server?”
  • Your Config Manager says, “Did you sanity test that patch I rolled out to prod when you were at lunch?”
  • Your dev says “we don’t really care if the code we write is testable or not”.
  • Your bug gets rejected with the comment “It works on my box”.
What's on your list?

Test Manager: Remember, we're Software Testers, not some sorry-ass QA Analysts. We're elite. Let's act like it out there. Hoo-ah?

Testers: Hoo-ah!

You arm yourself with TestA and prepare to battle your AUT.

Take a deep breath, head into the AUT, and begin executing TestA. Prior to observing your expected results, you determine TestA is blocked from further execution (call it BlockageA). You make a note to investigate BlockageA later. You modify TestA slightly to work around BlockageA. TestA encounters BlockageB. Now you decide to deal with BlockageB because you are out of workarounds. Is BlockageB a bug? You can’t find the specs related to BlockageB. After an hour, your BA finds the specs and you determine BlockageB is a bug (BugB). You check the bug DB to see if this bug has already been logged. You search the bug DB and find BugC, which is eerily similar but has different repro steps than your BugB. Not wanting to log dupe bugs, you perform tests related to BugB and BugC to determine if they are the same. Finally, you decide to log your new bug, BugB. One week later BugB gets rejected because it was “by design”; the BA forgot to update the feature spec but had verbally discussed the change with dev. Meanwhile, you log a bug for BlockageA and notice four other potential problems while doing so. These four potential problems are lost because you forgot to write a follow-up reminder note to yourself. Weeks later BlockageA is fixed. You somehow stayed organized enough to know TestA can finally be executed. You execute TestA and it fails. You log BugD. BugD is rejected because TestA’s feature got moved to a future build but dev forgot to tell you. Months later, TestA is up for execution again. TestA fails and you log BugE. The dev can’t repro BugE because their dev environment is inadequate for testing. The dev asks you to repro BugE. BugE does not repro because you missed an important repro step. Now you are back at the beginning.

You’ve just experienced the "fog of test".

The "fog of test" is a term used to describe the level of ambiguity in situational awareness experienced by participants in testing operations. The term seeks to capture the uncertainty regarding own capability, AUT capability and stakeholder intent during an engagement or test cycle. (A little twist on the “Fog of war” Wikipedia entry)

Many (if not most) test teams claim to perform test case reviews. The value seems obvious, right? Make sure the tester does not miss anything important. I think this is the conventional wisdom. On my team, the review is performed by a Stakeholder, BA, or Dev.

Valuable? Sure. But how valuable compared to testing itself? Here are the problems I have with Test Case Reviews:

  • In order to have a test case review in the first place, one must have test cases. Sometimes I don’t have test cases…
  • In order for a non-tester to review my test cases, the test cases must contain extra detail meant to make the test meaningful to non-testers. IMO, detailed test cases are a huge waste of time, and often of little value or even misleading in the end.
  • In my experience, the tests suggested by non-testers are often poorly designed or already covered by existing tests. This becomes incredibly awkward. If I argue or refuse to add said tests, I look bad. Thus, I often just go through the motions and pretend I executed the poorly conceived tests, which is bad too. Developers are the exception here; in most cases, they get it.
  • Forcing me to formally review my test cases with others is demeaning. Aren’t I getting paid to know how to test something? When I execute or plan my tests, I question the oracles on my own. For the most part, I’m smart enough to know when I don’t understand how to test something. In those cases, I ask. Isn’t that what I’m being paid for?
  • Stakeholders, BAs, or Devs hate reading test cases. Zzzzzzzzzz. And I hate asking them to take time out of their busy days to read mine.
  • Test Case Reviews subtract from my available test time. If you’ve been reading my blog, you know my strong feelings on this. There are countless activities expected of testers that do not involve operating the product. This, I believe, is partly because testing effectiveness is so difficult to quantify. People would rather track something simple, like whether the test case review was completed: yes or no.
I’m interested in knowing how many of you (testers) actually perform Test Case Reviews on a regular basis, and how you conduct the review itself.

Think of a bug…any bug. Call it BugA. Now try to think of other bugs that could be caused by BugA. Those other bugs are what I call “Follow-On Bugs”. Now forget about those other bugs. Instead, go find BugB.

I first heard Michael Hunter (AKA “Micahel”, “The Braidy Tester”) use the similar term, “Follow-on Failures”, in a blog post. Ever since, I’ve used the term “Follow-On Bugs”, though I never hear other testers discuss these. If I’m missing a better term for these, let me know. “Down-stream bugs” is not a bad term either.

Whatever we call these, I firmly believe a key to knowing which tests to execute in the current build is to be aware of follow-on bugs. Don’t log them. The more knowledgeable you become about your AUT, the better you will identify follow-on bugs. If you’re not sure, ask your devs.
Good testers have more tests than time to execute them, and chasing and logging follow-on bugs can waste that time. I share more detail about this in my testing new features faster post.

I’ve seen testers get into a zone where they keep logging follow-on bugs into the bug tracking system. This is fine if there are no other important tests left. However, I’ll bet there are. Bugs that will indirectly get fixed by other bugs mostly just create administrative work, which subtracts from our available time to code and test.

The Fantasy of Attending All Design and Feature Review Meetings

Most testers find themselves outnumbered by devs. In my case it’s about 10 to 1. (The preferred ratio is a tired discussion I’d like to avoid in this post.)

Instead, I would like to gripe about a problem I’ve noticed as I accumulate more projects to test. Assuming my ten devs are spread between five projects (or app modules), each dev must attend only the Feature Review/Design meetings for the project they are responsible for. However, the tester must attend all five. Do you see a problem here?

Let’s do the math for a 40-hour work week.

If each project’s Feature Review/Design meetings consume eight hours per week, each dev will have 32 hours left to write code. The tester, attending all five projects’ meetings (5 × 8 = 40 hours), is left with ZERO hours to test code!
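If you want to play with the numbers, here is a back-of-the-envelope sketch in Python. The figures (40-hour week, five projects, eight meeting hours per project per week, each dev on one project vs. one tester covering all five) are the assumed ones from the scenario above, not measurements:

    # Back-of-the-envelope meeting math, using the assumed numbers from the scenario above.
    WEEK_HOURS = 40
    PROJECTS = 5
    MEETING_HOURS_PER_PROJECT = 8

    dev_meetings = 1 * MEETING_HOURS_PER_PROJECT             # a dev sits in one project's meetings
    tester_meetings = PROJECTS * MEETING_HOURS_PER_PROJECT   # the tester sits in all of them

    print("Dev hours left to code:", WEEK_HOURS - dev_meetings)          # 32
    print("Tester hours left to test:", WEEK_HOURS - tester_meetings)    # 0

Change the number of projects or meeting hours and the tester's testing time evaporates long before the devs' coding time does.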

The above scenario is not that much of an exaggeration for my team. The tester has no choice but to skip some of these meetings just to squeeze in a little testing. The tester is expected to "stay in the know" about all projects (and how those projects integrate with each other), while the dev can often focus on a single project.

I think the above problem is an oversight by many managers. I doubt it gets noticed because the testers' time is being nickel-and-dimed away. Yet most testers and managers will tell you, “It’s a no-brainer! The tester should attend all design reviews and feature walkthroughs…testing should start as early as possible”. I agree. But it is an irrational expectation if you staff your team like this.

In a future post, I'll share my techniques for being a successful tester in the above environment. Feel free to share yours.


