I can’t help but compare my job to that of my fellow developers.

At first glance, it appears my devs have the more challenging job. They have to string together code that results in a working application…usually based on ambiguous specs full of gaps.

But at second glance, I think the testers have it harder. Developers have a clear target to aim for. It’s called “Code Complete”. After which, their target may become “Fix The Bugs”. Each is a relatively objective target compared to a tester’s targets, like “Write the Test Cases” or “Find the Bugs” or “Ensure the Quality”.

Arguably, a tester’s job is never complete because there is an infinite number of tests to run. A dev can sit back and admire a stopping point where their code does what the feature is supposed to do. The tester cannot. The tester is expected to go beyond verifying that the code does what the feature is supposed to do; the tester must determine the code’s behavior across every possible path through the application, in every configuration. If a tester attempted thorough test automation, the automated test library would require more code than the AUT itself, and even then there would still be more tests left to automate.
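To make that concrete, here’s a back-of-the-envelope sketch (in Python; the browsers, locales, roles, and flags are made-up values, not from any real project) of how quickly configurations multiply before a single path through the application has even been considered:

```python
from itertools import product

# Back-of-the-envelope sketch. The configuration axes and their values
# below are invented for illustration; swap in your own.
browsers = ["IE", "Firefox", "Safari", "Opera"]
locales = ["en-US", "de-DE", "ja-JP"]
user_roles = ["guest", "member", "admin"]
feature_flags = [(), ("new_cart",), ("new_cart", "fast_checkout")]

configs = list(product(browsers, locales, user_roles, feature_flags))
print(len(configs))  # 4 * 3 * 3 * 3 = 108 configurations

# And every one of those 108 configurations still has to be crossed
# with every path through the application, which is where "infinite"
# starts to feel literal.
```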

It may be worth noting that I’ve always wanted to be a developer. Why aren't I? I don’t know, I guess it seems too hard…

What do you think? Who has the more challenging job?

I'm in love with the word “appears” when describing software. The word “appears” allows us to describe what we observe without making untrue statements or losing our point. And I think it leads to better communication.

For example, let’s say the AUT is supposed to add an item to my shopping cart when I click the “Add Item” button. Upon black box testing said feature, it looks like items are not added to the shopping cart. There are several possibilities here.

• The screen is not refreshing.
• The UI is not correctly displaying the items in my cart (i.e., the database indicates the added items are in my cart).
• The item is added to the wrong database location.
• The added item is actually displayed on the screen, just not where I expect it to be.
• A security problem is preventing me from seeing the contents of my cart.
• Etc.

The possibilities are endless. But so are the tests I want to execute. So like I said in my previous post, after determining how much investigation I can afford, I need to log a bug and move on. If I describe the actual result of my test as “the item was not added to my cart”, one could argue, “yes it was, I see it in the cart when I refresh...or look in the DB, etc.”. The clarification is helpful but the fact is, a bug still exists.

Here is where my handy little word becomes useful. If I instead describe the actual result as “the item does not appear to be added to my cart”, it becomes closer to an indisputable fact. Once you begin scrutinizing your observation descriptions, you may find yourself (as I did) making statements that are later proven untrue, and these can distract from the message.
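The same careful wording carries over to automated checks. Here’s a minimal sketch (Python; CartPage, add_item, and visible_items are hypothetical names invented for this post, not any real framework) where the failure message claims only what the screen shows:

```python
# Minimal sketch, not from any particular framework. CartPage here is a
# fake, in-memory stand-in; in a real suite it would be a page object
# that actually clicks the "Add Item" button and reads the cart screen.

class CartPage:
    def __init__(self) -> None:
        self._shown: list[str] = []    # what the cart screen displays

    def add_item(self, name: str) -> None:
        self._shown.append(name)       # real version: click the button

    def visible_items(self) -> list[str]:
        return list(self._shown)       # real version: read the UI


def test_added_item_appears_in_cart() -> None:
    cart = CartPage()
    cart.add_item("widget")

    # A black box check only knows what the screen shows, not what the
    # database holds, so the failure message claims no more than that.
    assert "widget" in cart.visible_items(), (
        'Item "widget" does not appear in the cart after clicking "Add Item"'
    )


if __name__ == "__main__":
    test_added_item_appears_in_cart()
    print("cart check passed")
```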

Think about this a little before you decide this post appears to suck.

Okay, we (the testers) found a bug and figured out the repro steps. Can we log it now, or should we investigate further? Maybe there is an error logged in the client’s log file. Maybe we should also check the server error logs. And wouldn’t the bug description be better if we actually figured out which services were called just before said error? We could even grab a developer and ask them to run our test in debug mode. Or better yet, we could look at the code ourselves, because we’re smart tester dudes, aren’t we?

If you thought I was going to suggest testers do all this extra stuff, you’re wrong. I've read that we should. But I disagree. We’re the testers. We test stuff and tell people when it doesn’t work. We don’t have to figure out why. If we’ve got the repro steps, it may be okay to stop the investigation right now. That’s the whole point of the repro steps! So the dev can repro the bug and figure it out.

Look, I get it...we’re cool if we can dig deep into the inner workings of our AUT, and maybe we’re providing some value-added service to our devs. The problem is, we’re not the devs. We didn’t write the code. Thus, we are not as efficient as the devs when it comes to deep investigation. And time spent investigating is time NOT spent on other testing. For me, everything must be weighed against the huge number of tests I still want to run.

So unless you’ve run out of good tests, don’t spend your time doing what someone else can probably do better.

What do you think?


