Testers are weird. When they find a bug in their AUT, they feel good. But when it comes to integration testing, a tester feels a sense of defeat if the bug turns out to be in the AUT they are responsible for, rather than in the AUT the other tester is responsible for. Can you relate? Stay with me.

When discussing integration testing observations, devs, testers, and business people say things like…

“we do this”
“then we do that”

…to describe the application they associate themselves with. And if the discussion includes an external application, folks start saying things like:

“when they send us this, we send them that”
“they create the XML file and we import it”

I hate it when people use subjective personal pronouns instead of proper names. For one thing, a sentence like “Application_A sends the file to Application_B” is easier to understand than “They send us the file”. After all, they are called subjective personal pronouns. But the other reason I hate this way of communicating is that it reinforces an unhealthy sense of pride. People connect themselves too intimately with the application they refer to as “we”. People are biased toward the group they belong to. They start to build a bubble around their app: “It’s not our bug, it’s their bug”.

My little language tip may seem trivial, but think about it the next time you discuss system integration. If you resist the urge to use subjective personal pronouns, I think communication will improve and your ego will be less likely to distract from effective teamwork.

“Test early” has been banged into my head so much, it has ruined me.

I just started testing a new app whose business language and processes are unfamiliar to me. While waiting on a UI, I began testing the services. I also selected an automation framework and tried to write some functions to leverage later via an automation library. At first, I was proud of myself for testing early. I did not have to make decisions about what to test because there was so little to test that I could just test it all! How nice. Things would soon change, however.
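To make “testing the services” concrete, here is a minimal sketch of the kind of early, structure-only check I mean. The base URL, endpoint, and field names are hypothetical stand-ins, not a real AUT’s API:

```python
# A minimal sketch of an early service check, written before any UI exists.
# The base URL, endpoint, and field names are hypothetical stand-ins.
import requests

BASE_URL = "http://test-env.example.com/api"  # hypothetical test environment

def get_item(item_id):
    """Fetch one item from the service and return the parsed JSON body."""
    response = requests.get(f"{BASE_URL}/items/{item_id}", timeout=10)
    response.raise_for_status()  # fail fast on any 4xx/5xx
    return response.json()

def test_get_item_returns_expected_shape():
    item = get_item(1)
    # With almost no domain knowledge yet, structural sanity is all I can assert.
    assert "id" in item
    assert "name" in item
```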

As I sat through the domain walkthroughs, I realized I was learning very little about the complex functionality that was coming. I didn’t know which questions to ask because each business process was an enigma. The more I hid my confusion, the less valuable I felt, and the less I knew which tests to execute.

Finally, I broke out of my bubble and set up a meeting with the primary business oracle. Knowing close to nothing about the business side of the app, I asked the oracle one simple question:

“Can you walk me through the most typical workflow?”

She did. And even if only 10% of what she explained made sense, that 10% became my knowledge base. Later, I could ask a question related to the 10% I understood. If I understood the answer, I now understood 11% of the app. And so on. Knowledge leads to confidence. Confidence leads to testing the right stuff.

So don’t get wrapped up in all the fancy “test early” stuff that makes for impressive hallway discussions. Start with the simple, low-tech approach of learning what your AUT is supposed to do.

Being technical <> being valuable.

Chances are, your AUT has some items that can be deactivated somewhere, probably in an admin screen. This is a great place to catch some serious bugs before they go to prod. Here are a few tests you should execute.

Start with the easy ones (both are sketched in code below):

1. Make ItemA “in use” (something in your AUT depends on ItemA).
2. Attempt to deactivate ItemA.
Expected Results: ItemA cannot be deactivated. User communication indicates ItemA is in use.

1. Deactivate an unused item (call it ItemA).
2. Attempt to use ItemA somewhere (e.g., does ItemA display in a dropdown menu?).
Expected Results: ItemA cannot be used because it is unavailable.
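If you want these two checks in your automation library, here is a rough sketch of how they might look. The admin_api fixture and all of its methods are hypothetical stand-ins; adapt them to however your AUT actually exposes item administration:

```python
# Rough sketch of the two easy checks as pytest tests. admin_api and its
# methods are hypothetical; none of this comes from a real library.
import pytest

class ItemInUseError(Exception):
    """Stand-in for whatever error the AUT raises when an item is in use."""

def test_in_use_item_cannot_be_deactivated(admin_api):
    item = admin_api.create_item("ItemA")
    admin_api.attach_dependency(item)              # make ItemA "in use"
    with pytest.raises(ItemInUseError) as excinfo:
        admin_api.deactivate(item)
    assert "in use" in str(excinfo.value)          # user is told why

def test_deactivated_item_is_not_selectable(admin_api):
    item = admin_api.create_item("ItemA")          # an unused item this time
    admin_api.deactivate(item)
    options = admin_api.dropdown_options()         # e.g., the dropdown contents
    assert item.name not in options
```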

Then try something more aggressive (also sketched below):

1. UserA opens a UI control that displays ItemA as a potential selection.
2. UserB deactivates ItemA (e.g., from an admin screen).
3. UserA selects ItemA from the UI control.
Expected Results: ItemA cannot be used by UserA because it is unavailable. Communication to UserA explains ItemA is inactive.
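The aggressive test translates to code the same way; the trick is sequencing the two sessions so the deactivation lands between UserA opening the control and making the selection. Again, the user_a/user_b fixtures and their methods are made up for illustration:

```python
# Sketch of the aggressive check: UserA holds a stale view of ItemA while
# UserB deactivates it. user_a and user_b are hypothetical session fixtures.
def test_selecting_item_deactivated_mid_session(user_a, user_b):
    item = user_b.admin.create_item("ItemA")

    options = user_a.open_selection_control()    # UserA sees ItemA listed
    assert "ItemA" in options

    user_b.admin.deactivate(item)                # UserB yanks it mid-session

    result = user_a.select("ItemA")              # UserA's stale selection
    assert not result.succeeded                  # selection must be rejected
    assert "inactive" in result.message          # and UserA is told why
```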

Got any good variations?

Just before the holidays, we went live with another relatively huge chunk of users who require slightly different features than our previous users. The bug DB is quickly filling up with bugs discovered in production. These bugs are logged by the business/support arm of our team because testers can’t keep up. Many of the bugs don’t have repro steps and appear to be related to multiple users, performance, deadlocking, or misunderstood features. Other bugs are straightforward: oversights uncovered after users take the app through new paths for the first time.

My team is struggling to patch critical production issues to keep the users working through their deadlines. I want to investigate every new bug to determine the repro steps and prepare for verifying their fixes. Instead, I’m jumping from one patch to the next, attempting to certify the patches for production. Keeping up is difficult. New emails arrive every few seconds.

This is an awkward phase, but the team is reacting well, maintaining a good reputation for quick fixes. Nevertheless, I’m stressed.

Am I doing something wrong?
Do I suck for letting these bugs get to prod in the first place?
Should I be working late every night to clean up the bug DB?
Am I the bottleneck, too slow at getting patches to users?
Should I certify patches that are only partially fixed?
Should I be writing new tests to verify these bugs and prevent them from returning?
Should I be out in the trenches, watching the user behavior?

Can you relate?


