Sung to the tune of the famous Christmas song, "Let it Snow, Let it Snow, Let it Snow"

The performance test stats are frightful,
Usability is not delightful,
I expect the bug count to grow,
App is slow! App is slow! App is slow!

It doesn't show signs of stopping,
And this build is really flopping,
My confidence is way down low,
App is slow! App is slow! App is slow!

When the hourglass goes away,
And the screen finally starts to repaint,
There's a timeout error trying to say,
Ready for prod, this thing ain't!

Available memory is slowly dying,
But my devs are still denying,
They say it's my box, but I know,
App is slow! App is slow! App is slow!

Last year's Twas the Night Before Prod Release was much better.


Last week a group of us testers, programmers, and business analysts volunteered to sort, inspect, and pack donated food, beverages, and health products at the Atlanta Community Food Bank. Most cities have similar gigs, and this is a fun way of doing volunteer work that relates to the old-school usage of the term QA. If you’re a tester, volunteer to be an inspector.
As an inspector, you’ll go through all the donated items and decide which get packaged up to become meals, which get thrown in the trash, and which need special treatment. This is QA at its most primitive, and I found it incredibly fun because, unlike software testing, I actually got to make my own decisions about what would go to production and what would not.
  • Black Box Testing – Regular food (e.g., canned soup, cereal, pasta) would get thrown away if its expiration date was past the cutoff.
  • White Box Testing – Sometimes food looks good on the outside, but if you screw off the peanut butter jar lid and look inside, you may find someone has broken the inner seal and helped themselves to a few spoonfuls.
  • Cosmetic Testing – Cans with no labels or bags of food without an ingredient listing were rejected.
  • Bugs – Literally. If the food was exposed, reject it. Finding damaged secondary packaging (e.g., a cardboard cereal box ripped but the plastic bag intact) was like finding a software bug. Let the dev tape it back together, then you can accept it.
  • Usability Testing – Some cans are smashed; can they easily be opened? Is it so smashed the user can injure themselves on sharp corners?
  • Security Testing – Was the product recalled? Example: Campbell’s Spaghettios were recalled due to uncooked meatballs. Fail!
Things that helped us succeed:
  • Motivation – We were told the average amount of work volunteers perform. We consider ourselves above average, so we set a goal to beat the norm and worked hard to prove we were the best.
  • Roles – We were assigned to roles as either inspector, sorter, or packer. When any role was caught up, we switched roles to keep busy and keep things moving.
  • Collaboration – “I forgot, what do we do with baby food?”, “What is the rule for powdered milk?”, “Can you hold this shut while I tape it up?”. We worked side-by-side in the same room and answered each other’s questions.
  • Breaks – They forced us to take breaks. Without the breaks, fatigue may have set in. The breaks allowed us to reflect and exchange stories about mistakes we had made (spilling salt all over the place) or gross encounters (sticky stuff leaking out of jar). This also motivated us to work harder after the break, attempting to encounter good story material. They gave us free beverages and chocolate during the break (donated chocolate can’t get packaged because it melts), which means we didn’t have to be distracted with food/beverage as we worked.
  • Oracles – When we couldn’t help ourselves, we knew which volunteers were the oracles. They knew all the answers based on prior experience.
  • Music – Who doesn’t love listening to Michael Jackson’s Thriller?
After a mere two hours of work, we beat the average by packaging 8,109 lbs of food, which translates into 5,460 meals.

...if only we could weed through software bits this quickly.

Richard Siemens responded to my Testers, Stay Frosty. Fight Tester Fatigue post and asked if I have any ideas on how to combat tester fatigue. I’m glad you asked, Richard, because my previous post was too lame to suggest any.

Here is what I think:

Stuff the tester can do to combat test fatigue:
  • Treat yourself to static goals on occasion. For example, if you say “my goal today is to verify all the fixed bugs currently on this list”, it has a firm stopping point; the current list does not include the bugs that will be on that list in an hour. However, if you say “my goal today is to verify all the fixed bugs”, I’ll bet the devs are cranking out the fixes just as quickly as you verify them and the list just won’t clear. How unsatisfying. It’s like walking up the down escalator.
  • Use an approach similar to Session Based Test Management where you follow a time-blocked mission. As you encounter bugs that feel annoying because they take you off track, just ignore them! That’s right…at least ignore them for now. Make a quick note, then go back and investigate said bugs when you and your team decide it is time.
  • Show your manager a list of all the stuff you need to do and ask them to prioritize it. You may be surprised at some of the stuff with a low priority. In fact, if you make that list long enough, you’ll probably get some stuff knocked off it.
  • Stop testing and do something else. Take time each week to improve your testing skills. And don't tell me you're too busy. I don't buy it. It’s the testers who get complacent and test the same way day after day that make us look bad. Take a few hours each week to read testing blogs, start your own blog, write a program, improve your typing skills, read a chapter from a testing book, or just hang out in the break room chatting it up with your support team. Any decent manager will appreciate a happy employee taking a break to become a better tester in some way. Because any decent manager knows a better tester gets more work done when they do test.

Stuff the test manager can do to combat test fatigue:
  • Be more careful about what you reward. Instead of placing so much weight on completing test work on time, go out of your way to reward discovered bugs, even right before deadlines: “Nice bug catch, that was close!”
  • Assign non-testing tasks to testers. For testers, it’s refreshing to work on non-testing tasks every so often because they can control them. These have hard-and-fast completion criteria, and you only have to worry about doing them one way. Examples include:

    Organizing regression tests
    Building a test metrics report
    Giving tester presentations
    Organizing a team outing
    Facilitating a retrospective
    Updating test environments
    Engineering process improvements
  • Let the whole development team know how much work the testers are doing. Let the testers know too. An easy way to do this is nightly team test status emails.
  • Encourage a rest period during test cycles.

    Every morning, before work, I put in 40 minutes on my YMCA’s StairMaster, whilst reading my Economist. I use the “Interval” workout setting, which requires two input parameters: Workout Level & Rest Level. Over the last two years, I’ve steadily been increasing the Workout Level, while keeping the Rest Level about the same. When the machine kicks in to the Workout Level, I give it everything I’ve got. Three minutes later, when I’m about to give up, it rewards me with the Rest Level and man does it feel good. Then the cycle repeats.

    I think testers should follow a model similar to my StairMaster’s Interval workout. My teams have 4-week iterations. I stay out of everybody’s hair during week 1 and try to set the mood that it's a rest week. We still attempt to test as early as possible; we just don’t crank up the intensity until about weeks 2 and 3. We finish all new feature testing by the end of week 3, then begin to come back down during week 4 by concentrating on regression testing.

    Just like my workouts, as we grow our tester skills, product knowledge, and team cadence, we should be able to increase our test level with each iteration, while allowing ourselves to maintain the same rest level.

Based on my own test experiences and those of my testers, I've noticed the following.

At the start of a test cycle, if your test fails, your likely reaction is:
“Yeah, baby, it failed! Yesssss! I rock!”

At the end of a test cycle, if your test fails, your likely reaction approaches:
“Damn! I can’t believe it failed. We’re never going to get this done in time. ...On second thought, maybe it didn’t actually fail. Maybe I did something wrong, I heard the DBA was doing some kind of maintenance, maybe that was the problem. Besides, the production servers are much faster, I’m sure they would work better. Perhaps if I reboot and try again, it will work. Then I won't have to tell anybody.”

Can you relate on some level? Finding bugs in fresh software gives us a rush. We joke about it; “Let me sink my teeth into your code!”. But after a while, we get tired of finding bugs. We just want stuff to work so we can move on. As we approach the ship date, we start to feel frustrated when stuff crashes. We’re actually…wait for it…disappointed to find another bug. We wish the test had passed.

Testers, be careful. Don’t ever let yourself grow tired of finding problems. When that happens, your ability to investigate diminishes and your team value drops. I call this “Tester Fatigue”. Being aware of tester fatigue is probably all you need to know to avoid it and stay frosty.

It’s just not fair.

The better we test, the more we appear to miss our deadlines.

Skilled testers provide more feedback than unskilled testers. The skilled testers find more bugs and raise more questions. The more bugs found, the more testing is required to verify the bugs. The more bugs that are fixed, the more testing is required to regression test.

The unskilled tester scratches the surface. If no bugs or questions are discovered and little feedback (e.g., test results) is produced, the unskilled tester calls it a day at 5 PM every day and naively goes home to watch TV. It’s possible to get away with this, especially when the missed defects are never discovered in production, and those that are may be written off as too difficult to catch in test. Poor performers can hide well in the test world. You may know some.

What can we do about this frustrating injustice?

Reduce feature ownership.

The above paradigm may be partly the result of feature ownership. If the testers are each assigned certain features to test and therefore only responsible for seeing those features through to production, we see the unskilled tester rewarded with easily meeting deadlines, and the skilled tester pulling her hair out, trying to keep up.

Test managers have some control over this. They can ask the unskilled tester to assist the skilled tester with less cognitive tasks, such as bug verification or regression testing. This helps accentuate the team mentality, that nobody goes home until all features are fully tested. Most Agile teams are already doing this but I suspect the unskilled testers still manage to provide less value on Stories they pull from the task board.


Deadlines are not the main goal.

In most trades, we reward people for getting work done on time. Perhaps in testing, we should stop doing this. It’s almost as if we should do the opposite; reward testers for managing to keep the team busy fixing problems and thus not meeting the deadline.

I’m exaggerating, of course, but when we tell testers to “get this tested well and on time”, there is a conflict of interest. To make matters worse, it’s easier to look at a clock and say “great job, tester, you completed the testing on time” than it is to look at a piece of software and say “great job, tester, you tested this well”.

Let’s not forget to celebrate the efforts of those testers who always seem to be swamped and having a tough time meeting team test deadlines. They need a break sometimes too.

After attempting to use Microsoft Test Manager 2010 for an iteration, we quickly decided not to use it. Here is why.

About 3 years ago we finally managed to stop using HP Quality Center (AKA Test Director). We started managing our test cases and bugs as work items in Microsoft Team Foundation Server (TFS). The brilliance behind this transition was that it gave us the ability to attach our tests and bugs to the iteration’s Features/Stories and Tasks. This meant that as Features move in and out of iterations, the related tests and bugs follow; a beautiful thing! Additional benefits: (1) programmers and BAs can easily review our test cases and write reports based on their execution status, and (2) no more attempting to sync TFS Features to Quality Center Requirements…the horror! I totally hated that!

It turns out, Microsoft Test Manager 2010, which sits on Microsoft TFS, took away many of the above TFS benefits and added as much overhead as Quality Center.

Stuff We Didn’t Like About Microsoft Test Manager 2010:

  • Before one can write tests, one must set up a Test Suite. Test Suites do not update when the Feature set changes. Thus, one must manually keep one’s Test Suite synched with one’s iteration. To further frustrate, without customization, Test Manager does not let one write test cases for Task work items.
  • Test cases in Test Manager follow a different workflow than those of TFS. The result is that nobody on your team can see which of your tests pass or fail unless they open Test Manager, which they probably don’t have a license for (unless they are testers). The reasoning behind this is probably that Test Manager’s test cases can have test case run execution history (e.g., TestA could be passed in one build and failed in another build at the same time). Test case run execution history is actually cool. In fact, it was one of the biggest motivators for us to try Test Manager. However, we were hoping Test Manager would trickle the results down to TFS so the whole team could benefit.
  • One of the most annoying parts of our Test Manager trial may have been its usability and performance. The screens frequently hung, causing some testers to force quit and re-launch. The navigation was also awkward. It took too many clicks to get anything done.
  • To update the execution status of Test Manager tests, one must go through the ridiculous test case executor. This is the thing that shows you each test case step and asks you for a pass/fail result. I can’t imagine anyone actually testing like this. Quality Center had something similar but provided an alternate method of updating execution status (i.e., one could bulk update via a grid).
  • Our other gripe about updating the execution status of Test Manager tests is that the test case “summary” does not show. Most testers like to write their test cases as fragments, using the test case work item’s free-form summary tab. The summary tab is preferred over the grid, which forces tests into individual steps with expected results. The big joke is, if you write your tests in the summary tab, it is not possible to see them while running the silly test case executor. So you are presented with a blank test step and asked if it passes or fails.

Stuff We Think We Like About Microsoft Test Manager 2010:

There are two things some project teams are planning on using Test Manager for in the near future:
  • Calling our CodedUI test methods and passing in parameters. According to the idealistic demo I saw at the Stareast Microsoft booth, this allows manual testers to write/execute automated tests without coding.
  • Using test case run execution for regression testing.

Conclusion:

I guess TFS's simple work item model fits our needs for flexible lightweight test documentation with little administrative overhead. Maybe someone can convince me otherwise.

Many testers have chosen to make their jobs stressful by taking on more responsibilities than they should, obscuring their skills with those of others on their teams. Choosing to make your job less stressful will not only help you enjoy testing, it will also allow you to focus on testing, and improve your standing as a test leader. Here are some things I’ve learned over the years…

  • When people ask me “Did you QA Certify this for production?”, I remind them the question of when to ship is a business decision, but I can tell them much of what they need to know (e.g., how it currently works under certain conditions, what bugs exist) to make that business decision.
  • When I hear users complain about the product not working for them due to the way it was designed, I feel empathy for the users….then I remind myself that I never tell BAs/devs how to build the product or what it should do.
  • When I’m faced with really scary technical things to test, I turn to my team. I only have to look stupid to the first person I talk to, because by the time I get to the second person, at least I have what I learned from the first. As I continue to share my test ideas, they gradually change from lame to sophisticated. Soon, I realize everybody else on my team was just as confused as I was.
  • Crunch time. I expect it. I chillax at the beginning of an iteration and work harder/longer towards the end. I maintain my work/life balance by keeping my personal calendar free right before and after a production release. Working late with other team members is often just as fun as spending a quiet evening at home with my wife (don’t worry, she doesn’t read my blog).
  • Too much to test! Too little time! This one still stresses me out on occasion. But when I’m thinking rationally, I pose the question to my BAs, “I have 2 days left, would you prefer I test these new features or focus on regression testing?”. It trains them to respect my schedule and understand that it is finite. It also shows that I respect their business sense and value their opinion.
  • Your product went live and the quality sucks. Okay, you can feel somewhat guilty...along with your devs and BAs. But remember, you didn’t code or design it. Quality can only be added by the programmers (e.g., if you have no code, you have no quality.). If it sucks now, just think about how much it would have sucked before those 471 bugs you caught!
What things do you do to make testing less stressful and maintain your sanity?

"QA"

It’s like there aren’t enough testing-related terms out there so people have to just use the word “QA” to talk about anything in the testing domain.

  • “Ask QA to test this.” (It’s a group of people)
  • “QA is down again!” (It’s a test environment)
  • “We’ll need someone to QA this before it goes out.” (It’s an official action)
  • “Is it in QA?” (You’re either asking if someone is testing it or you’re asking if code resides in a specific environment)
...sigh

By speaking properly ourselves, perhaps we can change this.

  • “Ask the testers to test this.”
  • “Environment [insert an environment name, don't name it "QA"] is down again!”
  • “We’ll need someone to test this before it goes out.”
  • "Is it being tested by the testers?"

If you read my previous post, you know I help test a product on iteration 80. This product has few automated regression tests (we are working to change that). Currently, my test team faces regression testing armed with nothing more than their brains, their hands, and a big ticking clock.

Each 4 week iteration provides us with a little less than a week to spend regression testing. If you can relate to my situation, you may be interested in two painless practices we have adopted to better handle our ever increasing challenge of regression testing.

  1. Group Regression Test Planning – We get the devs, BAs, and testers into a room and spend an hour brainstorming, to determine the top priority regression test areas of focus. We scan through the iteration's changes and draw on each other’s knowledge to create a draft of what will become the priority “1” tests or areas of focus. This occurs on a shared MS Excel regression test list spreadsheet. The prior iteration’s regression tests drop in priority...and so on.

  2. Group Regression Test Sessions – We fill a large bowl with chocolate and invite the BAs and devs to join the testers in our classroom for two 3-hour sessions of group regression testing. All participants track their progress on the shared regression test list and we tackle it as a team. The team approach also occasionally shakes out multi-user-scenario-bugs, since our product is heavily dependent on user locking.

Our approach is not to complete a set amount of regression tests before we ship. Instead, we complete as many tests as we can, within the time provided. The prioritized test list makes this possible and shields us from things outside of our control.
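
For what it’s worth, the “as many tests as time allows” idea is simple enough to sketch in a few lines of Python. The test areas and estimates below are made up for illustration, but this is roughly how a priority-sorted list gets walked until the session clock runs out.

    # Illustrative only: plan a regression session from a priority-sorted list
    # within a fixed time budget. Test areas and estimates are made up.
    from datetime import timedelta

    regression_list = [
        # (priority, test area, rough estimate)
        (1, "Schedule grid locking",        timedelta(minutes=45)),
        (1, "This iteration's new filters", timedelta(minutes=30)),
        (2, "Prior iteration's reports",    timedelta(minutes=60)),
        (3, "Rarely touched admin module",  timedelta(minutes=90)),
    ]

    def plan_session(tests, budget):
        """Return the areas we commit to, highest priority first, within budget."""
        planned, remaining = [], budget
        for _priority, area, estimate in sorted(tests, key=lambda t: t[0]):
            if estimate <= remaining:
                planned.append(area)
                remaining -= estimate
        return planned

    print(plan_session(regression_list, budget=timedelta(hours=3)))
    # ['Schedule grid locking', "This iteration's new filters", "Prior iteration's reports"]

Whatever doesn’t fit in the session simply waits for the next one; nothing outside our control can turn that into a missed commitment.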

Is anyone else doing regression testing with mostly manual testing? How do you pull it off?

Testing updates to old software is much more difficult than testing updates to new software. The older it gets, the more cobwebs form in lonely corners, which drop out of test rotation and concern.

One of my software projects is on its 5th year of steady updates (iteration 80). In software years, this product is 80-years-old. It makes me think of my house, which happens to turn 80-years-old this year.

Seemingly easy upgrades to my old house often result in way more changes than initially planned. When my wife talked me into replacing the cracked pink bathroom tiles, I soon uncovered 6”-deep concrete poured between joists that had all but rotted through (due to 79 years of direct contact with concrete).


In the 1920's, I’m sure floating concrete floors were an excellent choice for tiled floors. And fortunately, the old growth pine joists were strong enough to hold the weight…but not for 80 years.

One thing leads to another and before we knew it, we were rebuilding our entire bathroom, including the floor.


Then the changes start to expand way out of scope. The walls will not be compatible with the new floor, so those will have to be replaced (but maybe we'll just patch and replace the worst parts of the outside wall).

Then you realize older technologies are not compatible with newer. You mean Home Depot doesn’t sell Quest plumbing fittings anymore? You mean I have to replace my polybutylene pipes with copper just to move my toilet?

Meanwhile, back at work, my users were asking for the seemingly easy upgrade to allow filtering in the product’s massive control-center-like grid. Sure, no problem…oh, hold on…we can do it but the current grid won’t support it so we’ll have to rebuild the entire module, a testing nightmare (depending on your perspective).

So be careful with work estimates on seemingly simple changes to older products. And try to remember all the often ignored, dependent systems that are affected.

Yesterday a manager told me they were troubled by a showstopper bug that was found after my team had finished testing a particular build.

(My team uses the “Time’s Up!” heuristic to determine when we’re done testing.)

Then the manager asked me what I could do to prevent this from happening again. I thought long and hard on this question and just when I was about to concede defeat, the ultimate solution came to me.

It may take several iterations, but if my team can build a time machine, we can guarantee no showstopper bugs will escape. I think it will really shorten our test cycles too. We’ll just throw stuff to the users, see what problems they encounter, then travel back in time and log it as a Showstopper...or maybe we'll call it a "Show Stopped". Better yet, we’ll just explain it to dev before they write it.

I just have to work out the predestination paradox, so I don’t get stuck in a causal loop, eternally traveling back in time to prevent the same bug from occurring.


My favorite recent time travel movies:

Your application under test (AUT) probably interfaces with external systems. Fact: these external systems will go down at some point while a user is attempting to use your AUT.

Here is the obvious test:

  1. Take ExternalSystemA down. If this is outside your control, simulate ExternalSystemA’s outage by changing where your AUT points for ExternalSystemA.
  2. Trigger whatever user operations cause your AUT to interface with ExternalSystemA.
Expected Results: The user gets a friendly message indicating some functionality is blocked at this time. The support team gets an error alert indicating ExternalSystemA is not responding.
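
If you ever want to automate a check like this, here is a minimal sketch of the idea in Python. The client, config, and alert hook are hypothetical stand-ins, not our actual product code; the point is simply that an unreachable endpoint should produce a friendly message plus a support alert rather than a crash.

    # Hypothetical sketch: ExternalSystemA is "down" because the AUT has been
    # pointed at a port nothing listens on. The client, config, and alert hook
    # below are illustrative stand-ins, not real product code.
    import socket

    AUT_CONFIG = {"external_system_a": ("localhost", 1)}  # nothing listens on port 1
    support_alerts = []

    def alert_support(message):
        support_alerts.append(message)  # stand-in for paging the support team

    def call_external_system_a(timeout=2.0):
        """Stand-in for the AUT's integration point with ExternalSystemA."""
        try:
            with socket.create_connection(AUT_CONFIG["external_system_a"], timeout=timeout):
                return {"status": "ok"}
        except OSError:  # refused, unreachable, or timed out
            alert_support("ExternalSystemA is not responding")
            return {"status": "blocked",
                    "user_message": "This feature is temporarily unavailable."}

    def test_external_system_a_down():
        result = call_external_system_a()
        assert result["status"] == "blocked"                        # graceful degradation
        assert "temporarily unavailable" in result["user_message"]  # friendly user message
        assert any("ExternalSystemA" in a for a in support_alerts)  # support got alerted

    test_external_system_a_down()

In real life the “client” is your AUT and the assertions are whatever your UI and alerting actually show; the simulation trick (repointing the config at a dead endpoint) is the part worth stealing.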



We executed the above test for 6 or 7 external systems and got our AUT robust enough to block only the minimum functionality and provide good communication to users and the support team. However, just when we were getting cocky, we encountered a slight variation on the first test that crippled our AUT. Here is the test we missed for each external system.

  1. Put ExternalSystemA into a state where it is up and running but cannot respond to your AUT within the amount of time your AUT is willing to wait. Note: We were able to simulate this by taking down ExternalSystemB, which gets called by ExternalSystemA.
  2. Trigger whatever user operations cause your AUT to interface with ExternalSystemA.
Expected Results: The user gets a friendly message indicating some functionality is blocked at this time. The support team gets an error alert indicating ExternalSystemB is not responding.
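
The “up but too slow” case can be simulated with a similar hedged sketch: stand up a listener that accepts connections but never replies within the AUT’s timeout. Again, every name here is a hypothetical stand-in, not our actual system.

    # Hypothetical sketch: ExternalSystemA is up (it accepts connections) but never
    # answers within the AUT's timeout, as if ExternalSystemB behind it were down.
    import socket
    import threading

    def start_silent_server():
        """Accepts connections but never sends a byte back (a hung external system)."""
        server = socket.socket()
        server.bind(("localhost", 0))   # let the OS pick a free port
        server.listen()
        held = []

        def hold_connections():
            while True:
                conn, _ = server.accept()
                held.append(conn)       # keep the connection open, never respond

        threading.Thread(target=hold_connections, daemon=True).start()
        return server.getsockname()[1]  # the port our "slow" system listens on

    def call_external_system_a(port, timeout=1.0):
        """Stand-in for the AUT's call; it must give up after `timeout` seconds."""
        try:
            with socket.create_connection(("localhost", port), timeout=timeout) as conn:
                conn.sendall(b"PING")
                conn.recv(1024)         # no reply is coming; this should hit the timeout
                return {"status": "ok"}
        except socket.timeout:
            return {"status": "blocked",
                    "alert": "ExternalSystemA did not respond in time"}

    def test_external_system_a_too_slow():
        port = start_silent_server()
        result = call_external_system_a(port)
        assert result["status"] == "blocked"  # degraded gracefully instead of hanging

    test_external_system_a_too_slow()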

Question: What metric or easily understood information can my test team provide users, to show our contribution to the software we release?

I just got back from vacation and am looking at a beautiful pie chart that shows the following per iteration:

  • # of features delivered
  • # of bugs found in test vs. prod
  • # of bugs fixed
  • # of test cases executed
After a series of buggy production releases, my team (or at least the BAs) has decided to provide users with colorful charts depicting how hard we’ve been working each iteration. My main gripe is providing my BAs with a # representing executed test cases.

First, I feel uncomfortable measuring tester value based on test case count, for obvious reasons.

Second, the pie chart looks like all we do is test. One slice lists 400 tests. Another lists 13 features...strange juxtaposition.

Third, I’m not even sure how to provide said count. I certainly don’t encourage my test team to exhaustively document their manual test cases, nor do I care how many artifacts they save distinct tests within. Do I include 900+ automated UI test executions? Do I include thousands more unit test executions? Does the final # speak to users about quality? Does it represent how effective testers are? Not to me. Maybe it does to users...

PR is important, especially when your reputation takes a dive. I, too, want to show the users how hard my QA team works. I want to show it in the easiest possible way. I could provide a long list of tests, but they don't want to read that. What am I missing? What metric or easily understood information can my test team provide users, to show our contribution to the software we release?

We have a new mantra on my team.

“Don’t fix bugs unless users want them fixed.”

When testers found bugs that were already in production, we used to just fix them. Soon we realized, fixing them may do more harm than good. For example, we have a time control in one of our apps that accepts input in a format like HH:MM:SS. We noticed inconsistent behavior across instances of these controls in the app; things like some control instances would force leading zero while others would not, some would allow double-clicking-to-select time units while others would not.

We logged a bug to standardize these time controls throughout our app. When the bits went to prod, the users screamed bloody murder. They hated the change and complained about disruptions. It turns out the users didn’t even know the inconsistency existed in the first place. As devs, BAs, and testers, we’re in and out of said time controls all over the app. But in production, users tend to only work in one of about 10 different modules based on their jobs. They couldn’t care less how the time control worked in neighboring modules.

“Don’t fix bugs unless users want them fixed.”

This mantra also applies to larger problems testers find. A room full of devs and testers can pat themselves on the back, thinking users will love them for certain bug fixes, only to find the users had adjusted to the broken code and want it back. And the danger increases the longer your app has been in production.

What’s your definition of a bug? My favorite is one I learned at Michael Bolton’s Rapid Software Testing class.

A bug is something that bugs someone who matters.

Keep that in mind the next time your team starts fixing bugs the users haven’t complained about.

“We don’t need no stinkin’ sunscreen lotion.”

“Cigarettes pose no health risks.”

“Drill, baby, drill!”

“The magnetic switch on my tablesaw is fool-proof.”

“Oh! I just came up with a killer test. It’s totally going to fail! I know the dev is not coding for this scenario. In fact, the whole team will be impressed that I even came up with this brilliant test. Dude, I can’t wait to see the look on the developer’s face when he finishes his code and I log this bug. He’s totally going to have to refactor everything. I’m such a sneaky tester, he he he…”




This post was inspired by a comment made by my former QA Manager and mentor, Alex Kell, during an excellent agile testing presentation he recently gave.

I love all the insightful responses to my To Bug Or Not To Bug post. Contrary to the voting results, the comments indicated most testers would log the bug.

“When in doubt, log it”
“Always err on the side of logging it”
“Log everything”
“The tester needs to log the bug so you have a record of the issue at the very least”

These comments almost convinced me to change my own opinion. If I were the tester, I would not have logged said bug. I have a model in my head of the type of user that uses my AUTs. My model user knows the difference between a double-click and a triple-click. And if they get it wrong, they are humbled enough not to blame the software.

But the specifics on this bug are not my main thought here.

Within the last 6 months, I’ve started to disagree with some of the above comments; comments I used to make, myself. As testers, it’s not up to us to decide which bugs to fix. I agree. But since we have the power to log any bug we choose, we need to make sure we don't abuse this power.

  • Bugs create overhead. They have to be logged properly with repro steps, read and understood in triage meetings, tracked, and assigned destinations. Bugs linger in developer bug queues, sometimes with negative connotations. All these things nickel and dime the team’s available work time.
  • Your reputation as a tester is partly determined by the kinds of bugs you log.
That being said, I’m giving this tester the benefit of the doubt. She has a very good track record of predicting user behavior. Her actual decision was to log the bug and offer to reject it if users don’t encounter the issue during UAT. Not a bad compromise, I guess. Another approach would have been to not log the bug unless the users notice it. Which is less work? The former has the advantage of a little CYA for the tester...which is unfortunately what we desire sometimes.

Yesterday I watched an interesting discussion between a really good tester and a really good developer. I don’t want to spoil the fun by adding my opinion…yet. So what do you think? Should the tester log a new bug? Please use the voting control below this post.

User Story: As a scheduler user, I want to edit a field with the fewest steps possible so I can quickly perform frequent editing tasks on the schedule.

Implementation: If users double-click on a target, they get a window with an editable field and a flashing cursor in the field.

Tester:
I was testing this, and it seems to work if you do a distinct double click. It’s really easy to triple click, though, and then the cursor isn’t in the field even though [the window] is in edit mode. My feeling is that the users will see this as ‘it works sometimes and not others’. Is there any way to block that third click from happening if we get a double click from the users in that field?

Dev:
Not really, you would have to intercept windows events, and in that case you’re just masking and promoting users to continue to practice bad habits. The [problem] in this case would be especially bad, because they would double click in the field, and it wouldn’t even enter into edit mode. If they accidentally triple click, they can just click in the field and continue, but at least the control would be in edit mode.

Tester:
I just have a feeling we’re going to have complaints on it. I hadn’t actually realized I’d triple clicked several times, it just kept popping up in edit mode, sometimes with a cursor in the field and sometimes without. I’d thought it was only partially fixed until I realized that’s what I was doing.

Dev:
I see what you’re saying and I guess it’s fine to log a bug, but what is the threshold, is a triple click based on your average speed of clicking, a slow user’s speed of clicking, should I wait 100 milliseconds, 300? It would change from user to user. Windows clearly defines a double click event based on system settings that a user can change based on their system and their own speed of clicking. If we start inventing a triple click behavior, then we take over functions designed to be handled by the operating system that could easily introduce many other bugs. Detecting such an event requires a lot of thought and code, and would at best be buggy and worse introduce even more inconsistent behavior. Just my opinion on it though.

Should the tester log the bug?


I had the pleasure of eating lunch with Dorothy Graham at Stareast. Dorothy is the coauthor of “Software Test Automation”, which has been a well respected book on the subject for the last 10 years. A colleague recently referred me to a great article, “That’s No Reason to Automate!”, coauthored by Dorothy in the current issue of Better Software.

In the article, Dorothy debunks many popular objectives used for test automation and suggests more reasonable versions of each. This article is helping me wrap my brain around my own test automation objectives (I just hired a test automator) but it was also just great to hear a recognized test automation expert empower manual testers so much.

I'll paraphrase/quote some sentences that caught my attention and some of the (needs improvement) test automation objectives they contradict.

Objective: Automation should find more bugs.

  • “Good testing is not found in the number of tests run, but in the value of the tests that are run.”
  • The factor that determines if more bugs will be found is the quality of the tests, not the quantity. Per Dorothy, “It is the testing that finds bugs – not the automation”. The trick is to free up the tester’s time so they can find more bugs. This may be achieved by using automation to execute the mundane tests (that probably won’t find more bugs).

Objective: Automation should reduce testing staff.
  • More staff are typically needed to incorporate test automation. People with test script development skills will need to be added, in addition to people with testing skills.
  • Automation supports testing activities but does not replace them. Test tools cannot make intelligent decisions about which tests to run and when, nor can they analyze results and investigate problems.

Objective: Automation should reduce testing time.
  • “The main thing that causes increased testing time is the quality of the software – the number of bugs that are already there…the quality of the software is the responsibility of the developers, not the testers or the test automators”

Objective: Automation should allow us to run more tests to get more coverage.
  • A count of the number of automated tests is a useless way of gauging the contribution of automation to testing. If the test team ends up with a set of tests that are hardly ever run by the testers, that is not the fault of the test automators. That is the fault of the testers for choosing the wrong tests to automate.

Objective: We should automate X% of our tests.
  • Automating 2% of your most important tests could be better than automating 50% of your tests that don’t provide value.

Note: The article and book were co-authored by both Dorothy Graham and Mark Fewster. Although I did not have lunch with Mark, I'm sure he is a great guy too!

In today's retrospective, a developer complained that he deployed a bug fix to testers and didn't hear any feedback until 5 days later, at which point the bug was reopened.

I'm embarrassed by the above occurrence because I'm a firm believer in providing feedback on new bits as quickly as possible. Let's say you have 5 equally complex features (user stories, whatever) to test by the end of the week. All 5 are ready for testing. One approach (Approach #1) would be to spend about a day on each feature. If you manage to find bugs in some of these features, it's possible there won't be enough time to get them fixed and retested. The problem gets worse if these are blocking bugs.

Approach #2 works under the assumption that blocking bugs will usually get discovered early and easily by executing your first tests (e.g., your happy path tests). If you do high-level testing of all 5 features on day one, you can report the bugs sooner. While said bugs are being fixed, you can dig deeper in the other areas. If it ain't broke, you're not trying hard enough, right?

Maybe by day 3 the blocking bugs are fixed and you can interrogate those areas again. And perhaps you can follow your tester skills for determining how to spend your remaining time. Think about how often you've cracked open some new dev bits that have been sitting there waiting for days, only to find they blow up during your very first test. Some flavor of Approach #2 will help.
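
If it helps to see it laid out, here is a toy sketch of the two schedules; the feature names and day counts are made up purely for illustration.

    # Toy illustration only; feature names and day counts are made up.
    features = ["Feature A", "Feature B", "Feature C", "Feature D", "Feature E"]

    # Approach #1: depth-first -- roughly a day of deep testing per feature.
    approach_1 = {f"Day {i + 1}": [(f, "deep testing")] for i, f in enumerate(features)}

    # Approach #2: breadth-first -- a happy-path pass over everything on day one,
    # so blocking bugs get reported while there is still time to fix and retest them.
    approach_2 = {
        "Day 1":    [(f, "happy-path smoke test") for f in features],
        "Days 2-3": [(f, "dig deeper while fixes are in flight") for f in features],
        "Days 4-5": [("fixed areas", "retest blocking bugs"), ("riskiest areas", "keep digging")],
    }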

Thoughts? Arguments?


One of the bugs I was verifying included a very helpful dev comment in the bug report. The dev wrote:

“I came across this one and thought I'd just knock it out. No big deal, easy to do, easier to test.”

If you saw this dev comment in a bug report, what would you do? I’ll tell you what I did… sat down at my desk, rubbed my hands together like a fly, then attacked that fix like the world was watching. After a few tests I found the oversight I was expecting.

“thought I’d just knock it out”
“no big deal”
“easy to do”

….maybe that’s just a developer’s way of saying “Be careful, I did no testing whatsoever on this”.

I think all the project teams in my department are finally using Task Boards. We’re split between an Atlanta office and New York office so we still keep our electronic version as the master. We use Microsoft Team Foundation Server (TFS) to track all work items (Features, Tasks, Bugs, and Tests). We use Telerik Work Item Manager 2010 to print our work items into cute little cards with descriptions to move around on the task boards.

Each team came up with their own task board location and layout (e.g., column labels). Some teams use bug or test case task board items while others stick with just Features/User Stories. The variety of approaches is necessary because some teams work on smaller-scale low-risk projects while other teams work on highly complex SOX compliant applications that require more rigor.

THE GOOD
Here are a couple good ones.




...making your velocity public is classy.


THE BAD
This one is way too confusing. I think that white board would be much better used as a space to diagram ideas. I just see chunks of stuff. I have no idea what is being worked on.


THE UGLY
This one is just ugly. This was a technical debt iteration but after the initial card print-outs, everybody got lazy and just drew work item numbers with a marker. The testers wasted about 30 minutes per day trying to determine what each number represented so they could move them to the correct column as testing completed.



In Summary:

Some Disadvantages:
  • We often get TFS and its respective task board out of sync. This means the task board can mislead.
  • Adding/Moving work items on the task board is extra administrative work. Sometimes we get lazy and just write work item #’s on the white board. We quickly forget what these represent. That is one advantage to cork boards (you can’t write on cork boards).


Some Advantages:

  • Management types don’t have to ask as many questions, they can look at the task boards instead.
  • When stakeholders ask for extra stuff, we can point to the task board and say “no problem, just tell us what we should remove from the iteration”.
  • The team gets an extra-simple view of how the work is proceeding.
  • Instead of staring at each other’s ugly mugs, we can look at the task board during scrum meetings.
  • It is easier to get excited about team accomplishments when your work is public.

The most challenging presentation I saw at Stareast was by Google Senior Test Engineer, Goranka Bjedov. She makes the case that the world is heading toward developing software without testing for quality and that this practice may not be a bad thing. Scary but true!

First, Goranka pigeon-holed testing into two categories: productivity and quality. Her definitions (per my notes) are as follows:

Productivity Testing – Making sure programmers don’t break code (e.g., unit tests). Testing things consumed by machines. Anything consumed by machines is easy to automate. These tests are cheap, fast, and well-defined. The problems failed tests expose do not require deep analysis.

Quality Testing – Testing things consumed by humans. Anything consumed by humans is not easy to automate and is therefore difficult to test. Expensive. Tests become more flaky as the system becomes more complex. The right tests are not clear. Failed tests require deep analysis. These tests take longer.

With the promise of quicker software delivery, productivity testing has become more important than quality testing. Wake up: the world is already adapting in several ways.

For example, at Google, they know hardware and infrastructure will always fail. Instead of wasting time with exhaustive tests, their solution is to manage risk (e.g., build in seamless failovers and backups) and shield the user from the failures.

Goranka also countered that in cases where poor quality is seemingly not an option (e.g., medical software), users have already adapted by not relying on it. She claims users in hospitals, for example, know not to trust someone’s life to a piece of software. Instead, they monitor the patient as a human and understand that software is fallible.

These are excellent points, IMO, and I would have been satisfied contemplating a future where my job no longer existed...but hold on!

Goranka asked us to do a little exercise. She asked us to determine the rule used to generate these three sequences by writing five additional sequences of our own:

-25, -5, 15, 35, …
2, 4, 6, 8, …
0, 3, 6, 9, …

I don't want to give away her rule, but you can still try it on your own.

After surveying the audience, she pointed out that developers tend to write confirmatory tests more than testers, who tend to write more negative tests. Thus, perhaps testers do play an important role. She also questioned how much productivity tests actually tell us about the system as a whole. Her answer? …they tell us nothing.

In the end, she left us with this thought…

If you think (non-programmer) testers are important, you better start doing something about it.

Does your test manager ever test? They should.

I recently got promoted to test manager. About three months in, I started to get used to delegating most of the testing tasks. I have to admit, it was nice to focus on the big picture for a while. I began sounding like a manager, more interested in status than in test value.

When one of my testers took a vacation to India and the other two got sick, I had to jump in where my testers left off and complete a variety of testing activities. It was like getting smacked in the face.

  • In some places where I thought the tester was dragging, I discovered legitimate test impediments.
  • In some places where I had believed the testers’ excuses, I found tester misunderstandings or poorly designed tests.
  • In all areas, I experienced the stress of uncertainty, the constant decision making, and the thrill of finding important problems.

The best way to really grok a job is to perform it yourself. Experiencing the act of testing is different than observing it or hearing summaries of it. So...

  • Testers, the next time you take a personal day off work, ask your manager to be your backup and actually do some of your work while you’re out.
  • Managers, offer to jump in and be the backup tester.

I caught two Stareast James Bach talks. “The Myths of Rigor” dealt with when to use rigor and when not to. The main idea (as I understood it) was to use more rigor upstream and less downstream. For example, if you're coaching a new tester, you may want to provide them with checklists and lots of details, then encourage them to begin thinking without following said checklists once they grok the concept. Experts are bad at explaining what they know and learners tend to say they understand when they don't; the checklists and details may help, but only at the beginning.

This clicked for me when James asked, “Have you ever written a process document and then not followed it?”. Absolutely! I'm smart enough to understand when to break the rules. How about a test case? Of course! Per James...

  • Rigor at the outcome of a test is optimized for a static, well-known world.
  • Rigor at the planning of a test helps you adapt to a changing world.

Writing test cases is valuable as long as we don’t become victims of what James calls “Pathetic Compliance”; following the rules just so we don’t get yelled at, even though we don’t understand the rules. The value in writing test cases is:

  • they are excellent for a quick review before test sessions to get your head straight
  • they are a good tool for discussing tests and understanding each other
  • creating them helps learning

So write test cases but don’t force yourself to use them.

BTW - James Bach is working on a book about how to coach software testers.


The second of James Bach’s talks was a keynote, “The Buccaneer Tester: Winning Your Reputation”. This seemingly dull topic is actually important. The main takeaway for me was:

Making yourself unremarkable does not keep your job safe.

Per James, being good at testing and getting credit for your work are both optional. Sadly, I’ve worked with lots of unremarkable testers. His advice, if you choose to become remarkable:

  • Determine what mix of tester skills you have that nobody else has.
  • Use the above to come up with some kind of vision about testing. It doesn’t even need to be a good vision; bad visions can also give you a reputation.
  • Take a stand on an issue.
  • Participate in public (volunteer) testing.
  • Write, teach, speak, study, and experience more types of testing.

After a long day at one of the best software testing conferences I’ve attended, I opened the door to my hotel room and found it full of Stareast keynote speakers, track presenters, and five author/editors of testing books I had recently been reading. It was like some creepy tester fantasy. About half of my favorite tester thinkers had gathered into my hotel room and were conducting lightning talks in front of a flip chart and flat screen TV, some 15 feet from the Queen-sized Murphy bed I sleep in.

This is one of the reasons I love being a software tester. After a bit of networking during my first day at Stareast, I found myself invited to dinner with the Stareast Rebel Alliance, a group of testers who are becoming active in the speaker circuit, blogosphere, and Twitter, attempting to improve the craft of testing. They answered testing questions, gave me an Alliance t-shirt, tried to buy me dinner, and made me feel like family. Special thanks to Alex Kell for introducing me to this crowd.

When Matthew Heusser mentioned he needed a place to host a tester gathering the following night, I suggested my hotel room. After all, the Rosen Shingle Creek had overbooked and given me the parlor suite (maximum occupancy 78). True to their plan, this ambitious group of testers met after the conference and gave lightning talks, provided support and candor, challenged each other with testing games, ate, drank, and were merry. Jon Bach and Michael Bolton were each at different tables, using dice games and puzzles to teach testers better thinking. Adam Goucher, Lanette Creamer, and Matthew Heusser practiced newish lightning talks on tester roles. Shmuel Gershon demonstrated a new session-based testing tool he is writing (to be less disruptive to the tester’s concentration). Justin Hunter gracefully demoed his Hexawise test case generation tool. Lisa Crispin and Janet Gregory, co-authors of Agile Testing, were there asking questions and providing support. Tim Riley happily discussed various Mozilla testing processes. Agile testing experts Dawn Cannan and Elisabeth Hendrickson also showed up to defend and explain their ideas.

There were many other testers who came and went that night, all were polite, interesting, modest, and fun to hang out with. The last of them left around 2:30 AM some time after winding down and watching a few choice TED talks. I went to sleep with the thick smell of carry-out Indian Food next to my bed.

The next day I woke up and walked barefoot across my 78 occupancy room. I stepped on something. About 5 hours earlier, Michael Bolton was shoving Smartfood popcorn into his mouth and spilling it on the floor. He picked some up saying, “I wouldn’t want you to get Smartfood Foot tomorrow”. I guess he missed a piece.

Lanette Creamer has an excellent overview of the conference on her testyredhead blog. I'll list my personal take-aways in future posts.

Our most complex AUT has no shortage of production bugs. They’re discovered almost daily and our support team forwards them to the rest of the team. These bugs get reported with little factual detail and it’s up to a BA, Dev, or Tester to figure out the repro steps.

Our informal process is that the first person to reply saying “I’m on it” owns the issue and gets to be the hero who figures it out. Determining the elusive repro steps combines many skills: interviewing oracles, listening to users, acting like users, pulling audit trails from the DB using SQL, examining artifacts like user screen captures and error log files, and tracking down user stories or requirements.

Last week one of my testers stopped by my cube, grinning ear-to-ear, and said “I’m so excited! I just figured out the repro steps!” (to a really really challenging prod bug). She sent her repro steps out to the team, they were correct, and within minutes she was thanked and declared a rock star by various team members.

Determining the exact minimum repro steps, is there anything more exciting for a tester?

…well, hopefully. But cracking repro steps is pretty darn exciting!

The above question was asked in response to my Do Developers Make Good Testers? post. Since I am in the process of hiring another tester I thought I would take a stab at it. These are qualities for a fairly generic software testing position.

A good software tester…

  • Constantly asks, “What is the best test I can execute right now?”
  • Can log unambiguous bugs with clear repro steps that make the main problem obvious with few words.
  • Is not distracted by their understanding of developer decisions. Even when the tester understands the technology constraints motivating a dev’s solution, the tester’s mission is never to defend the AUT (see my post, What We Can Learn From Dumb Testers). It is to communicate how the AUT currently works, in areas that matter right now.
  • Has the capacity to understand the stakeholders’ business.
  • Is technical enough to see how one component of a system affects the entire system.
  • Has keen problem solving skills. They can control multiple variables until locating the problematic variable. They have just enough persistence without having too much. They know when to quit and move on.
  • Is an expert communicator and listener who demands complete understanding.
  • Is humble enough to ask all questions (even stupid ones) but cynical enough to seek answers from multiple sources (trust but verify).
  • Is organized enough to follow through with tasks, while at the same time noting potential future tasks.
  • Is capable of isolating observed software behavior, within an ocean of dependencies and communicating those behaviors to the team. They can look at components of an incomplete system and determine actual pros and cons by imagining the complete system.
  • Respects fellow developers and BAs. Understands the harder the tester works, the better the developers/BAs look.
  • Is enthusiastic when finding pre-production bugs but depressed when users find post-production bugs.
  • Can handle stressful deadlines, make quick decisions, and give up preferred processes for those that ultimately are in the stakeholders' best interest.
  • Is an active participant in the software tester community, reads testing books/blogs, and participates in local test groups.
  • Has a good work ethic: can meet deadlines or communicate that they will be missed, works more than 40-hour weeks when necessary, is organized and professional, cares about the team’s success, is honest, follows mandatory work procedures, is SOX compliant, etc.
What have I missed?

I have an open headcount for the highest level tester position at my company. We are hoping to get someone with test automation skills. Most of the candidates are making me yawn. Few are enthusiastic enough to have read anything about their own craft, and fewer are interested in deep discussions.

One resume that grabbed my interest was from a career developer who claims to want to try her hand at being a tester. She is mainly interested in writing test automation code...but she never has. Prior to an interview, my devs and I were enthusiastic. Afterwards, we realized this candidate had no testing experience (other than experimenting with unit tests), and most of us feared our team could end up with a good developer but a weak test automation stack, instead of a good tester who could provide instant gratification via sapient UI tests.

A dev wanting to cross over... How rare of a thing is this? Am I missing a great opportunity by not hiring the dev? Or is this a dev that just couldn’t cut it as an application developer…hoping to cut it as a test automation developer?

I can’t help but wonder, is it easier to teach a developer how to be a good test automator or to teach a good manual tester how to be a good test automator?

A bunch of us participated in three days of “Agile Boot Camp”. Speaker/Agile Coach/President of DavisBase, Steve Davis came and attempted to enlighten our department by teaching us what Agile development is all about. My department has been practicing its own flavor of Agile development for about four years but many of us have felt it’s time to adopt more Agile ideals. For starters, QA is still one iteration behind dev.

For me, the class was mostly info I had heard before. I hoped to collaborate with others and help them grasp new ideas. I was frustrated about 50% of the time as I watched many of my higher level team members look up from their Blackberrys and iPhones throughout the class to say, “Right, Steve, but that’s not the way we do things here”. Sigh.

Nevertheless, here are some of the tidbits and ideas I wrote on the back of my tent card. Many of these are related to my new role as QA Manager.

  • Make my testers feel like they are part of the solution. Give them missions and get out of the way. Don’t assign specific tester tasks. Let them work for their team. I will say “I did not hire you to please me. I hired you to please your team”.
  • If the above is true, what is my role as a QA Manager? To mentor, teach, and figure out how to make my test team the best team there is.
  • Assign each of my testers to a permanent dev team instead of bouncing them between teams. Co-locate each tester with their dev team. The longer they are a team, the more likely they will develop a cadence (natural rhythm). The cadence will allow them to stop worrying about process and concentrate on what they do best…testing!
  • Get the hardest tests done first.
  • Write the high level tests during the Feature walkthrough.
  • Writing user stories. Don’t get too far ahead on capturing detail. If you do, you are approaching the waterfall methodology.
  • If you are not going to have enough time to test, bring that up in your daily scrum ASAP so others can help.
  • Daily scrum. Stop looking at the lead or scrum master as you report. Look at your team members. Your report is for them.
  • Don’t wait until the end of the iteration to get anxious. Rally around the goal from day 2. Keep the energy high.
  • Eric Idea: Post test status inside bathroom.
  • Eric Idea: Testers evaluate each other. (This may not work if they don’t work together.)

I’ll report back later in the year with updates on how we’re doing.

Tester Reputation Cheats

How well do you know your testers? There are several tester stunts I have been tempted to pull (and sometimes have). The act of testing is difficult to monitor so it is easy for testers to spoof productivity. If you catch yourself doing these…don’t.

  • Whenever team members stop by your desk, make sure you have a GUI automation test running on one of your boxes. It looks really cool and gives the appearance that you are an awesome tester (they’ll never know it’s just a looping record/playback test with no verifications).
  • Your manager needs to see test cases. How about just copying the requirements into something called a test case?
  • You didn’t stumble upon that bug (yes you did). It came from a carefully planned test.
  • There is no evidence of what tests you executed because you kept track of them locally (not really). You will upload the tests to the repository when you have time (yeah, right).
  • You didn’t forget to log that bug your dev is asking about (yes you did). You are still reducing the repro steps down to the bare minimum due to some variables that may be related.
  • You didn’t forget to execute those tests (yes you did); you chose not to execute them because you worked through a risk assessment matrix that resulted in a lower priority for said tests.
  • When running performance tests using your wristwatch, record time span values out to the millisecond. It appears more accurate.
  • Does your manager review your test cases or just see how many you have written? If it’s the latter, find the copy/paste option in your test case repository and change one value (e.g., now pass in “John”, now pass in “Pat”, now pass in “Jill”, etc.). Dude, you just wrote 30 test cases! It looks great on the metrics summary page.
  • If you’re lost at the Feature Walkthrough meeting, periodically raise your hand and ask if there are any “ility” requirements (e.g., Scalability, Usability, Compatibility, etc.)…it just sounds cool to ask.
  • You don’t have a clue how this feature is supposed to work. If you wait long enough, another tester will take care of it.
Have you noticed any additional stunts testers pull?

You probably think this is just another post about how important it is to test everything before it goes to production. Nope.

Testers are too often expected to test things they don’t have a chance in hell at testing. Testing for the sake of process or perception provides little to no value and degrades the whole trade of testing.

Sometimes when people ask,
“What are the testers going to do to test it?”, I respond,
“Nothing…we’ll just rubber-stamp it.”

Devs usually laugh because they know the bug risk is ridiculously low or the tester does not have the skills to test anything beyond what the dev already tested. However, other testers, managers, or BAs react with horror.

“Rubber-stamp it? Blasphemy! You’re the tester. EVERYTHING must be tested by you before going to production!”

The term “rubber-stamping” evokes negative reactions because it brings up the mental image of a desk clerk stamping document after document without paying any attention to what is on them…like the tester marking “Verified” on the bug or feature they didn’t really do anything to test. But that’s why I like the term! I’m trying to be honest about what value the tester added…none.

Here are some examples where the rubber-stamper-tester is justified:

  • The tester has inadequate skills to perform the tests but has interviewed someone else on the team (usually the developer) who did test.
  • The test is not feasible to recreate in a non-prod environment (e.g., a complex multi-user scenario, custom PC environments, unknown repro steps)
  • The patch can only be verified in debug mode, using complex timing and coordination requiring breakpoints and other unrealistic tricks. Even devs may skip testing if regression tests pass.
  • If critical functionality is broken in prod, we may decide to release a fix without testing it. Speed becomes paramount in this scenario. We are smart...it is possible that we are smart enough to see a logic error, make the fix, take a deep breath and release to prod without the overhead of testing. After all, it can’t get any worse in prod, right?
It’s fun to say our job as testers is to be cynical, to not trust anybody. But we shouldn’t abuse that belief and become mere team bottlenecks, either.

Does one require more start-up time than the other?

My department is gearing up for a spike in development this summer. We plan to use temporary contract developers and testers.

IMO, the contractor devs do not need to know nearly as much about our systems as the contractor testers do. The devs will be able to make their updates with limited business domain or system knowledge. However, the testers will need to go way beyond understanding single modules; they will need extensive business domain and system knowledge in order to determine what integration tests to write/execute.

Some on my team have a notion that testers can come in and be valuable by running simple tests, with limited knowledge. It scares me so much I would almost consider giving up my entire summer, moving into the office, and doing all the testing myself.

I’m also wondering if contractor testers should be paired with veteran devs and vice versa. If we have contractor testers working with contractor developers, as is the plan, it sure seems like a recipe for disaster.

Have any of you experienced something similar? What advice do you have for me?


I’ve spent a good deal of time underground the last 13 years…literally. One of my favorite weekend activities is caving. New caves are discovered nearly every weekend in the northwest Georgia area and responsible cavers survey these caves, make maps, then submit the data to their state’s speleological survey library.

Cavers are very methodical when it comes to finding virgin passage. Underground ethics specify that cavers survey (with tape, compass, clinometer, and sketchbook) the new passage as they explore it. It is frowned upon to just run through a new cave without performing a proper survey on the way in. Exploring without an initial survey is known as “scooping” or “eye-raping” a cave and it is a sign of an irresponsible caver (sometimes called a “spelunker”).

The responsible caver learns everything about the new cave by surveying as they go. The survey process forces them to examine all features of the passage carefully, often discovering new leads, which otherwise would have been missed. This patient approach keeps the caver fresh with anticipation about the wonderful cave lying ahead.

The irresponsible caver, who runs down virgin passage into the dark unknown, only experiences the obvious way forward. They assume they will backtrack to check for leads. In practice, they may grow fatigued or bored with the cave and never return. They have not collected enough data to qualify the cave with the state survey. They can brag to friends about a deep pit and borehole passage. But they cannot tell other cavers to bring a 250-foot rope because the pit is 295 feet deep, or that the big formation room is a half mile in, on the northeast end of a 40-foot-wide dome room. They don’t know which leads have been checked or how likely it is that this cave drains into a nearby cave further down the mountain. They have no hard facts about the cave; only memories, which fade very quickly.

A tester's approach to new AUT (Application Under Test) software features should be much the same as that of the cavers who survey as they explore. As a tester, my tools are my tests. And yes, for complex scenarios, I like to write the test before I perform it. At times, I want to scoop the application, to find the bugs before the other team members do. But I try to rein myself in. I keep track of what I have checked as I go in. I remember how satisfying it is to present team members with a list of tests performed and their results; so much more satisfying than saying “I’m done testing this”.

If your AUT has logic that checks for null values, make sure removed values get set back to null. If they get set to blank, you may have a nice crunchy bug.

Here are the pseudo steps for the test case using a generic order form example. But you can use anything in your AUT that programmatically makes a decision based on whether or not some value exists.

Feature: Orders with missing line item quantities can no longer be submitted.

  1. Create a new line item on an order. Do not specify a quantity. Crack open the DB and find the record with the unpopulated quantity. Let’s assume the quantity has a null value.
  2. Populate the quantity via the AUT.
  3. Clear the quantity via the AUT. Look at the DB again. Does the quantity have a blank or null value?

    Expected Results: Quantity has a null value.

If your dev only validates against null quantity values, your user just ordered nothing...yikes!
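
If you want to automate that check instead of eyeballing the DB every time, here is a minimal sketch of the idea in Python. It is an illustration only: it uses an in-memory SQLite table as a stand-in for the real order data, and the table name, column name, and clear_quantity helper are hypothetical, not anything from my AUT.

import sqlite3

def clear_quantity(conn, line_item_id):
    # Hypothetical stand-in for the AUT "clearing" the field. A buggy
    # implementation might write '' (blank) here instead of NULL.
    conn.execute(
        "UPDATE order_line_items SET quantity = NULL WHERE id = ?",
        (line_item_id,),
    )

def test_cleared_quantity_is_null():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE order_line_items (id INTEGER PRIMARY KEY, quantity TEXT)")

    # Step 1: new line item with no quantity, so it starts out NULL
    conn.execute("INSERT INTO order_line_items (id, quantity) VALUES (1, NULL)")

    # Step 2: populate the quantity via the AUT (simulated here)
    conn.execute("UPDATE order_line_items SET quantity = '5' WHERE id = 1")

    # Step 3: clear the quantity, then check what actually landed in the DB
    clear_quantity(conn, 1)
    (value,) = conn.execute("SELECT quantity FROM order_line_items WHERE id = 1").fetchone()

    # Expected result: NULL (None in Python), not a blank string. If this fails
    # with value == '', a "quantity is null" validation lets the empty order through.
    assert value is None, f"expected NULL, got {value!r}"

test_cleared_quantity_is_null()

Whatever the stack, the same three steps apply: populate, clear, then query the stored value and assert it is NULL rather than an empty string.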

We’ve been interviewing to fill a couple QA positions on our team. My favorite part of each interview is my “test this light switch” exercise. It reveals interesting skills about each test candidate.

I point to the light switch in the room and say “test this light switch”. Here is a sampling of how candidates have responded:

  • some asked if there are any requirements (this is a great way to start!)
  • some just start testing with lots of assumptions (not so great)
  • one candidate smiled and thought I was kidding. After I asked lots of questions to prime him, he stared uncomfortably at the light switch and offered me close to nothing (embarrassing for both of us)
  • one candidate walked up to the light switch and began testing it as she walked me through her thought process. After some solid high level tests, she wanted to see electrical schematics for the building and asked me all kinds of questions about emergency backup power, how many amps the room’s lights draw, and what else was on that circuit. She wanted to remove the trim plate to check the wiring for electrical code standards. She asked if the room’s lights could be controlled by a master switch somewhere else or an energy saver timer for off-hours. (these types of questions/tests make her a good fit for my team because my AUT’s weakest test area is its integration with other systems)
  • one candidate was good at coming up with cosmetic and usability tests (e.g., Is the switch labeled well? Can I reach it from the doorway when the room is dark? Does the trim plate match the room’s trim in color and style?)…not so important for my AUT but good tests for others perhaps.
  • one candidate went right for stress tests. He flipped the lights on/off as quickly as he could. He tried to force the switch to stay in the halfway-off-halfway-on position to see if it sparked or flickered the lights.
More was revealed about the confidence of each candidate, their creativity, how technical their brain was, how quickly their mind worked, their persistence, and finally how interested they were in determining their mission and what I thought was important to know.

Most bug reports include Severity and Priority. On my team, everyone is interested in Priority (because it affects their work load). Severity is all but ignored. I propose that testers stop assigning Priority and only assign Severity.

Priority is not up to the tester. It is usually a business decision. It wastes the tester’s time to consider Priority, takes this important decision away from someone more suited to make it, and finally, it may misguide workflow.

Bugs without Priority have to be read and understood by the customer team (so they can assign priority themselves). This is a good thing.

What about blocking bugs, you ask?

Some bugs are important to fix because they block testing. These bugs are best identified as blocking bugs by testers. They can be flagged as “Blocking Bugs” using an attribute independent of the Priority field. Think about it…if BlockingBugA is blocking testing that is less important than other, non-blocked testing, perhaps BlockingBugA only deserves a low priority.
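
To make that separation concrete, here is a tiny sketch of a bug record where the tester owns Severity and the blocking flag, and Priority is left empty for the customer team to fill in. The field names are invented for illustration; they don't come from any particular bug tracker.

from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4

@dataclass
class BugReport:
    title: str
    severity: Severity              # assigned by the tester: how bad is the failure?
    blocks_testing: bool = False    # also flagged by the tester, independent of priority
    priority: Optional[int] = None  # left empty; the customer team assigns it

# The tester files the bug with Severity and, if applicable, the blocking flag,
# but never touches Priority.
bug = BugReport(
    title="Cleared quantity saved as blank instead of NULL",
    severity=Severity.HIGH,
    blocks_testing=True,
)
print(bug)

Status reports can then filter on the blocking flag without conflating it with Priority, which stays a business decision.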

Tell me where I’m wrong.

I recently read about 15 resumes for tester positions on my team. None of them told us anything about how well the candidate can test.

Here is what I saw:

  • All candidates list a ton of “technologies” they are familiar with (e.g., .Net, Unix, SQL, XML, MS Office)
  • They also list a bunch of off-the-shelf testing tools (e.g., TestDirector, LoadRunner, QuickTest Pro, SilkTest, BugZilla)
…So far I don’t know anything about how well they can test.
  • All candidates string together a bunch of test buzz words…something like, “I know white box testing, gray box testing, black box testing, stress testing, load testing, functional testing, integration testing, sanity testing, smoke testing, regression testing, manual testing, automated testing, user acceptance testing, etc.”
…as if I would be thinking, “yes, but do you know Glass Box Testing? That’s really what we’re looking for.”
  • Some candidates will say something like “I wrote a 50-page test plan”, or “I’m responsible for testing an enterprise application used by 1000 users”
…okay, so how well can you test? To be fair, this is a difficult skill to convey in a resume and perhaps I am just not good at reading between the lines and determining which candidates would thrive as testers on my team. However, the candidate would have probably gotten an instant interview if they had included any of these:
  • My approach to testing is as follows…
  • See my software testing blog for my opinions on how to test.
  • My favorite testing books and blogs are…
  • I enjoy testing because…
Sigh. I guess the modern resume has not advanced far enough to reflect candidate traits to said extent. That's why the interview questions will be so important. I get to participate in my first interview tomorrow and I have come up with a list of fun questions/activities to help me see how well the candidate tests. One of them will be to “Test that light switch over there on the wall completely.” (If the candidates are cool enough to read my blog, they’ll have a head start.)

What are your favorite tester interview questions?


