The users found a bug in something that was marked “PASSED” during the test cycle. I mentioned this to my programmers and one of them said, “Ha ha, testers just mark tests ‘passed’ without doing anything.”
He was joking, but he raised a frightening thought. What if he were right? What if you gave testers a list of 100 tests and they just marked them PASSED without operating the product? How would you know? The sad truth is, 95 of those tests probably would have passed anyway, had the product been operated. The other 5 could arguably have been tested under conditions different from those that raised the bug. And this is what makes testing both so unsatisfying to some testers and so difficult to manage.
If you can get the same result doing nothing, why bother?
We have an ongoing debate about working from home. Let a programmer work from home and at the end of the day you can see what code they checked in. With a tester, you have nothing nearly as tangible (even if that tester takes detailed test notes). It’s not an issue of trust. It’s an issue of motivation.
I believe testers who are uninterested in their job can hide on test teams more easily than on other teams, especially if their managers don’t take the time to discuss the details of what they actually tested. They may be hiding because they’ve never gotten much appreciation for their work.
Having a conversation with a tester about what they tested can be as easy as asking them about a movie they just saw:
  • How did George Clooney do?
  • Was it slow paced?
  • Was the story easy to follow?
  • Any twists or unexpected events?
  • Was the cinematography to your liking?
If they didn’t pay much attention to the movie, you’ll know.

4 comments:

  1. Shaun Hershey said...

    I work on a test team where we are working closely with a development team on a product rewrite. It's sort of an agile project, but the "scrum master" plays it a little loose with the process. The goal is to have all functionality working as it previously did, be backwards compatible with any existing data a customer will port over during an update, and include some new desired functionality.

    That's the basic setup. Each day after our standup, our Team Leader will divvy up new QA tasks and set us off to work. Originally there was no follow-up of any sort. We'd do our testing on our own and close out our tasks as we deemed them complete.

    Fast forward to about a month ago: a major issue was uncovered in a feature that had been closed out as complete. The tester who had been working on the feature had assumed that since the devs were basically just porting the code over, the existing functionality would be in place by default. This wasn't the case, and had it not been caught, we would have had many unhappy customers. Of course the developers had a field day with this, despite the fact that they were unable to port the code successfully in the first place, and it reflected very poorly on our team as a whole.

    We decided in the long run that before a task could be closed out as completed, we would get together as a team and discuss what had been tested, which browsers minor issues had been discovered in, how fully backwards compatibility had been tested, etc. All notes taken by the tester would be reviewed, and it ended up getting the offending tester a lot more involved in the process. Since then, we've had no such issues and have arguably found more important bugs than we were finding before. Now that the other testers are being held accountable for what they test instead of just being given a mission and sent out on their way, it's helping keep the team focused and productive.

  2. Gaurav Pandey said...

    I can understand the challenge you are facing.
    There is one aspect you may also want to consider. The comparison between the Expected and Actual Result is performed by a human being. What if the tester does this analysis incorrectly, i.e., the software is not working as expected but the tester feels it is working fine? In technical jargon, this may be considered a "false negative".

    This can be a nightmare for a test manager.

    Some suggestions on how to handle the problem (including the one you are considering):

    1. Get the test case reviewed. Also get the test logs reviewed.
    2. Get the requirements and the test environment reviewed. If these change dynamically during the project, it is quite possible that the tester will produce false negatives.
    3. Ensure that correct test data is used. The software might work on one set of test data and fail on another. For this reason, use multiple sets of test data.
    4. This is my favourite and the most interesting: consider error seeding. The test manager injects some defects into the code (in discussion with the dev team). If the testing team is able to discover these defects, then some testing intensity may be assumed. In jargon, this concept is known as mutation testing. The primary purpose is to measure the effectiveness of the test cases. (A rough sketch of an automated version follows below.)
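
    For illustration only, here is what an automated version of this error-seeding idea could look like: a minimal sketch assuming a Python code base with a pytest suite. The file names, code snippets, and seeded defects are hypothetical, not taken from any real project.

        # Error-seeding sketch: copy the code base, apply one deliberate defect
        # at a time, and check whether the test suite catches it.
        # Hypothetical example only; file names, snippets, and defects are made up.
        import pathlib
        import shutil
        import subprocess
        import tempfile

        # Each seeded defect: (file to change, original snippet, defective snippet).
        SEEDED_DEFECTS = [
            ("billing.py", "total += line_item.price", "total -= line_item.price"),
            ("billing.py", "if qty > 0:", "if qty >= 0:"),
        ]

        def suite_passes(workdir: pathlib.Path) -> bool:
            """Run the test suite in the given working copy; True if it passes."""
            return subprocess.run(["pytest", "-q"], cwd=workdir).returncode == 0

        def seed_and_check(source_dir: str) -> None:
            caught = missed = 0
            for filename, original, defect in SEEDED_DEFECTS:
                # Work on a throwaway copy so a seeded bug never reaches the real build.
                with tempfile.TemporaryDirectory() as tmp:
                    workdir = pathlib.Path(tmp) / "copy"
                    shutil.copytree(source_dir, workdir)
                    target = workdir / filename
                    code = target.read_text()
                    assert original in code, f"snippet not found in {filename}"
                    target.write_text(code.replace(original, defect, 1))
                    if suite_passes(workdir):
                        missed += 1   # tests still pass: the seeded defect slipped through
                    else:
                        caught += 1   # tests fail: the seeded defect was detected
            print(f"seeded defects caught: {caught}, missed: {missed}")

        if __name__ == "__main__":
            seed_and_check("my_project")

    The same bookkeeping (which seeded defects were reported and which were missed) can be done by hand when the "test suite" is a human test team rather than an automated one.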

    Hope this helps.

    Regards
    Gaurav Pandey

  3. Eric Jacobson said...

    Thanks for the good ideas, Gaurav. Yeah, your fourth choice is cool. I've done that type of thing for automated tests but I've never tried it for humans. It sounds fun, although it would take a lot of oversight and planning with the programmers. It also introduces the risk that we accidentally leave the seeded bugs in the build.

  4. Eric Jacobson said...

    Shaun, that's a great story. It inspired me to keep at it with this test review stuff. Thanks!


