I think it’s only people who experience bugs.

Sadly, devs, BAs, other testers, stakeholders, QA managers, directors, etc. seldom appear interested in the fruits of our labor.  The big exception is when any of these people experience a bug, downstream of our test efforts.

“Hey, did you test this?  Did it pass?  It’s not working when I try it.”

Despite the disinterest, we testers spend a lot of effort standing up ways to report test results.  Whether it's elaborate pass/fail charts or low-tech information radiators on public whiteboards, we do our best.  I've put a lot of energy into coaching my testers to give better test reports, but I often second-guess this, wondering how beneficial the skill is.

Why isn’t anyone listening?  These are some reasons I can think of:

  • Testers have done such a poor job of communicating test results in the past that people don’t find the results valuable.
  • Testers have done such a poor job of testing that people don’t find the results valuable.
  • People are mainly interested in completing their own work.  They assume all is well with their product until a bug report shows up.
  • Testing is really difficult to summarize.  Testers haven't found an effective way of doing this.
  • Testing is really difficult to summarize.  Potentially interested parties don’t want to take the time to understand the results.
  • People think testers are quality cops instead of quality investigators; People will wait for the cops to knock on their door to deliver bad news.
  • Everyone else did their own testing and already knows the results.
  • Test results aren’t important.  They have no apparent bearing on success or failure of a product.

11 comments:

  1. Anonymous said...

    Everything seems to be the tester's fault.

    In your last post you say "Four bugs, caused by a large refactor, were missed in test."

    Test wasn't the only one who missed those bugs. Do programmers have no skin in the game?

    In this post you say "Testers have done such a poor job of communicating test results" - how so?

    "Testers have done such a poor job of testing, that people don’t find the results valuable" - Programmers don't test their own work? Sounds like a very dysfunctional organization.

    "Testing is really difficult to summarize. Testers haven't found an effective way of doing this." - What's so difficult about keeping a daily, weekly, or iteration-based log? Don't the testers meet occasionally to discuss issues? Isn't there an expectation on the part of management to receive timely summaries of recent work?

    "Potentially interested parties don’t want to take the time to understand the results." - it depends on how your organization is structured. If you have an invested testing manager, they should surely be interested. If testers are treated as an annoying appendage of a development-centric office, then that's bound to occur.

    "People think testers are quality cops instead of quality investigators" - depends on the corporate culture.

    "Test results aren’t important. They have no apparent bearing on success or failure of a product." - I've seen this at some companies. Test is disdained. A necessary budgetary item, but disregarded and disrespected.

  2. Srinivas Kadiyala said...

    Can you share how to write effective test results / status reports to send as a daily test report?

  3. Rikard Edgren said...

    I think the reason for not listening starts much earlier.
    If you aren't involved in what will be tested and why, you probably aren't interested in the results.
    But if testers anchor their test strategies, and act on feedback from stakeholders, test reporting is much easier.
    People that influence what will be tested, automatically become interested in what the results are.

    "Testing is really difficult to summarize" is true, but this also becomes easier when you know what stakeholders are interested in.

  4. Eric Jacobson said...

    Anonymous,

    That's not my belief at all. Although, I do think testers should be accountable to an extent. When I say, "Four bugs, caused by a large refactor, were missed in test.", what makes you think I blame testers? In my experience, the whole dev team participates in some testing.

    As far as this post. I said, "these are some reasons I can think of". That is not the same as saying these are the reasons I believe are the cause. Critical thinking involves challenging ourselves to ponder a variety of explanations. In this case, I haven't decided which, if any, I believe.

  5. Eric Jacobson said...

    Srinivas, ...probably not. If "effective" means those test results have an eager audience.

  6. Unknown said...

    When you work in a bug-infested SDLC, defects drive everything. Developers don't care about anything until a tester raises a bug on some of their code; BAs don't care until you raise a bug against their requirements; and the business doesn't care until there is a production bug that costs them $$. So in these types of environments you are correct that the only people who care about test results are those negatively affected by them.

    If all you are reporting is defects, you become nothing more than the bearer of bad news. And no one likes negative news. Good test reporting involves reporting much more than just defects (build rates, first-pass rates, testing progress, etc.). All can be very valuable to specific target audiences and provide a bigger picture of testing's overall results.
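    As a rough illustration of the commenter's point, metrics like these can be computed from whatever test records a team already keeps. The sketch below is purely hypothetical: the record fields (`case`, `attempts`, `passed`) and the definition of "first-pass rate" are assumptions for the example, not a standard; adapt them to what your team actually tracks.

    ```python
    # Hypothetical sketch: summarizing test results beyond raw defect counts.
    # Field names and rate definitions here are illustrative assumptions.

    def summarize(results):
        """results: list of dicts like {"case": str, "attempts": int, "passed": bool}."""
        executed = len(results)
        passed = sum(1 for r in results if r["passed"])
        # "First-pass rate": cases that passed on their very first attempt.
        first_pass = sum(1 for r in results if r["passed"] and r["attempts"] == 1)
        return {
            "executed": executed,
            "pass_rate": passed / executed if executed else 0.0,
            "first_pass_rate": first_pass / executed if executed else 0.0,
        }

    results = [
        {"case": "login",  "attempts": 1, "passed": True},
        {"case": "search", "attempts": 3, "passed": True},
        {"case": "export", "attempts": 2, "passed": False},
    ]
    print(summarize(results))
    ```

    The point is not these particular numbers; it is that different audiences can pull different slices of the same data, rather than seeing only a defect list.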

    One of the main reasons agile has caught on so much is because everyone began to realize how costly and wasteful this approach was. Now, rather than focusing on the "what" of the defect, agile teams focus more on why it was created and what they can do to prevent the defect from happening again. I wrote about this in my blog (http://goofytesterguy.blogspot.com/2014/06/a-better-way-of-handing-defects.html). Focus on prevention rather than fixing.

    The key point I think you are missing is why is the tester coming up with what the results should look like? The approach I just mentioned is a team approach to solving the problem, and the team is who determines what test reports/metrics are needed for them to deliver better. As the full testing stack is executed, teams use these results to identify inefficiencies in the process rather than simply identify code that doesn't work as expected. Everyone takes an interest in these test results as they know the outcome will make their team better.

    Rather than asking the question "why isn't anyone listening?" ask the question "what test results would be of interest to you?" And ask everyone on the team, including the business. I think you will find that everyone does have an interest and can benefit in some way from test result data when it is of interest to them.

  7. Anonymous said...

    That's not my belief at all. Although, I do think testers should be accountable to an extent. When I say, "Four bugs, caused by a large refactor, were missed in test.", what makes you think I blame testers? In my experience, the whole dev team participates in some testing.

    When you say, "missed in test", you are blaming testers. If the "whole dev team" participates in testing then say "the whole dev team missed four bugs" because that's really what happened.

    Testers should be accountable to an extent? What sort of accountability? And what about programmers? Don't they test their code before letting testers look at it? Did Test have enough time after receiving a build to thoroughly test? We all know how development runs late and managers have hard dates and how Test gets squeezed.

    You seem to be working in a very weird and dysfunctional corporate environment, but typical for the industry. And Matthew Eakin, Agile is no cure when the attitude is this ingrained. Been there, seen that.

  8. Unknown said...

    @Anonymous, I never said agile was the cure. However, the fact that more and more companies are "going agile" is an indication to me that more and more companies believe it is. Waterfall allows each silo to not care about anyone else, unless there is a problem. Agile forces everyone to care about the results because cards don't move without positive test results.

    @Eric - I had a fun lunch the other day with a colleague talking about metrics. One topic was target audience and it got me thinking about this post. Knowing your target audience is KEY to reporting test results. And KEY to getting people to care.

  9. Anonymous said...

    @Anonymous, I never said agile was the cure. However, the fact that more and more companies are "going agile" is an indication to me that more and more companies believe it is. Waterfall allows each silo to not care about anyone else, unless there is a problem. Agile forces everyone to care about the results because cards don't move without positive test results.

    Companies follow trends, especially big companies. Dysfunctional companies do even weirder things in that they'll say they're "going Agile" and then barely change their dysfunctional practices. Kind of like Dilbert. Maybe they'll tack on a daily standup or have never-ending iterations. They'll call it Agile.

    A lot of you Agile proponents seem to want to call a dysfunctional corporate SDLC "Waterfall", but I'm not sure if you know the difference.

    I've worked in good Waterfall and bad Waterfall. I've worked in almost-good Agile and so bad I can't-believe-they-call-this Agile. I've also worked in a lot of corporate dysfunction and reactionary insanity.

    Agile proselytizers, as opposed to proponents, almost universally ignore corporate dysfunction. I've seen this numerous times with all the "consultants" brought in to various workplaces I've been in the past decade. Agile proselytizers never point to a list of successful software projects created using Agile, other than providing a laundry list of giant companies who say they're "Agile".

    Back to the column here, I get a whiff in his writing that Mr Jacobson is working in a dysfunctional environment, and/or has acquired some sort of disdain for certain testers. Maybe I'm wrong, but I get the feeling something is under the rock.

  10. Eric Jacobson said...

    People not caring about test results is not an Agile or Waterfall problem, IMO. I've experienced it in each. And I don't consider my teams dysfunctional.


    Matthew, thanks for your suggestions. Yes, I've asked that question, "what test results would be of interest?". The answers are usually disappointing. People want the test results summed up into a graph or a number. IMO, that results in misinformation. Test results are not something that can be communicated by mere numbers or charts.

    When your wife asks, "How was your day?", try answering with a chart. The sad thing is, if your wife is short on time, she may actually prefer a chart.

  11. Unknown said...

    In one of the best retrospectives I've ever conducted, I had everyone give a smiley face :-), neutral face :-|, or frowny face :-( to show how they felt about each suggestion. It was so successful (and humorous) that we began letting the business folks use it in the Show-and-Tells to indicate how they felt about the feature: happy = 100% dead-on; neutral = pass, but needs additional work; frown = start over. There are obviously details underneath, but the message is simple and easy for everyone to understand. Know your audience and know what graphs/charts/pictures tell the best story. Even your wife understands a happy/neutral/frowny face.


