
Who cares about test results?  I think it’s only people who experience bugs.

Sadly, devs, BAs, other testers, stakeholders, QA managers, directors, etc. seldom appear interested in the fruits of our labor.  The big exception is when any of these people experiences a bug downstream of our test efforts.

“Hey, did you test this?  Did it pass?  It’s not working when I try it.”

Despite the disinterest, we testers spend a lot of effort standing up ways to report test results.  Whether it be elaborate pass/fail charts or low-tech information radiators on public whiteboards, we do our best.  I’ve put lots of energy into coaching my testers to give better test reports, but I often second-guess this…wondering how beneficial the skill is.

Why isn’t anyone listening?  These are some reasons I can think of:

  • Testers have done such a poor job of communicating test results, in the past, that people don’t find the results valuable.
  • Testers have done such a poor job of testing, that people don’t find the results valuable.
  • People are mainly interested in completing their own work.  They assume all is well with their product until a bug report shows up.
  • Testing is really difficult to summarize.  Testers haven't found an effective way of doing this.
  • Testing is really difficult to summarize.  Potentially interested parties don’t want to take the time to understand the results.
  • People think testers are quality cops instead of quality investigators; People will wait for the cops to knock on their door to deliver bad news.
  • Everyone else did their own testing and already knows the results.
  • Test results aren’t important.  They have no apparent bearing on success or failure of a product.

The second thing (here is the first) Scott Barber said that stayed with me is this:

The more removed people are from IT workers, the higher their desire for metrics.  To paraphrase Scott, “the managers on the floor, in the cube farms, agile spaces or otherwise with their teams most of the time, don’t use a lot of metrics because they just feel what’s going on.”

It seems to me, those higher-up people dealing with multiple projects don’t have (as much) time to visit the cube farms, and they know summarized information is the quickest way to learn something.  The problem is, too many of them think:

SUMMARIZED INFORMATION = ROLLED UP NUMBERS

It hadn’t occurred to me until Scott said it.  That, alone, does not make metrics bad.  But it helps me understand why I (as a test manager) don’t bother with them, yet spend a lot of time fending off requests for them from out-of-touch people (e.g., directors, other managers).  Note: by “out-of-touch” I mean out of touch with the details of the workers, not out of touch in general.

Scott reminds us the right way to find the right metric for your team is to start with the question:

What is it we’re trying to learn?

I love that.  Maybe a metric is not the best way of learning.  Maybe it is.  If it is, perhaps coupling it with a story will help explain the true picture.

Thanks Scott!

At this week’s metrics-themed Atlanta Scrum User’s Group meetup, I asked the audience if they knew of any metrics (that could not be gamed) that could trigger rewards for development teams.  The reaction was as if I had just praised Planned Parenthood at a pro-life rally…everyone talking over each other to convince me I was wrong to even ask.

The facilitator later rewarded me with a door prize for the most controversial question.  What?

Maybe my development team and I are on a different planet than the Agile-istas I encountered last night.  Because we are currently doing what I proposed, and it doesn’t appear to be causing any harm.

Currently, if 135 story points are delivered in the prior month AND no showstopper production bugs were discovered, everyone on the team gets a free half-day off to use as they see fit.  We’ve achieved it twice in the past year.  The most enthusiastic part of each retrospective is observing the prior month’s metrics and determining whether we reached our “stretch goal”.  It’s…fun.  Let me repeat that.  It’s actually fun to reward yourself for extraordinary work.
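For what it’s worth, the trigger itself is simple enough to express in a few lines.  Here’s a minimal sketch in Python; the function and variable names are my own made-up illustration, not something pulled from a real tracking tool:

    # Hypothetical sketch of our monthly "stretch goal" reward trigger.
    # The threshold matches the rule above; everything else is illustrative.
    STORY_POINT_GOAL = 135

    def earned_half_day_off(story_points_delivered, showstopper_bugs_found):
        """True when the team hit the throughput goal with zero showstopper
        production bugs discovered during the month."""
        return (story_points_delivered >= STORY_POINT_GOAL
                and showstopper_bugs_found == 0)

    print(earned_half_day_off(140, 0))  # True -> everyone earns a half day off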

Last night’s question was part of a quest I’ve been on to find a better reward trigger.  Throughput and quality are what we were aiming for, and I think we’ve gotten close.  I would like to find a better metric than velocity, however, because story point estimation is fuzzy.  If I could easily measure “customer delight”, I would.

At the meeting, I learned about the Class of Service metric.  And I’m mulling over the idea of suggesting a “Dev Forward” % stretch goal for a given time period.

But what is this nerve I keep touching about rewards for good work?

On weekends, when I perform an extraordinary task around the house like getting up on the roof to repair a leak, fixing an electrical issue, constructing built-in furniture to solve a space problem, finishing a particularly large batch of “Thank You” cards, or whatever…I like to reward myself with a beer, buying a new power tool, relaxing in front of the TV, taking a long hot shower, etc.

Rewards rock.  What’s wrong with treating ourselves at work too?

See Part 1 for intro.

  • People don’t make decisions based on numbers, they do so based on feelings (about numbers).
  • Asking for ROI numbers for test automation or social media infrastructure does not make sense because those are not investments, those are expenses.  Value from an automation tool is not quantifiable.  It does not replace a test a human can perform.  It is not even a test.  It is a “check”.
  • Many people say they want a “metric” when what they really want is a “measurement”.  A “metric” allows you to stick a number on an observation.  A “measurement”, per Jerry Weinberg, is anything that allows us to make observations we can rely on.  A measurement is about evaluating the difference between what we have and what we think we have.
  • If someone asks for a metric, you may want to ask them what type of information they want to know (instead of providing them with a metric).
  • When something is presented as a “problem for testing”, try reframing it to “a problem testing can solve”.
  • Requirements are not a thing.  Requirements are not the same as a requirements document.  Requirements are an abstract construct.  It is okay to say the requirements document is in conflict with the requirements.  Don’t ever say “the requirements are incomplete”.  Requirements are not something that can be incomplete.  Requirements are complete before you even know they exist, before anyone attempts to write a requirements document.
  • Skilled testers can accelerate development by revealing requirements.  Who cares what the requirement document says.
  • When testing, don’t get hung up on “completeness”.  Settle for adequate.  Same for requirement documents.  Example: Does your employee manual say “wear pants to work”?  Do you know how to get to your kid’s school without knowing the address?
  • Session-Based Test Management (SBTM) emphasizes conversation over documentation.  It’s better to know where your kid’s school is than to know the address.
  • SBTM requires 4 things:
    • Charter
    • Time-boxed test session
    • Reviewable results
    • Debrief
  • The purpose of a program is to provide value to people.  Maybe testing is more than checking.
  • Quality is more than the absence of bugs.
  • Don’t tell testers to “make sure it works”.  Tell them to “find out where it won’t work.”  (yikes, that does go against the grain of my We Test To Find Out If Software *Can* Work post, but I still believe both)
  • Maybe when something goes wrong in production, it’s not the beginning of a crisis, it’s the end of an illusion.

My former manager and esteemed colleague asked me to teach a two hour class about Session Based Testing (SBT). We had tried SBT a couple years ago, when I was fresh out of Michael Bolton’s excellent Rapid Software Testing course.

I was nervous as hell about the class because most of the testers I work with were signed up and I knew this was an opportunity to either inspire some great testing or look like a fool. So I spent several weeks researching and practicing what I would teach. I decided an Exploratory Testing (ET) primer was necessary for this audience before SBT could be explained properly.

ET proved to be the most intimidating subject to explain. Most of what I found was explained by members of the Context-Driven School (e.g., James and Jon Bach). Nearly everything I found NOT explained by members of the Context-Driven School was heavily criticized (by members of the Context-Driven School) for not being true ET. With all this confusion over what ET actually is, one wonders how well the Context-Driven School has actually explained what they mean. I found various statements from videos, blogs, papers, and my RST courseware that ranged from...

  • It’s a technique…no it’s a method…no it’s a “way of testing”.
  • It’s the opposite of scripting…no, it can be used with scripting too, even while automating.
  • All testers use ET to some extent…no wait, most testers aren’t using it because they don’t understand it.
After (hopefully) explaining ET, I was easily able to transition into SBT, making the case that SBT solves so many of the problems introduced by poorly conducted ET (e.g., lack of artifacts and organization). I explained the essential ingredients of SBT (see the sketch after this list):
  • Time Boxing
  • Missions
  • Capturing Notes, Bugs, and Issues
  • Debriefing
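If it helps to see those ingredients as a single artifact, here is a minimal sketch of what one session record might capture. This is my own illustration in Python, not the format of any official SBTM tool:

    # Hypothetical sketch of a session record holding the ingredients above.
    # All field names are illustrative.
    from dataclasses import dataclass, field

    @dataclass
    class TestSession:
        mission: str                   # what this session sets out to explore
        timebox_minutes: int = 90      # the time box, typically 60-120 minutes
        notes: list = field(default_factory=list)
        bugs: list = field(default_factory=list)
        issues: list = field(default_factory=list)
        debriefed: bool = False        # flipped after the post-session debrief

    session = TestSession(mission="Explore import edge cases in the billing module")
    session.bugs.append("Import silently drops rows containing non-ASCII names")
    session.debriefed = True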
Then I demonstrated my favorite SBT tools:
In the end, about half the audience appeared lukewarm while the other half appeared somewhere between skeptical and confused. I blame my own delivery. I think more light bulbs went off during the ET section; SBT takes a bit more investment in thought to get.

For myself, however, the class was a success. Ever since my research, I’ve actually been using SBT and I love it! I also have some better ideas on how to teach it if I ever get a second chance. Special thanks to Michael Bolton and James Bach, who continue to influence my testing thoughts in more ways than anyone (other than myself).

During last fall’s STPCon, I attended a session about showing your team the value of testing. It was presented by a guy from Keen Consultants. He showed us countless graphs and charts we could use to communicate the value of testing to the rest of our team. Boring…zzzzzzzz.

In the spirit of my previous post, Can You Judge a Tester by Their Bug List Size?, here is a more creative approach that is way simpler and, IMO, more effective at communicating your value as a tester…wear it!

(I blurred out my AUT name)

You could change it up with the number of tests you executed, if that sounds more impressive to you. Be sure to wear your shirt on a day the users are learning your AUT. That way, you can pop into the training room and introduce yourself to your users. Most of them didn’t even know you existed. They will love you!

Now I just need to come up with an easy way to increase the bug count on my shirts (e.g., velcro numbers). Because, as all good testers know, the shirt is outdated within an hour or so.

The first Agile Manifesto value is…

“Individuals and interactions over processes and tools”

While reading this recently, something radical occurred to me. If I practice said value to its fullest, I can stop worrying about how to document everything I test, and start relying on my own memory.

I hate writing test cases. But the best argument for them is to track what has been tested. If someone (myself included) asks me “Did you test x and y with z?”, a test case with an execution result seems like the best way to determine the answer. However, in practice, that is not usually how I answer. I usually answer from memory.

And that, my friends, is my breakthrough. Maybe it’s actually okay to depend on your memory to know what you have tested instead of a tool (e.g., Test Director). But no tester could ever remember all the details of prior test executions, right? True, but no tester could ever document all the details of prior test executions either, right? To tip the balance in favor of my memory being superior to my documentation skills, let me point out that my memory is free. It takes no extra time like documentation does. That means, instead of documenting test cases, I can be executing more test cases! And even if I did have all my test case executions documented, sometimes it is quicker to just execute the test on the fly than go hunt down the results of the previous run (quicker and more accurate).

It all seems so wonderful. Now if I can figure out how to use my memory for SOX compliance... shucks.

Management wants to know the state of the AUT but they don’t really know what questions to ask. Worse yet, when they do ask…

  • How does the build look?
  • How much testing is left?
  • What are the major problems?
…I don’t know how to provide the simple answer they want. Well, my team has been using a successful little trick that is super easy to implement.

We listed our modules on an old-fashioned white board in an area that gets foot traffic. Every three weeks, on build day, we run the smoke tests on the new build, and if all tests for a given module pass, the module gets a little green smiley face drawn next to it. If any tests for a given module fail badly enough that we cannot accept the module, it gets a sad red face. Finally, if any tests for a given module fail but we can work around the problems and accept the module, we draw a blue straight face.
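The rule is mechanical enough to sketch in a few lines of Python. The module names and outcome labels below are made up for illustration:

    # Hypothetical sketch of the white-board rule: map each module's smoke
    # test outcome to a face. Module names and outcome labels are illustrative.
    FACES = {
        "all_pass": "green smiley",     # every smoke test passed
        "workaround": "blue straight",  # failures we can work around; accepted
        "unaccepted": "red sad",        # failures bad enough to reject the module
    }

    smoke_results = {
        "System Admin Server": "all_pass",
        "Reporting": "workaround",
        "Server Solutions": "unaccepted",
    }

    for module, outcome in smoke_results.items():
        print(f"{module}: draw a {FACES[outcome]} face")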
The white board slowly gets updated throughout the course of the day by me and my QA colleagues as we complete smoke tests for each module. Satish looks happy in this picture because we were having a good build day. The various managers and dev leads naturally walk past the white board throughout the day and have instant knowledge of the state of the build. It shields QA from having to constantly answer questions. Instead, we hear fun remarks like “Looks like you finally got a big smiley on System Admin Server, Stephanie, that’s a relief!” or “What’s up with all the red sad faces on your server solutions, Rob?”

Our build day white board was inspired by James Bach’s Low-Tech Dashboard, which contains some really cool ideas, some of which my team will experiment with soon. Michael Bolton introduced this to me in his excellent Rapid Software Testing class. Bach’s Low-Tech Dashboard is more complex, but in exchange it fends off even more inquisitive managers.

If your company is obsessed with portals, Gantt charts, spreadsheets, test case/defect reports, and e-mails, drawing smiley faces on a white board may be a refreshing change that requires less administrative work than its high-tech alternatives.


