We did our first tester Lightning Round yesterday: ten testers, each with five minutes to make their point to a room full of other testers. To keep it fun, and I think in the tradition of typical Lightning Rounds, we displayed a digital clock on the wall to track each speaker’s time. When a speaker went over their five minutes, we threw squishy balls at them to make them stop.

The point of a Lightning Round is to give the audience one specific action item without making them endure a one-hour lecture. It also exposes the audience to a variety of topics in a short time span. The advantage for the speakers is the short preparation time required to speak for five minutes.

Yesterday’s Lightning Round was an experimental version of something we’ve been doing for the last couple of years at my company: testers from various projects and departments meet to exchange ideas. Normally, though, we meet to attend an hour-long presentation on a single testing topic. Yesterday’s feedback indicated the Lightning Round was way better to attend than the usual hour-long, single-topic session. Several testers did say five minutes was too short, though; some suggested five speakers with ten minutes each.

Personally, I think it was the five-minute limit that kept the energy high and kept the audience from getting bored. But I’m willing to compromise. The larger problem, however, is getting testers to volunteer for these. I had to talk five of the speakers into doing it just to get my ten. Apparently, there are few testers willing to share their ideas with any conviction.

Anyway, the topics in yesterday’s session were the following:

  • Stop Writing Test Cases
  • Regression Testing Importance – Fact or Fiction?
  • How SOX Changed Our QA Process
  • What Programmer Profiling Taught Me About Choosing the Best Tests
  • Jing, A Favorite Test Tool
  • How to Increase Your Focus During Testing
  • Using Automation to Generate Large Amounts of Data
  • Why It’s Better to Have More Short Test Scripts Than Fewer Long Test Scripts
  • How Usability Testing is Making a Comeback with the Surge of Human-Computer Interaction Engineering
  • Smoke Testing vs. Sanity Testing

Mine was the Programmer Profiling topic. A respected colleague told me he didn’t care for it. Maybe I’ll blog about it later.

As I struggle to find a tool current enough to automate tests for a Silverlight 4.0 app, my former QA manager and thoughtful testing buddy, Alex Kell, keeps spouting the same message to me: my first mistake was letting the programmers pick SL4 in the first place. We have tools that can automate SL2 and even SL3. Alex says, why not convince the programmers to use one of the older versions of SL?

His point: if testing is really so important, then why shouldn’t we select our product’s coding technologies based on how easy they are to test? It’s that thing we always hear about called “testability.”

When the initial SL4 decision was made, I sheepishly took Alex’s advice. I asked the lead architect if he could use SL3 instead, explaining how easy it would be to convert my existing automated test framework to drive SL3. He paused for two seconds, then said “no”; the SL4 bindings were superior and a better fit for our project. Can you blame him? I can’t. Of course, any team starting a greenfield project would want to begin with the latest supported version.

So now I'm getting busy rewriting my framework using Microsoft’s CodedUI platform, which has SL4 support and is used elsewhere on my team.
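
For the curious, here is roughly what the rewrite looks like so far: a minimal Coded UI sketch (assuming the Silverlight control classes that come with CodedUI’s SL support are available) that launches the app in a browser and drives a couple of controls. The URL, automation IDs, and class/method names below are placeholders, not anything from our real app.

```csharp
using System;
using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UITesting.SilverlightControls;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[CodedUITest]
public class SilverlightSmokeTests
{
    [TestMethod]
    public void SavingAName_ShowsTheConfirmationLabel()
    {
        // Launch the browser and point it at the SL4 app (placeholder URL).
        BrowserWindow browser = BrowserWindow.Launch(new Uri("http://localhost/OurSilverlightApp/"));

        // Find the name text box by its AutomationId (placeholder ID) and type into it.
        SilverlightEdit nameBox = new SilverlightEdit(browser);
        nameBox.SearchProperties["AutomationId"] = "NameTextBox";
        Keyboard.SendKeys(nameBox, "Alex");

        // Click the Save button (placeholder ID).
        SilverlightButton saveButton = new SilverlightButton(browser);
        saveButton.SearchProperties["AutomationId"] = "SaveButton";
        Mouse.Click(saveButton);

        // Verify the confirmation label appears after the save (placeholder ID).
        SilverlightText confirmation = new SilverlightText(browser);
        confirmation.SearchProperties["AutomationId"] = "SaveConfirmationLabel";
        Assert.IsTrue(confirmation.Exists, "Expected a confirmation label after clicking Save.");
    }
}
```

Nothing fancy; the point is that once the Silverlight controls are addressable by AutomationId, the rest of the framework port is the usual find-act-assert plumbing.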

In the meantime I’m haunted by Alex’s advice and wonder how hard I should have pushed. On the other hand, there is merit in adaptability. To be able to support your team with the best testing possible, based on their design choices…

…well, that is something too.


What do you think?

Can you test something too much? Sure.

If you spend all your time testing FeatureA and four other Features go untested, you’ve probably tested FeatureA too much.

Part of what makes some testers better than others is their ability to know how much testing effort to put into each Feature and when. This is a tricky decision and may require the tester to stop testing FeatureA, even though FeatureA is still yielding bugs. What?!! Think about it. If you know FeatureA is yielding bugs but you don’t know anything about FeatureB, where should you spend your time next? I say, FeatureB. It just feels wrong, doesn’t it?

A lot of testers have a hard time with this. They get wrapped up in completing exhaustive testing on one Feature while the unknowns sit and collect dust. They follow their nose toward the reward of bug discovery in comfortable areas. They underestimate how much time it will take to test the unknowns and, before they know it, oops, not enough time to test everything!

Have you ever seen one of those plate-spinning circus acts? You know the ones: the guy frantically runs around trying to keep all the plates spinning on sticks so they don’t fall. As a tester, you should be like a plate spinner. The Features are the plates. I don’t know what the sticks are; don’t worry about it. But if your programmers are working on Features (e.g., fixing bugs, finding bugs, refactoring, testing), your plates are spinning.

Keep your plates spinning!

At Monday’s retrospective, my answer to the question “What should we do differently?” was to “have more fun.” We have been cranking out releases for 85 iterations, and the new year seemed like a good time to try something fresh. One of my testers came up with Bug Bucks.

Although it violates several agile practices, we’re going to give it a try. Here’s how it works:
  • Each programmer earns Bug Bucks equal to the magnitude of the Features they code (e.g., a Feature with a magnitude (AKA complexity) of 13 would earn the programmer 13 Bug Bucks).
  • Any tester, BA, or other programmer who finds a bug in said Feature, gets to take away a Bug Buck from the programmer who coded it. The programmer only loses the Bug Buck if the team decides to fix it.
  • Bugs found in production subtract a Bug Buck from each team member (all testers, BAs, and programmers).
  • We have various denominations of Bug Bucks to make change, and team members will pin them to their cubes as they acquire them.

So what can one do with a Bug Buck? We’re still working on that, but pending budget approval, we will let Bug Bucks be exchanged for gifts or, more likely, one minute of personal time off apiece, which should work out to a free day off per quarter for ambitious players!
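
Just to make the bookkeeping concrete, here is a rough sketch of the scoring as I currently understand the rules. The class and names below are mine, purely for illustration; the real ledger will almost certainly be paper bucks pinned to cube walls.

```csharp
using System.Collections.Generic;

// A rough sketch of the Bug Buck scoring rules; all names are illustrative only.
public class BugBuckLedger
{
    private readonly Dictionary<string, int> balances = new Dictionary<string, int>();

    // Each programmer earns Bug Bucks equal to the magnitude (complexity) of the Feature they coded.
    public void FeatureCoded(string programmer, int magnitude)
    {
        Adjust(programmer, magnitude);
    }

    // A bug found in a Feature costs the programmer who coded it one Bug Buck,
    // but only if the team decides the bug is worth fixing.
    public void BugFound(string programmerWhoCodedIt, bool teamWillFixIt)
    {
        if (teamWillFixIt)
        {
            Adjust(programmerWhoCodedIt, -1);
        }
    }

    // A bug that escapes to production costs every team member one Bug Buck.
    public void ProductionBug(IEnumerable<string> wholeTeam)
    {
        foreach (string member in wholeTeam)
        {
            Adjust(member, -1);
        }
    }

    public int BalanceOf(string member)
    {
        int balance;
        return balances.TryGetValue(member, out balance) ? balance : 0;
    }

    private void Adjust(string member, int amount)
    {
        balances[member] = BalanceOf(member) + amount;
    }
}
```

So, for example, a programmer who delivers a magnitude-13 Feature and has two fixable bugs found in it ends up with 11 Bug Bucks, which works out to 11 minutes of personal time off.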

There are several problems we are aware of, such as: will this discourage collaboration? Um…yes.

But will it really? In the big picture? We've already been collaborating on the rules. Anyway, our current approach is not to over-engineer this game, but to try to have some fun. If our velocity changes for the worse, we’ll pull the plug... we’ll just have to wait and see.

I'll report back.


