After seeing Mark Vasko’s CAST 2011 lightning talk, I was inspired to create a Test Idea Wall with one of my project teams. Much to my surprise, the damn thing actually works.
When I’m taking a break from testing something, I pause as I walk past the Test Idea Wall. My brain jumps around between the pictures and discovers gaps in my test coverage.
Our wall is incredibly simple, but so far it contains the main test idea triggers we tend to forget. For example, the picture of the padlock reminds us to consider locking scenarios, something that is often just an afterthought but always yields fruitful information:
- What if we run the same tests as a read-only user?
- What if we run the same tests while another user has our lock?
- What if we run the same tests while the system has our lock?
- What if certain users should not have this permission?
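The padlock's triggers can even seed automated checks. Here is a minimal sketch of replaying "the same tests" under each locking scenario, using a toy in-memory record; the `Record` and `User` classes and their locking rules are entirely hypothetical stand-ins for your product's real API:

```python
# A toy sketch of turning the padlock's test-idea triggers into checks.
# The Record/User model and its locking rules are invented for illustration.

class LockedError(Exception):
    pass

class User:
    def __init__(self, name, read_only=False):
        self.name = name
        self.read_only = read_only

class Record:
    def __init__(self):
        self.value = "original"
        self.locked_by = None          # None, a user name, or "system"

    def write(self, user, new_value):
        if user.read_only:
            raise PermissionError(f"{user.name} is read-only")
        if self.locked_by not in (None, user.name):
            raise LockedError(f"record locked by {self.locked_by}")
        self.value = new_value

def try_write(record, user):
    """Run the same test -- a simple write -- and report what happened."""
    try:
        record.write(user, "updated")
        return "ok"
    except (PermissionError, LockedError) as e:
        return f"blocked: {e}"

# The same test, replayed under each scenario from the wall:
alice = User("alice")
viewer = User("viewer", read_only=True)

r = Record()
print(try_write(r, viewer))            # ...as a read-only user

r = Record(); r.locked_by = "bob"
print(try_write(r, alice))             # ...while another user has our lock

r = Record(); r.locked_by = "system"
print(try_write(r, alice))             # ...while the system has our lock
```

The value is less in the toy code than in the pattern: one test, systematically re-run under each trigger the wall reminds you of.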
My tech department held a little internal one-day idea conference this week. I wanted to keep it from being a programmer fest, so I pitched Crowdsource Testing as a topic. Although I knew nothing about it, I had just seen James Whittaker’s STARwest keynote, and the notion of having non-testers test my product had been bouncing around my brain.
With a buzzword like “crowdsource” I guess I shouldn’t be surprised they picked it. That and maybe I was one of their only submissions.
Due to several personal conflicts I had little time to research, but I did spend about 12 hours alone with my brain in a long car ride. During that time I came up with a few ideas, or at least built on other people’s.
- Would you hang this on your wall?
…only a handful of people raised their hands. But this is a Jackson Pollock! Worth millions! It’s quality art! Maybe software has something in common with art.
- Two testing challenges that probably cannot be solved by skilled in-house testers:
- Quality is subjective. (I tip my hat to Michael Bolton and Jerry Weinberg.)
- There is an infinite number of tests to execute. (I tip my hat to James Bach and Robert Sabourin.)
- There is a difference between acting like a user and using software. (I tip my hat to James Whittaker). The only way to truly measure software quality is to use the software, as a user. How else can you hit just the right tests?
- Walking through the various test levels: Skilled testers act. Team testers act (this is when non-testers on your product team help you test). User acceptance testers act (they are still pretending to do real work). Dogfooders…ah yes! Dogfooders are finally not acting. This is the first level to cross the boundary between acting and using. Why not stop here? Because it’s still weak on the infinite-tests problem.
- The next level is crowdsource testing. Or is it crowdsourced testing? I see both used frequently. I can think of three ways to implement crowdsource testing:
- Use a crowdsource testing company.
- Do a limited beta release.
- Do a public beta release.
- Is crowdsource testing acting or using? If you outsource your crowdsource testing to a company (e.g., uTest, Mob4hire, TopCoder), now you’re hiring actors again. However, if you do a limited or public beta release, you’re asking people to use your product. See the difference?
- Beta…is a magic word. It’s like stamping “draft” on the report you’re about to give your boss. It also says, this product is alive! New! Exciting! Maybe cutting edge. It’s fixable still. We won’t hang you out to dry. It can only get better!
- Who should facilitate crowdsource testing? The skilled in-house tester. Maybe crowdsource testing is just another tool in their toolbox. If they spent this much time executing their own tests…
…maybe instead, they could spend this much time (blue) executing their own tests, and this much time (yellow) collecting test information from their crowd…
- How do we get feedback from the crowd? From least cool to coolest:
- The crowd calls our tech support team.
- Virtual feedback meetings.
- We go find the feedback on Twitter, etc.
- The crowd provides feedback via tightly integrated feedback options in the product’s UI.
- We programmatically collect it in a stealthy way.
- How do we enlist the crowd? Via creative spin and marketing (e.g., follow Hipster’s example).
- Which products do we crowdsource test? NOT the mission-critical ones. We crowdsource-test the trivial products, like entertainment or social products.
- How far can we take this? Far. We can ask the crowd to build the product too.
- What can we do now?
- BAs: determine beta candidate features.
- Testers: spend less time testing trivial features. Outsource those to the crowd.
- Programmers: Decouple trivial features from critical features.
- UX Designers: Help users find beta features and provide feedback.
- Managers: Encourage more features to get released earlier.
- Executives: Hire a crowdsource test czar.
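A couple of these action items can be sketched in code. The snippet below is a hypothetical illustration, with every flag, feature, and field name invented: trivial beta features are decoupled behind a flag (the programmers’ and managers’ items), and crowd feedback is collected in-product, bundled with context a tester would otherwise have to ask for:

```python
# A hypothetical sketch of two action items: decoupling a trivial beta
# feature behind a flag, and collecting crowd feedback in-product.
# All names here (flags, features, fields) are invented for illustration.

import datetime
import json

BETA_FLAGS = {"activity-feed": True}   # trivial features, crowd-tested
feedback_log = []                      # stand-in for a real collection service

def beta_enabled(feature, user_opted_in):
    """A beta feature shows only if it's flagged on AND the user opted in."""
    return BETA_FLAGS.get(feature, False) and user_opted_in

def render_sidebar(user_opted_in):
    widgets = ["inbox", "settings"]    # critical features: always on, tested in-house
    if beta_enabled("activity-feed", user_opted_in):
        widgets.append("activity-feed (beta)")
    return widgets

def collect_feedback(comment, feature, build, user_id):
    """Tightly integrated feedback: capture the comment together with the
    diagnostic context (feature, build, user, time) automatically."""
    feedback_log.append(json.dumps({
        "comment": comment,
        "feature": feature,
        "build": build,
        "user": user_id,
        "utc": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }))

print(render_sidebar(user_opted_in=True))    # crowd tester sees the beta widget
print(render_sidebar(user_opted_in=False))   # everyone else sees only critical features
collect_feedback("Feed never refreshes", "activity-feed", "2.4-beta", "tester-42")
```

The point is the decoupling: because the beta feature sits behind its own flag, the crowd can exercise it (and complain about it) without the mission-critical paths ever depending on crowd coverage.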
There it is in a nutshell. Of course, the live presentation includes a bunch of fun extra stuff like the 1978 Alpo commercial that may have inspired the term “dogfooding”, as well as other theories and silliness.