If there were a testing conference that consisted of only lightning talks, I would be the first to sign up.  Maybe I have a short attention span or something.  STARwest’s spin on lightning talks is “Lightning Talk Keynotes” in which (I assume) Lee Copeland handpicks lightning talk presenters.  He did not disappoint.  Here is my summary:

Michael Bolton

Michael started with RST’s formal definition of a bug: “anything that threatens the value of the product”.  Then he shared his definition of an issue: “anything that threatens the value of the testing” (e.g., a tool I need, a skill I need).  Finally, Bolton suggested that issues may be more important than bugs, because issues give bugs a place to hide.

Hans Buwalda

His main point was, “It’s the test, stupid”.  Hans suggested that when test automation takes place on a team, it’s important to separate the testers from the test automation engineers.  Don’t let the engineers dominate the process: no matter how fancy the programming is, what it tests is still more important.

Lee Copeland

Lee asked his wife why she cuts the ends off her roasts, and lays them against the long side of the roast, before cooking them.  She wasn’t sure, because she learned it from her mother.  So they asked her mother, who gave the same answer, so they asked her grandmother.  Her grandmother said, “Oh, that’s because my oven is too narrow to fit the whole roast in it”.

Lee suggested most processes begin with “if…then” statements (e.g., if the software is difficult to update, then we need specs).  But over time, the “if” part fades away.  Finally, Lee half-seriously suggested all processes should have a sunset clause.

Dale Emery

If an expert witness makes a single error in otherwise flawless testimony, it raises doubts in the jurors’ minds.  Likewise, if 1 out of 800 automated tests throws a false positive, people accept it at first, but if it keeps happening, people lose faith in the tests and stop using them.  Dale suggests the following prioritized way of preventing this:

  1. Remove the test.
  2. Try to fix the test.
  3. If you are sure it works properly, add it back to the suite.

In summary, Dale suggests that the reliability of the test suite is more important than its coverage.
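
Step 1 doesn’t have to mean deleting the test.  Here’s a minimal sketch of one way to quarantine an unreliable check using pytest’s skip marker (my example, not Dale’s; the test name and bug ID are hypothetical):

    import pytest

    # Step 1: pull the unreliable test out of the trusted suite by
    # quarantining it, so the remaining results stay believable.
    @pytest.mark.skip(reason="quarantined: intermittent false positive (BUG-123)")
    def test_date_range_rollover():
        ...

    # Steps 2 and 3: fix the test, prove it is reliable (e.g., run it many
    # times in isolation), then remove the skip marker to re-add it.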

Julie Gardiner

Julie showed a picture of one of those sliding-piece puzzles, the kind with one empty slot so adjacent pieces can slide into it.  She pointed out that this puzzle could not be solved if it weren’t for the empty slot.

Julie suggested slack is essential for improvement, innovation, and morale and that teams may want to stop striving for 100% efficiency.

Julie calls this “the myth of 100% efficiency”.

Note: as a fun, gimmicky add-on, she offered to give said puzzle to anyone who went up to her after her lightning talk to discuss it with her.  I got one!

Bob Galen

Sorry, I didn’t take any notes other than “You know you’ve arrived when people are pulling you”.  Either it was so compelling I didn’t have time to take notes, or I missed the take-away.

Dorothy Graham

Per Dorothy, coverage is a measure of some aspect of thoroughness.  100% coverage does not mean running 100% of all the tests we’ve thought of; coverage is not a relationship between the tests and themselves.  Instead, it is a relationship between the tests and the product under test.

Dorothy suggests, whenever you hear “coverage”, ask “Of what?”.
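
To make that concrete, here’s a small illustration of my own (not Dorothy’s): the same test can claim 100% of one kind of coverage while leaving another kind incomplete.

    # "100% coverage" -- of what?
    def discount(price, is_member):
        total = price
        if is_member:
            total *= 0.9
        return total

    def test_member_discount():
        # Executes every statement in discount(): 100% *statement* coverage.
        assert discount(100, True) == 90.0

    # Yet the False outcome of `if is_member` is never exercised, so *branch*
    # coverage is incomplete -- and requirements or risk coverage may be lower
    # still.  "Coverage" is always coverage of some model of the product.
    test_member_discount()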

Jeff Payne

Jeff began by suggesting, “If you’re a tester and don’t know how to code in 5 years, you’ll be out of a job”.  He said 80% of all tester job posts require coding and this is because we need more automated tests.

Martin Pol

My notes are brief here, but I believe Martin was suggesting that, in the near future, testers will need to focus more on non-functional tests.  The example Martin gave is the cloud: if the cloud goes down, your services (dependent on the cloud) become unavailable.  This is an example of an extra dependency that comes with adopting new technology (i.e., the cloud).

I’m going back through my STARwest notes and I want to blog about a few other sessions I enjoyed.

One of those was Nancy Kelln and Lynn McKee’s “Test Estimation and the Art of Negotiation”, in which they suggested a new way to answer the popular question…

How long will testing take?

I figured Nancy and Lynn would have some fresh and interesting things to say about test estimation since they hosted the Calgary Perspectives on Software Testing workshop on test estimation this year. I was right.

In this session, they tried to help us get beyond using what one guy in the audience referred to as a SWAG (Silly Wild-Ass Guess).  They walked us through three challenges:

  1. Nancy and Lynn pointed to the challenge of dependencies. Testers sometimes attempt to deal with dependencies by padding their estimates. This won’t work for Black Swans, which are the unknown unknowns; those events cannot be planned for or referenced.
  2. The second challenge is optimism.  We think we can do more than we can.  Nancy and Lynn walked through an example of the impact of bugs on testing time: as more bugs are discovered, and their fixes need to be verified, more time is taken away from new testing, time that is often underestimated.
  3. The third challenge is identifying what testing means to each person.  Does it include planning?  Reporting?  Lynn suggested trying to estimate how much fun you would have at Disneyland.  Does the trip start when I leave my house, get to California, enter the park, or get on a ride?  When does it end?

Eventually, Lynn and Nancy suggested the best estimate is no estimate at all.

Instead, it is a negotiation with the business (or your manager). When someone asks, “How long will testing take?”, maybe you should explain that testing is not a phase. Testing is exploration, discovery, learning, and reporting. Testing could end when there are no more interesting questions to answer, but stopping testing is a business decision.

They further suggested that testers have a responsibility to help the business understand the trade-offs; if quality expectations are high, it may require more testing than if they are lower. Change your test approach to fit the needs. If you only have one day to test, you can still do your best to find the most mission-critical information you can in one day.

Apart from the session content, Lynn and Nancy were awesome presenters. They used volunteers from the audience for role plays, cartoons, brainstorms, and other interactive techniques to keep the session engaging. I heard they proposed a full-day session for STPCon Spring. That would be a fun day.

Thanks Nancy and Lynn!

When someone walks up to your desk and asks, “How’s the testing going?”, a good answer depends on remembering to tell that person the right things.

After reading Michael Kelly’s post on coming up with a heuristic (model), I felt inspired to tweak his MCOASTER test reporting heuristic model.  I tried using it but it felt awkward.  I wanted one that was easier for me to remember, with slightly different trigger words, ordered better for my context.  It was fun.  Here is what I did:

  1. I listed all the words that described things I may want to cover in a test report.  Some words were the same as Michael Kelly’s but some were different (e.g., “bugs”).
  2. Then I took the first letter of each of my words and plugged them into Wordsmith.org’s anagram solver (a scripted version of this step is sketched just after this list).
  3. Finally, I skimmed through the anagram solver results, until I got to those starting with ‘M’ (I knew I wanted ‘Mission’ to be the first trigger).  If my mnemonic didn’t jump out at me, I tweaked the input words slightly using synonyms and repeated step 2.
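
For the curious, here’s a minimal Python sketch of what steps 2 and 3 automate.  It assumes a word list at /usr/share/dict/words (common on Unix-like systems; adjust the path for your machine) and brute-forces two-word anagrams of the first letters; Wordsmith.org’s solver is obviously more convenient.

    from itertools import combinations
    from collections import Counter

    # Step 1: trigger words I might want in a test report.
    triggers = ["Mission", "Obstacles", "Risks", "Environment",
                "Bugs", "Audience", "Techniques", "Status"]

    # Step 2: the pool of first letters (m, o, r, e, b, a, t, s).
    pool = Counter(word[0].lower() for word in triggers)

    # Load a word list; the path is an assumption about your system.
    with open("/usr/share/dict/words") as f:
        words = {w.strip().lower() for w in f if w.strip().isalpha()}

    # Keep only words that can be spelled from the letter pool.
    candidates = sorted(w for w in words if len(w) > 1 and not Counter(w) - pool)

    # Step 3 (roughly): two-word phrases using every letter exactly once.
    for a, b in combinations(candidates, 2):
        if Counter(a) + Counter(b) == pool:
            print(a, b)  # e.g., "bats more" -> MORE BATS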

Two minutes later I settled on MORE BATS.  Having been an avid caver for some 13 years, I found it perfect.  When someone asks for a test report, I feel slightly lost at first, until I see MORE BATS.  It’s just like caving: when I’m deep in a cave and feeling lost, seeing more bats is a good sign the entrance is in that general direction, because bats tend to congregate near the entrance.

Here’s how it works (as a test report heuristic):

Mission
Obstacles
Risks
Environment

Bugs
Audience
Techniques
Status

“My mission is to test the new calendar control.  My only obstacle is finding enough future-dated data for some of my scenarios.  The risks I’m testing include long-range future entries, multiple date ranges, unknown begin dates, and extremely old dates.  I’m using the Dev environment because the control is not working in QA.  I found two date range bugs.  Since this is a programmer (audience) asking me for a test report, I will describe my techniques: I’m checking my results in database tableA because the UI is not completed.  I think I’ll be done today (status).”
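
If it helps to keep the triggers at hand, here’s one hypothetical way to hold them in code; the prompts are my paraphrase of how each trigger is used in the example above, not official definitions.

    # A MORE BATS checklist as a plain data structure (my paraphrase).
    MORE_BATS = [
        ("Mission",     "What am I testing, and why?"),
        ("Obstacles",   "What is slowing the testing down?"),
        ("Risks",       "Which risks am I covering?"),
        ("Environment", "Where am I testing, and why there?"),
        ("Bugs",        "What have I found so far?"),
        ("Audience",    "Who is asking, and what do they care about?"),
        ("Techniques",  "How am I testing?"),
        ("Status",      "How far along am I, and when will I be done?"),
    ]

    def report_prompts():
        """Walk the triggers in order, as prompts for a verbal test report."""
        for trigger, prompt in MORE_BATS:
            print(f"{trigger}: {prompt}")

    report_prompts()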

Try to make your own and let me know what you come up with.


