In response to my What I Love About Kanban As A Tester #1 post, Anonymous stated:

“The whole purpose of documenting test cases…[is]…to be able to run [them] by testers who don’t have required knowledge of the functionality.”

Yeah, that’s what most of my prior test managers told me, too…

“if a new tester has to take over your testing responsibilities, they’ll need test cases”

I wouldn’t be surprised if a secret QA manager handbook went out to all QA managers, stating the above as the paramount purpose of test cases.  It was only recently that I came to understand how wrong all those managers were.

Before I go on, let me clarify what I mean by “test cases”.  When I say “test cases”, I’m talking about something with steps, like this:

  1. Drag ItemA from the catalog screen to the new order screen.
  2. Change the item quantity to “3” on the new order screen.
  3. Click the “Submit Order” button.

Here’s where I go on: 

  • When test cases sit around, they get stale.  Everything changes…except your test cases.  Giving these to n00bs is likely to result in false fails (and maybe even rejected bug reports).
  • When test cases are blindly followed, we miss the house burning down right next to the house that just passed our inspection.
  • When test cases are followed, we are only doing confirmatory testing.  Even negative (AKA “unhappy”) paths are confirmatory testing.  If that’s all we can do, we are one step closer to shutting down our careers as testers.
  • Testing is waaaay more than following steps.  To channel Bolton, a test is something that goes on in your brain.  Testing is more than answering the question, “pass or fail?”.  Testing is sometimes answering the question, “Is there a problem here?”. 
  • If our project mandates that testers follow test cases, for Pete’s sake, let the n00bs write their own test cases.  It may force them to learn the domain.
  • Along with test cases comes administrative work.  Perhaps time is better spent testing.
  • If the goal is valuable testing from the n00b, wouldn’t that best be achieved by the lead tester coaching the n00b?  And if that lead tester didn’t have to write test cases for a hypothetical n00b, wouldn’t that lead tester have more time to coach the hypothetical n00b, should she appear?  Here’s a secret: she never will appear.  You will have a stack of test cases that nobody cares about; not even your manager.

In my next post I’ll tell you when test cases might be a good idea.

...regression testing is optional.  What?  The horror!!!

Back in the dark ages, with Scrum, we were spending about 4 days of each iteration regression testing.  This product has lots of business logic in the UI, lots of drag-and-drop-type functionality, and very dynamic data, so it has never been a good candidate for automation.  Our regression test approach was to throw a bunch of humans at it (see my Group Regression Testing and Chocolate post).  With Scrum, each prod deployment was a full build, including about 14 modules, because lots of stuff got touched.  Thus, we always did a full regression test, touching all the modules.  Even after an exhaustive regression test, we generally had one or two “escapes” (i.e., bugs that escaped into production).

Now, ask me how often those regression tests failed.  …Not very often.  And, IMO, that is a lot of waste.

With Kanban, each prod release only touches one small part of the product.  So we are no longer doing full builds.  Think of it like doing a production patch.  We’ve gotten away from full regression tests because, with each release, we are only changing one tiny part of the product.  It is far less risky.  Why test the hell out of bits that didn’t change? 

So now we regression test based on one risk: the feature going to prod.  Sometimes it means an hour of regression tests.  Sometimes it means no regression tests.  So far, it’s been a big net reduction in the time spent on regression testing.  And this is a good thing.

We switched to Kanban in February.  So far, not a single escape has made it to prod (yes, I’m knocking on wood).

This success may just be a coincidence.  Or…maybe it’s easier for teams to prevent escaped bugs when those teams can focus on one Feature at a time.   Hmmmmmm…

For those of you writing automated checks and giving scrum reports, status reports, test reports, or some other form of communication to your team, please watch your language…and I'm not talking about swearing.

You may not want to say, “I found a bunch of issues”, because sometimes when you say that, what you really mean is, “I found a bunch of issues in my automated check code” or “I found a bunch of issues in our product code”.  Please be specific.  There is a big difference, and your listeners may assume the wrong thing.

If you often do checking by writing automated checks, you may not want to say, “I’m working on FeatureA”, because what you really mean is “I’m writing the automated checks for FeatureA and I haven't executed them or learned anything about how FeatureA works yet” or “I’m testing FeatureA with the help of automated checks and so far I have discovered the following…”

The goal of writing automated checks is to interrogate the system under test (SUT), right?  The goal is not just to have a bunch of automated checks.  See the difference?

Although your team may be interested in your progress creating the automated checks, they are probably more interested in what the automated checks have helped you discover about the SUT.
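
To make the difference concrete, here is a minimal sketch of the kind of check I mean.  (It’s in Python; the endpoint, field names, and values are hypothetical, invented purely for illustration, not taken from any real product.)  Notice that the interesting output is not “pass” or “fail” but what the check reports about the product:

    # Minimal sketch of an automated check (hypothetical endpoint and fields).
    # The point: the check exists to surface information about the SUT,
    # and its report should say where a problem appears to be.
    import requests

    def check_order_total():
        order = {"item": "ItemA", "quantity": 3, "unit_price": 10.00}
        expected_total = order["quantity"] * order["unit_price"]

        response = requests.post("https://sut.example.com/orders", json=order)
        if response.status_code != 200:
            # Could be a product problem *or* a problem in the check's own setup.
            print(f"Check could not run: POST /orders returned {response.status_code}")
            return

        actual_total = response.json().get("total")
        if actual_total != expected_total:
            # Report what was learned about the product, not just "fail".
            print(f"Possible product problem: order total was {actual_total}, "
                  f"expected {expected_total} for {order}")
        else:
            print("Order total matched the expected value for this one input.")

    check_order_total()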

It’s the testing, stupid.  That’s why we hired you instead of another programmer.

...no testing deadlines…the freedom to test as long as I want.

Back in the dark ages, with Scrum, all the sprint Features had to be tested by the end of the iteration.  Since programming generally continued until the last minute (we couldn’t help ourselves), testers were sometimes forced to cut corners.  Even in cases where the whole team (e.g., programmers, BAs) jumped in to help test, there was still a tendency to skimp on testing that would otherwise be performed.  The team wants to be successful.  Success is more easily measured by delivered Features than by Feature quality.  That’s the downside of deadlines.

With Kanban, there are no deadlines…you heard me!  Testers take as long as they need.  If the estimates are way off, it doesn’t leave gaps or crunches in iterations.  There are no iterations!  Warning: I fear unskilled testers may actually have a difficult time with this freedom.  Most testers are used to being told how much time they have (i.e., “The Time’s Up! Heuristic”).  So with Kanban, other Stopping Heuristics may become more important.

Jon Bach walked up to the podium and (referring to his readiness as the presenter) asked us how to tell the difference between a tester and a programmer:  A programmer would say, “I’m not ready for you guys yet”.

STPCon Spring 2012 kicked off with the best keynote I have seen yet.  Jon took on the recent Test-Is-Dead movement using a Journalism-Is-Dead metaphor.

He opened with a question: “Did anyone get a ‘USA Today’ delivered to their room this morning?”

“No.”  (Something I, as a tester, was embarrassed not to have noticed.)

And after a safety language exercise, Jon presented a fresh definition of testing, one that reflects his previous career in journalism:

Testing is an interrogation and investigation in pursuit of information to aid evaluation.

Jon wondered out loud what had motivated the Test-Is-Dead folks.  “Maybe there is a lot of bad testing in our midst.”  And he proceeded to examine about 7 threats (I think there were more) that he believed could actually make testing dead.  Each testing threat was reinforced with its metaphorical journalism threat and coupled with a quote from the Test-Is-Dead folks. 

(I listed each threat to testing first, followed by its journalism counterpart.  I included an example Test-Is-Dead quote for threat #3 below.)

  1. (threat to testing) If the value of testing becomes irrelevant – (threat to journalism) If we stop caring about hearing the news of what is happening in the world. (implied: then testing and journalism are dead)
  2. If the quality of testing is so poor that it suffers an irreversible “reputation collapse event”.  If “journalist” comes to mean “anybody who writes” (e.g., blogs, tweets, etc.).
  3. If all users become early adopters with excellent technical abilities.  If everyone becomes omniscient; they already know today’s weather and tomorrow’s economic news.

    For this threat, the Test-Is-Dead quote was from James Whittaker, “You are a tester pretending to be a user”.  The context of Whittaker’s statement was that testers may not be as important because they are only pretending to be users, while modern technology may allow actual users to perform the testing.  Bach’s counterpoint was: since not all users may want to be testers and not all users may possess the skills to test, there may still be value for a tester role.
  4. If testers are forced to channel all thoughts and intelligence through a limited set of tools and forced to only test what can be written as “executable specifications”.  If journalists could only report what the state allows.  Jon cited the example of the Egyptian news anchor who had just resigned from state media after 20 years, due to what she called a “lack of ethical standards” in the media’s coverage of the Arab Spring.
  5. If all the tests testers could think of were confirmatory.  If journalists never dug deeper (e.g., if they always assumed the story was just a car crash).
  6. If software stops changing and there is no need to ask new questions.  If the decisions people make today no longer depend on the state of the economy, the weather, who they want to elect, etc.
  7. If the craft of testing is made uninviting, turned into a boring clerical activity that smart, talented, motivated, creative people are not interested in.  If you had to file a “news release approval” form or go through the news czar for every news story you told.

Jon’s talk had some other highlights for me:

  • He shared a list of tests he performed on eBay’s site prior to his eBay interview (e.g., can I find the most expensive item for sale?).  Apparently, he reported his test results during the interview.  This is an awesome idea.  If anyone did that to me, I would surely hire them.
  • He also showed a list of keynote presentation requirements he received from STPCon.  He explained how these requirements (e.g., try to use humor in your presentation) were like tests.  Then he used the same metaphor to contrast those “tests” with “checks”: am I in the right room?  Is the microphone on?  Do I have a glass of water?

Jon concluded where he started.  He revealed that although newspapers may be dead, journalism is not.  Those journalists are just reporting the news differently.  And maybe it’s time to cut the unskilled testers loose as well.  But, according to Jon, the need for exploration and sapience in testing, in a world of rapid development, is more important than ever.

Hey, conference haters.  Maybe it’s you…

I just got back from another awesome testing conference, Spring STPCon 2012 in New Orleans.  Apparently not all attendees shared my positive experience.  Between track sessions I heard the usual gripes:

 

“It’s not technical enough!”

“I expected the presenter to teach me how to install a tool and start writing tests with it.”

“It was just another Agile hippy love fest.”

“He just came up with fancy words to describe something I already do.”

 

I used to whine with the best of them.  Used to.  But now I have a blast and return full of ideas and inspiration.  Here are my suggestions on how to attend a testing conference and get the most out of it:

  • Look for ideas, not instructions.  Adjust your expectations.  You are not going to learn how to script in Ruby.  That is something you can learn on your own.  Instead, you are going to learn how one tester used Ruby to write automated and manual API-layer REST service checks.
  • Follow the presenters.  Long before the conference, select the track sessions you are interested in.  Find each presenter’s testing blog and/or Twitter name and follow them for several weeks.  Compare them and discard the duds.
  • Talk to the presenters.  At the conference, use your test observation skills to identify presenters.  Introduce yourself and ask questions related to your project back at the office.  If you did my second bulleted suggestion above, you now have an ice-breaker, “Hey, I read your blog post about crowd source testing, I’m not sure I agree…”.
  • Attend the non-track-session stuff too.  I think track sessions are the least interesting part of conferences.  The most interesting, entertaining, and easily digestible parts are the Lightning Talks, Speed Geeking, Breakfast Bytes, meal discussion tables, tester games, and keynotes.  Don’t miss these.
  • Take notes.  Take waaaaaay more notes than you think you need.  I bring a little book and write non-stop during presentations.  It keeps me awake and engaged.  I can flip through said book on the plane, even when forced to turn off all personal electronics.
  • Log ideas.  Sometimes ideas are given directly during presentations.  But mostly, they come to you while applying information from presentations to your own situation.  I write the word “IDEA” in my book, followed by the idea.  Sometimes these ideas have nothing to do with the presentation context.
  • Don’t flee the scene.  When the conference ends each day, stick around.  You’ll generally find the big thinkers still talking about testing in an informal hallway discussion.  I am uncomfortable in social situations and always feel awkward and intimidated by these folks, but they are generally thrilled to bend your ear.
  • Mix and mingle.  Again, I find parties and social situations extremely scary.  Despite that fear, I almost always make it a point to eat my conference meals with a group of people I’ve never seen before.  It always starts out awkward, but it ends with some new friends, business cards, and the realization that other testers are just as unsophisticated as I am.
  • Submit a presentation.  If you hated one or more track sessions, channel that hate into your own presentation.  Take all the things you hated and do the opposite.  I did.  I got sick of always seeing consultants, vendors, and people who work for big fancy software companies.  So I pitched the opposite.  The real trick here is if you get accepted, the conference is free.  Let’s see your boss turn that one down.
  • Play tester games or challenges.  If James Bach, Michael Bolton, or any of the other popular context-driven approach testers are attending the conference, tell them you are interested in playing tester games.  They are usually happy to coach you on testing skills in a fun way.  It may be a refreshing break from track sessions.
  • Write a thank you card to your boss.  Don’t send an email.  Send something distinctive.  Let them know how much you appreciate their training money.  Tell them a few things you learned.  Tell them about my next bullet.
  • Share something with your team.  The prospect of sharing your conference takeaways with your team will keep you motivated to learn during the conference and help you put those ideas to use.

What do you do to get the most out of your testing conference experiences?


