Sometimes production bug scenarios are difficult to recreate in a test environment.

One such bug was discovered on one of my projects:

If ItemA is added to a database table after SSIS Package1 executes but before SSIS Package2 executes, an error occurs.  The packages execute frequently and at unpredictable intervals, so a human trying to reproduce the bug cannot time the addition of ItemA precisely.  Are you with me?

So what is a tester to do?

The answer is: control your test environment.  Disable the package schedules and execute each package manually, exactly once, when you want it to run (there’s a sketch of scripting these steps after the list below).

  1. Execute SSIS Package1 once.
  2. Add ItemA to the database table.
  3. Execute SSIS Package2 once.
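
For what it’s worth, the same controlled sequence can be scripted so the reproduction is repeatable on demand.  Here is a minimal sketch, assuming the packages can be run from the command line with dtexec and that the test database is reachable via pyodbc; the package paths, connection string, and table/column names are hypothetical placeholders.

    # Minimal sketch: reproduce the ordering bug on demand in a controlled
    # test environment. Assumes dtexec (the SSIS command-line runner) is on
    # the PATH and pyodbc can reach the test database. The paths, connection
    # string, and table/column names are hypothetical placeholders.
    import subprocess
    import pyodbc

    CONN_STR = (
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=test-sql;DATABASE=TestDb;Trusted_Connection=yes;"
    )

    def run_package(package_path):
        """Execute one SSIS package a single time via dtexec."""
        subprocess.run(["dtexec", "/F", package_path], check=True)

    def add_item_a():
        """Insert ItemA into the table between the two package runs."""
        conn = pyodbc.connect(CONN_STR)
        try:
            conn.cursor().execute(
                "INSERT INTO dbo.Items (Name) VALUES (?)", ("ItemA",)
            )
            conn.commit()
        finally:
            conn.close()

    if __name__ == "__main__":
        run_package(r"C:\Packages\Package1.dtsx")   # step 1
        add_item_a()                                # step 2
        run_package(r"C:\Packages\Package2.dtsx")   # step 3: the error should surface here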

A tester on my team argued, “But that’s not realistic.”  She’s right.  But if we understand the bug as well as we think we do, we should be able to repeatedly experience the bug and its fix using our controlled environment.  And if we can’t, then we really don’t understand the bug.

This is what it’s all about.  Be creative as a tester, simplify things, and control your environment.

Which of these scenarios will make you a rock star tester?  Which will make your job more interesting?  Which provides the most flexible way for your team to handle turbulence?

 

SCENARIO 1

Programmer: We need to refactor something this iteration.  It was an oversight and we didn’t think we would have to.

Tester: Can’t this wait until next iteration?  If it ain’t broke, don’t fix it.

BA: The users really can’t wait until next iteration for FeatureA. I would like to add FeatureA to the current iteration.

Tester: Okay, which feature would you like to swap it out for?

Programmer: I won’t finish coding this until the last day of the iteration.

Tester: Then we’ll have to move it to a future iteration; I’m not going to have time to test it.

 

SCENARIO 2

Programmer: We need to refactor something this iteration. It was an oversight and we didn’t think we would have to.

Tester: Yes, I can test it. I’ll need your help, though.

BA: The users really can’t wait until next iteration for FeatureA. I would like to add FeatureA to the current iteration.

Tester: Yes, I can test it. However, these are the only tests I’ll have time to do.

Programmer: I won’t finish coding this until the last day of the iteration.

Tester: Yes, I can test it…as long as we’re okay releasing it with these risks.

Yesterday, a tester on my team gave an excellent presentation for our company’s monthly tester community talk.  She talked about the effective tester conversation skills she has learned in her 15 years as a tester.  Here are some of my favorite takeaways:

  • When raising an issue, whether by email or in person, explain the concrete things you have actually tried.  People appreciate knowing the effort you put into the test, and they sometimes spot problems.
  • Replace pronouns with proper names.  Even if the conversation thread’s focus is the Save button, don’t say, “when I click on it”, say “when I click on the Save button.”
  • Before logging a bug, give your team the benefit of the doubt.  Explain what you observe and see what they say.  She said that about 50% of the time, things she initially suspects are bugs turn out not to be.  For example, the BA may have discussed it with the developer and not communicated it back to the team yet.
  • Asking questions rocks.  You can do it one-on-one or in team meetings.  One advantage of doing it in meetings is that it sparks other people’s thinking.
  • It’s okay to say “I don’t understand” in meetings.  But if, after asking three times, you appear to be the only one in the meeting not understanding, stop asking!  Save it for a one-on-one discussion so you don’t develop a reputation for wasting people’s time.
  • Don’t speak in generalities.  Be precise.  Example:
    • Don’t say, “nothing works”.
    • Instead, pick one or two important things that don’t work: “the invoice totals appear incorrect and the app does not seem to close without using the Task Manager”.
  • Know your team.  If certain programmers have a rock solid reputation and don’t like being challenged, take some extra time to make sure you are right. Don’t waste their time.  It hurts your credibility.

She took us through some beautiful exercises to reinforce the above points and others.  My favorite was taking an email from a tester and improving it piece by piece.

I attended Robert Sabourin’s Just-In-Time Testing (JITT) tutorial at CAST2011. 

The tutorial centered around the concept of “test ideas”.  According to Rob, a test idea is “the essence of the test, but not enough to do the test”.  You get the details when you actually do the test.  A test idea should be roughly the size of a Tweet.  Rob believes one should begin collecting test ideas “as soon as you can smell a project coming, and don’t stop until the project is live”.

The notion of getting the test details when you do the test makes complete sense to me and I believe this is the approach I use most often.  In Rapid Software Testing, we called them “test fragments” instead of “test ideas”.  James Bach explains it best, “Scripted (detailed) testing is like playing 20 questions and writing out all the questions in advance.” …stop and think about that for a second.  Bach nails it for me every time!

We discussed test idea sources (e.g., state models, requirements, failure modes, mind maps, soap operas, data flow).  These sources will leave you with loads of test ideas, certainly more than you will have time to execute.  Thus, it’s important to agree on a definition of quality, and use that definition to prioritize your test ideas.

As a group, we voted on three definitions of quality:

  1. “…Conformance to requirements” – businessman and author, Phil B. Crosby
  2. “Quality is fitness for use” - 20th century management consultant, Joseph Juran
  3. “Quality is value to some person” - computer scientist, author and teacher, Gerald M. Weinberg

The winning definition was #2, which also became my favorite, displacing my previous favorite, #3.  #2 is easier to understand and a bit more specific.

In JITT, the tester should periodically do this:

  1. Adapt to change.
  2. Prioritize tests.
  3. Track progress.

And with each build, the tester should run what Rob calls Smoke Tests and Fast Tests:

  • Smoke Tests – The purpose is build integrity.  Whether the test passes or fails is less important than whether the outcome is consistent.  For example, if TestA failed in dev, it may be okay for TestA to fail in QA (see the sketch after this list).  That is an interesting idea.  But, IMO, one must be careful.  It’s pretty easy to make 1000 tests that fail in two environments with different bits.
  • Fast Tests – Functional, shallow but broad.
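
To make the consistency idea concrete, here is a minimal sketch, assuming each environment’s smoke-test results are available as a simple name-to-outcome mapping; the test names and data are hypothetical.

    # Minimal sketch of "consistency over pass/fail" for smoke tests across
    # environments. The result data and test names are hypothetical.
    def inconsistent_outcomes(dev, qa):
        """Return tests whose outcome differs between the two environments."""
        shared = dev.keys() & qa.keys()
        return sorted(name for name in shared if dev[name] != qa[name])

    dev_results = {"TestA": "fail", "TestB": "pass", "TestC": "pass"}
    qa_results = {"TestA": "fail", "TestB": "fail", "TestC": "pass"}

    # TestA failing in both environments is consistent, so only TestB
    # is worth investigating as a build-integrity problem.
    print(inconsistent_outcomes(dev_results, qa_results))  # ['TestB']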

Most of the afternoon was devoted to group exercises in which we developed test ideas from the perspective of various functional groups (e.g., stakeholders, programmers, potential users).  We used Rob’s colored index card technique to collect the test ideas.  For example: red cards are for failure mode tests, green for confirmatory tests, yellow for “ilities” like security and usability, blue for usage scenarios, etc.

Our tests revolved around a fictitious chocolate wrapping machine and we were provided with a sort of Mind Map spec describing the Wrap-O-Matic’s capabilities.

After the test idea collection brainstorming within each group, we prioritized other groups’ test ideas.  The point here was to show us how much test priorities can differ depending on whom you ask.  Thus, as testers, speaking to stakeholders and other groups is crucial for test prioritization.

At first, I considered using the colored index card approach on my own projects, but after seeing Rob’s walkthrough of an actual project he used them for, I changed my mind.  Rob showed a spreadsheet he created, where he rewrote all his index card test ideas so he could sort, filter, and prioritize them.  He assigned each a unique ID and several other attributes.  Call me crazy, but why not put them in the spreadsheet to begin with, or in some other modern tool designed to help organize them?
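
If it helps, here is a minimal sketch of what I have in mind, assuming one simple record per test idea; the fields and example ideas are hypothetical, loosely mirroring the colored-card categories above.

    # Minimal sketch: capture test ideas in a sortable, filterable structure
    # from the start instead of on index cards. Fields and examples are
    # hypothetical, loosely mirroring the colored-card categories above.
    from dataclasses import dataclass

    @dataclass
    class TestIdea:
        idea_id: int
        text: str        # tweet-sized essence of the test
        category: str    # "failure", "confirmatory", "ility", "usage", ...
        priority: int    # lower number = run sooner

    ideas = [
        TestIdea(1, "Wrapper jams when the foil roll runs out mid-bar", "failure", 1),
        TestIdea(2, "A standard bar comes out fully wrapped", "confirmatory", 2),
        TestIdea(3, "Operator panel is usable with gloves on", "ility", 3),
    ]

    # Filter and sort the backlog the way a spreadsheet would.
    for idea in sorted(ideas, key=lambda i: i.priority):
        if idea.category != "ility":
            print(idea.idea_id, idea.text)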

Overall, the tutorial was a great experience and Rob is always a blast to learn from.  His funny videos and outbursts of enthusiasm always hold my attention.  His material and ideas are usually practical and generic enough to apply to most test situations.

Thanks, Rob!

After completing my own CAST2011 Emerging Topics track presentation, “Tester Fatigue and How to Combat It”, I stuck around for several other great emerging topics and later returned for the Lightning Talks.

My standout favorites were:

  • Who Will Test The Robots? – This brief talk lasted only a minute or two, but I really loved the Q&A and have been thinking about it since.  The speaker was referred to as T. Chris (and was well known to the other testers).  One observer suggested that we’ll need to build robots to test the robots.  I realized this question will need to be answered sooner than we think, especially after the dismal reliability of my Roomba.
  • Improv Comedy / Testing Songs – This was my first encounter with the brilliant and creative Geordie Keitt.  He changes the lyrics of classic pop songs to be about testing.  You can hear a sampling of his work on his blog.  Sandwiched between his live musical performances were several audience-participation testing comedy improvs with Geordie, Lanette Creamer, and Michael Bolton.  It was bizarre seeing Bolton quickly turn an extension cord into a cheese cutting machine that needed better testing.  At some point during this session, Lanette emerged, dressed as a cat (from head to toe), and began singing Geordie’s tester version of Radiohead’s “Creep”.  This version must have been called “Scope Creep”.  Here is a clip…

    “Features creep. Features will grow. What the hell is this ding here. That don’t belong here…”

  • Stuff To Do When I Get Back From CAST2011 – This Lightning Talk was presented by Liz Marley.  I found it very classy.  This is what Liz said she would do after CAST2011 (these are from my notes so I may have changed them slightly):
    • Send a hand-written thank you note to her boss for sending her to CAST2011.
    • Review her CAST2011 notes while fresh in her mind.
    • Follow up with people she met at CAST2011.
    • Sign up for BBST.
    • Get involved in Weekend Testing.
    • Give a presentation at her company about what she learned at CAST2011.
    • Plan a peer conference.
    • Watch videos for CAST2011 talks she missed.
    • Make a blog post about what she learned at CAST2011.
    • Invite her manager to come to CAST2012.

      Thanks, Liz!  So far I’ve done five of them.  How about you?
  • Mark Vasco’s Test Idea Wall – Mark showed us pictures of his team’s Test Idea Wall and explained how it works: take a wall in a public place and plaster it with images, phrases, or other things that generate test ideas.  Invite your BAs and programmers to contribute.  Example: a picture of a dog driving a car reminds them not to forget to test their drivers.  This is fun and valuable, and I started one with one of my project teams this week.
  • Between Lightning Talks, to fill the dead time, facilitator Paul Holland ripped apart popular testing metrics.  He took real, “in-use”, testing metrics from the audience, wrote them on a flip chart, then explained how ridiculous each was.  Paul pointed out that “bad metrics cause bad behavior”.  Out of some 20 metrics, he concluded that only two were valuable:
    • Expected Coverage vs. Actual Coverage
    • Number of Bugs Found

Lightning Talks rock!


