I did a Force Field Analysis brainstorm session with my team. We wrote down what we like and don’t like about our jobs as testers. Then, in the “didn’t like” column, we circled the items we felt we had control over changing. Here is what my testers and I don’t like about our jobs.

Items We May Not Have Control Over:

  • When asked what needs to be tested as a result of a five-minute code change, programmers often say “test everything”.
  • Stressful deadlines.
  • Working extra hours.
  • Test time is not adequately considered by Programmers/BAs when determining iteration work. Velocity does not appear to matter to the team.

Items We May Have Control Over:

  • Testers don’t have a way to show the team what they are working on. Our project task boards have a column/status for “Open”, “In Development”, “Developed”, and “Tested”. It’s pretty easy to look under the “In Development” column to see what programmers are working on. But after that, stuff tends to bunch up under the “Developed” column. Even though there may be 10 Features/Stories in “Developed”, the testers may only be working on two of them. A side effect is testers having to constantly answer the question “What are you working on?”...my testers hate that question.
  • Testers don’t know each other’s test skills or subject matter expertise.  We have some 20 project teams in my department.  Most of the products interact.  Some are more technical to test than others.  Let’s say you’re testing ProductA and you need help understanding its interface with ProductB.  Which tester is your oracle?  Let’s say you are testing web services for the first time and you’re not sure how to do this.  Which tester is really good at testing web services and can help you get started?
  • Testers lose momentum when asked to change priorities or switch testing tasks. A programming manager once told me, “each time you interrupt a programmer it takes them 20 minutes to catch back up”. Testers experience the same interruption productivity loss, but arguably to a larger degree. It is annoying to execute tests in half-baked environments while chasing new bugs along the way that may or may not be related.

We will have a follow-up brainstorm on ways to deal with the above.  I’ll post the results.

5 comments:

  1. Shey said...

    I am surprised at the first point of your "Items We May Not Have Control Over". Isn't this a matter of communicating to the programmers that the team requires more specific direction? Maybe point out that "test everything" means none of the programmers' other fixes will be tested for xyz amount of time.

  2. Shaun Hershey said...

    You definitely have control over the "Testers don't have a way to show the team what they are working on" point (assuming you can convince the right people that the change is for the better).

    My team was in a similar situation but when we finally realized that it just wasn't working, we decided to shift around our columns. Our final board ended up similar to what you have in place now, but with a "Testing in Progress" column added between "Developed" and "Tested." Probably not the most original solution, but it worked for us. That way, any time an item went into testing, it got moved to its own column and if it failed, we'd throw it back into "In Development" with an asterisk next to it so that it would be prioritized.

  3. richa said...

    Nice post.
    I'd like to add one more point: testers ask developers about a new feature added to the product, and the developers are not able to explain it properly.
    Well, I like your last point the most.

  4. Steve said...

    I like the idea of this brainstorm. Maybe I'll try it out at my workplace. Here is my suggestion for "Testers don't know each other's skills": do you have a community of practice? I find this the best way to stay in touch within an ever-evolving workplace. We tried to keep lists, but most of that gets out of date quickly. Keep writing. Good stuff!

  5. DiscoveredTester said...

    "Testers lose momentum when asked to change priorities or switch testing tasks. A programming manager once told me, “each time you interrupt a programmer it takes them 20 minutes to catch back up”. Testers experience the same interruption productivity loss, but arguably, to a larger degree. It is annoying to execute tests in half-baked environments, while following new bugs that may or may not be related, along the way."

    I can so relate to that statement. My horror story involved changing tasks about 7 times in the space of a couple of hours. Now that might not have been so bad if what I was working on was tiny by comparison, but each of them was a very large task. It is understandable to be a little frustrated when this happens, but we still need to act like professionals, even if we don't understand the reasons for the stop-and-go traffic that passes through our testing tasks throughout the day.


