Bob Galen’s STPCon session, entitled “Agile Testing within SCRUM”, had an interesting twist I did not expect. After a Scrum primer, Bob suggested that test teams can use a Scrum wrapper around their test activities, regardless of what the dev methodology may be.

In other words, even if you’re testing for one or more non-Scrum dev teams, you may still use Scrum to be a better test team. This is kind of a fun idea because I’ve been chomping at the bit to be part of a Scrum team. The idea is that your QA team holds the daily stand-up meetings, creates a sprint backlog, tracks sprint progress with a burndown chart, and ends each sprint with a review meeting to reflect on sprint success/failure. You can add as many Scrum practices as you find valuable (e.g., invite project stakeholders like devs/customers to prioritize sprint backlog items or attend daily meetings).

Wrapping QA practices in Scrum is actually not that difficult. For example, sprint backlog items can be bugs to retest, features to test, or test cases to write. A daily stand-up report might be, “Yesterday I tested 5 features and logged 16 bugs, today I will test these other features, and Bug13346 is blocking me from executing several tests.”

My QA team actually started holding Scrum meetings (see picture) about three months ago, and it seems to help us stay more focused each day. What’s lacking is a formal sprint goal and a means to track progress towards it. Bob Galen’s little session has convinced me it’s worth a try. At least to tide me over till all my devs implement Scrum!

Many of my notes from Hans Buwalda’s STPCon session are test design tips that can also apply to manual testing. One of my favorite tips was to remember to go beyond requirement-based testing. A good QA Manager should say, “I know you have tested these ten requirements; now write me some tests that will break them.”

As testers, we should figure out what everyone else forgot about. Those forgotten scenarios are the good tests. They are where we can shine and provide extra value to the team. One way to find them is to take a simple test and make it more aggressive.

Example Requirement: A user can edit ItemA.

Requirement-based Test: UserA opens ItemA in edit mode.

How can I make this test more aggressive? Let’s see what happens if:

  • UserA and UserB both open ItemA in edit mode at the same time.
  • UserA opens ItemA in edit mode when UserA already has ItemA in edit mode.
  • UserA opens ItemA in edit mode, makes changes, goes home for the weekend, then attempts to save changes to ItemA on Monday.
  • UserA opens ItemA in edit mode, loses network connectivity, then attempts to save ItemA.

What else can you think of?
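
To make one of these concrete, here is a rough sketch of the concurrent-edit case as an automated check. Everything here is made up for illustration — the toy ItemStore, its version counter, and EditConflictError are stand-ins; a real test would go through whatever API your AUT actually exposes.

    # Hypothetical sketch: two users editing the same item at once.
    # The in-memory ItemStore below stands in for the real AUT.

    class EditConflictError(Exception):
        pass

    class ItemStore:
        """Toy versioned store: a save fails if the item changed underneath you."""
        def __init__(self):
            self.items = {"ItemA": {"title": "original", "version": 1}}

        def open_for_edit(self, name):
            item = self.items[name]
            return {"name": name, "title": item["title"], "version": item["version"]}

        def save(self, draft):
            item = self.items[draft["name"]]
            if item["version"] != draft["version"]:
                raise EditConflictError("item changed since it was opened")
            item["title"] = draft["title"]
            item["version"] += 1

    def test_concurrent_edit_conflict():
        store = ItemStore()
        draft_a = store.open_for_edit("ItemA")   # UserA opens ItemA
        draft_b = store.open_for_edit("ItemA")   # UserB opens the same item

        draft_a["title"] = "Edited by A"
        store.save(draft_a)                      # first save wins

        draft_b["title"] = "Edited by B"
        try:
            store.save(draft_b)                  # must not silently overwrite UserA's save
            assert False, "expected an edit conflict"
        except EditConflictError:
            pass                                 # correct behavior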

Here are 10 things I heard Hans Buwalda say about test automation. I have thought about each of these to some extent and I would love to discuss any that you disagree with or embrace.

  1. Stay away from “illegal checks”. Do not check something just because an automated test is there. Stay within the scope of the test, which should have been defined by the test designer. There should be a different test for each thing to check.
  2. If bugs found by tests will not be fixed soon, do not keep executing those tests.
  3. All Actions (AKA Keywords) with parameters should have defaults so the parameters do not have to be specified. This makes it easier for the test author to focus on the target (see the click_button sketch after this list).
  4. Group automated tests into modules (i.e., chunks of tests that target a specific area). A module’s tests should not depend on other modules.
  5. Do not use copy and paste inside your automation code. Instead, be modular: rather than copying low-level steps into another test, call a procedure that encapsulates those steps (see the submit_order sketch after this list). This prevents a maintenance nightmare when the AUT changes.
  6. Remove all hard-coded wait times and use active timing instead. Never tell a test to wait 2 seconds before moving on; if it takes 3 seconds, your test breaks. Instead, poll for the ready state in a loop (see the wait_until sketch after this list).
  7. Ask your devs to populate a specific object property (e.g., “accessibility name”) for you. Otherwise, you will waste time figuring out how to map to each object.
  8. Attempt to isolate UI tests such that one failed test will not fail all the other tests (see the unittest sketch after this list).
  9. Something I didn’t expect Hans to say was not to worry about error handling in your automation framework. He says not to waste time on error handling because the tests should be written to “work”. At first I disagreed with this. But later I realized, in my own experiences with error handling, that it made me lazy. Often, instead of automating a solid test, I relied on error handling to keep my tests passing.
  10. When Hans recommends an “Action-Based” test automation framework, IMO what he means is that it should support both low-level and high-level descriptions of the test steps. Hans considers “Keyword-Driven” automation to be low-level, the keywords being things like “click button”, “type text”, “select item”. He considers Business-Template-Driven automation to be high-level: things like “submit order”, “edit order”. Action-Based test automation uses all of the above (see the submit_order sketch below). One reason is to build a test library that can check low-level stuff first; if the low-level stuff passes, then the high-level tests should execute.
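
To make tip 3 concrete, here is a minimal sketch of a keyword with defaulted parameters. The click_button name and its parameters are my own invention, not from Hans’s session:

    # Hypothetical keyword: only the target is required; timing and
    # matching details fall back to sensible defaults.
    def click_button(label, timeout_seconds=10, exact_match=True):
        """Click the button with the given label."""
        # A real framework would drive the UI here; this stub just logs.
        print("click %r (timeout=%s, exact=%s)" % (label, timeout_seconds, exact_match))

    # The common case stays short and focused:
    click_button("Save")
    # Only the unusual test spells out the details:
    click_button("Save", timeout_seconds=30, exact_match=False)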
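Tip 6 usually boils down to a small polling helper like this wait_until sketch. It is a generic pattern, not Hans’s code; the ready check you pass in is whatever your framework can observe (element exists, spinner gone, etc.):

    import time

    def wait_until(is_ready, timeout_seconds=10.0, poll_interval=0.25):
        """Poll is_ready() until it returns True or the timeout expires.

        Replaces a hard-coded sleep: the test proceeds the moment the
        AUT is ready, and fails loudly if it never becomes ready.
        """
        deadline = time.monotonic() + timeout_seconds
        while time.monotonic() < deadline:
            if is_ready():
                return
            time.sleep(poll_interval)
        raise TimeoutError("condition not met within %s seconds" % timeout_seconds)

    # Usage with a hypothetical ready check:
    wait_until(lambda: True)  # e.g., lambda: dialog.is_visible("Save complete")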
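Tip 8 is mostly about fixtures: if each test builds its own state from scratch, one failure cannot cascade into the rest. A minimal unittest sketch, where the in-memory session is a stand-in for launching the AUT:

    import unittest

    class EditItemTests(unittest.TestCase):
        def setUp(self):
            # Each test gets a fresh session; no test inherits state
            # from (or can be broken by) the test before it.
            self.session = {"items": {"ItemA": "original"}}

        def test_edit_item(self):
            self.session["items"]["ItemA"] = "edited"
            self.assertEqual(self.session["items"]["ItemA"], "edited")

        def test_item_starts_unchanged(self):
            # Passes even if test_edit_item ran first, because setUp
            # rebuilt the state from scratch.
            self.assertEqual(self.session["items"]["ItemA"], "original")

    if __name__ == "__main__":
        unittest.main()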
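Finally, tips 5 and 10 fit together nicely: a high-level business action like “submit order” is just a procedure built from low-level keywords, so nothing gets copied and pasted between tests. Again, all names below are hypothetical:

    # Low-level keywords (the "Keyword-Driven" layer). The print
    # bodies are stubs; a real framework would drive the UI here.
    def type_text(field, text):
        print("type %r into %s" % (text, field))

    def select_item(menu, item):
        print("select %s from %s" % (item, menu))

    def click_button(label):
        print("click %s" % label)

    # High-level business action (the "Business-Template" layer),
    # built from the keywords above rather than copy-pasted steps.
    # When the AUT's order form changes, only this procedure changes.
    def submit_order(customer, product, quantity=1):
        type_text("customer", customer)
        select_item("product", product)
        type_text("quantity", str(quantity))
        click_button("Submit Order")

    # A test then reads at the business level:
    submit_order("UserA", "Widget", quantity=3)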

What do you think about these?


