I attended Robert Sabourin’s Just-In-Time Testing (JITT) tutorial at CAST2011.
The tutorial centered on the concept of “test ideas”. According to Rob, a test idea is “the essence of the test, but not enough to do the test”. You get the details when you actually do the test. A test idea should be roughly the size of a tweet. Rob believes one should begin collecting test ideas “as soon as you can smell a project coming, and don’t stop until the project is live”.
The notion of getting the test details when you do the test makes complete sense to me, and I believe this is the approach I use most often. In Rapid Software Testing, we called them “test fragments” instead of “test ideas”. James Bach explains it best: “Scripted (detailed) testing is like playing 20 questions and writing out all the questions in advance.” …stop and think about that for a second. Bach nails it for me every time!
We discussed test idea sources (e.g., state models, requirements, failure modes, mind maps, soap operas, data flow). These sources will leave you with loads of test ideas, certainly more than you will have time to execute. Thus, it’s important to agree on a definition of quality, and use that definition to prioritize your test ideas.
As a group, we voted on three definitions of quality:
- “…Conformance to requirements” – businessman and author, Philip B. Crosby
- “Quality is fitness for use” – 20th-century management consultant, Joseph Juran
- “Quality is value to some person” – computer scientist, author, and teacher, Gerald M. Weinberg
The winning definition was #2, which also became my favorite, displacing my previous favorite, #3. #2 is easier to understand and a bit more specific.
In JITT, the tester should periodically do this:
- Adapt to change.
- Prioritize tests.
- Track progress.
And with each build, the tester should run what Rob calls Smoke Tests and Fast Tests:
- Smoke Tests – The purpose is build integrity. Whether a test passes or fails matters less than whether its outcome is consistent across environments. For example, if TestA failed in dev, it may be okay for TestA to fail in QA too. That is an interesting idea. But, IMO, one must be careful: it’s pretty easy to produce 1000 tests that fail in two environments running different bits.
- Fast Tests – Functional, shallow but broad.
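The smoke-test idea above, that consistency of outcomes matters more than pass/fail, is easy to check mechanically. Here is a minimal sketch (my own illustration, not Rob’s; the function and test names are hypothetical) that flags tests whose outcome flips between two environments:

```python
# Illustrative sketch: compare smoke-test outcomes across environments.
# The point is consistency, not pass/fail -- a test that fails in both
# dev and QA is consistent; one whose outcome flips is worth a look.

def inconsistent_outcomes(dev_results, qa_results):
    """Return names of tests whose outcome differs between environments."""
    shared = dev_results.keys() & qa_results.keys()
    return sorted(t for t in shared if dev_results[t] != qa_results[t])

dev = {"TestA": "fail", "TestB": "pass", "TestC": "pass"}
qa  = {"TestA": "fail", "TestB": "fail", "TestC": "pass"}

# TestA fails in both environments, so it is consistent; only TestB flips.
print(inconsistent_outcomes(dev, qa))  # ['TestB']
```

The payoff is that a build with 1000 failures can still “pass” a smoke check if the same 1000 tests failed last time, while a single flipped outcome stands out.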
Most of the afternoon was devoted to group exercises in which we developed test ideas from the perspective of various functional groups (e.g., stakeholders, programmers, potential users). We used Rob’s colored index card technique to collect the test ideas. For example: red cards for failure-mode tests, green for confirmatory tests, yellow for “ility” tests such as security and usability, blue for usage scenarios, etc.
Our tests revolved around a fictitious chocolate wrapping machine and we were provided with a sort of Mind Map spec describing the Wrap-O-Matic’s capabilities.
After the test idea collection brainstorming within each group, we prioritized other groups’ test ideas. The point was to show how much test priorities can differ depending on whom you ask. Thus, as testers, speaking to stakeholders and other groups is crucial for test prioritization.
At first, I considered using the colored index card approach on my own projects, but after seeing Rob’s walkthrough of an actual project he used them for, I changed my mind. Rob showed a spreadsheet he created, where he rewrote all his index card test ideas so he could sort, filter, and prioritize them. He assigned each a unique ID and several other attributes. Call me crazy, but why not put them in the spreadsheet to begin with, or some other modern software designed to help organize them?
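To make that last point concrete, here is a minimal sketch of what “structured from the start” could look like. This is my own illustration, not Rob’s spreadsheet; the field names and sample ideas are invented, with the card colors mapped to a category field:

```python
# Hypothetical sketch: capture test ideas as structured records from the
# start, so they can be sorted and filtered without transcribing index
# cards into a spreadsheet later. All names and values are illustrative.
from dataclasses import dataclass

@dataclass
class TestIdea:
    idea_id: int
    category: str   # e.g. "failure-mode", "confirmatory", "ility", "scenario"
    summary: str    # the tweet-sized essence of the test
    priority: int   # lower number = run sooner

ideas = [
    TestIdea(1, "failure-mode", "Jam the wrapper feed mid-cycle", 2),
    TestIdea(2, "confirmatory", "Wrap one standard chocolate bar", 1),
    TestIdea(3, "ility", "Usability of the operator panel", 3),
]

# Sort by priority and print -- the same operations the spreadsheet
# supported, available immediately.
for idea in sorted(ideas, key=lambda i: i.priority):
    print(idea.idea_id, idea.category, idea.summary)
```

Filtering by category (say, only the failure-mode ideas) is a one-line comprehension on the same list, which is exactly the kind of slicing that index cards make tedious.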
Overall, the tutorial was a great experience and Rob is always a blast to learn from. His funny videos and outbursts of enthusiasm always hold my attention. His material and ideas are usually practical and generic enough to apply to most test situations.