Hans spent most of his time discussing test design rather than how to actually “do” the automation. After my experiences with test automation, I completely agree with Hans. The test design is the hardest part, and there is something about automation that magnifies poor test design.
Manual testing allows for dynamic test design; automated testing does not. A manual tester can read an application under test's (AUT's) modified validation message and determine whether it will make sense to a user. An automated test cannot.
Per Hans, when thinking about automated test design, the first two questions should be:
1. What am I checking?
2. How will I check it?
These seemingly simple questions are often more difficult than automating the test itself. And that may be why these questions are often neglected. It is more fun to automate for the sake of automation than for the sake of making valuable tests.
To eliminate this problem, Hans counters that Test Automation Engineers should never see a single test case. And Test Designers should never see a single piece of automation code. Hmmm…I’m still not sure how I feel about this. Mostly, because I want to do both!
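This separation is the heart of keyword-driven (or action-based) testing, which Hans is known for. Here is a minimal sketch of the idea, assuming a hypothetical login feature: the test designer writes only the readable action rows, while the automation engineer implements the actions against the AUT (the action names, the tiny interpreter, and the `FakeApp` stand-in are all invented for illustration, not anyone's real framework).

```python
# -- Test design layer: readable action rows; no automation code in sight.
#    (Hypothetical actions for an imaginary login screen.)
TEST_CASE = [
    ("enter", "username", "alice"),
    ("enter", "password", "secret"),
    ("press", "login"),
    ("check", "message", "Welcome, alice"),
]

# -- Automation layer: a stand-in AUT and a keyword interpreter.
class FakeApp:
    """Toy stand-in for the application under test."""
    def __init__(self):
        self.fields = {}
        self.message = ""

    def press(self, button):
        if button == "login" and self.fields.get("password") == "secret":
            self.message = "Welcome, " + self.fields.get("username", "")

def run(test_case, app):
    """Interpret the designer's action rows against the app; return failures."""
    failures = []
    for action, *args in test_case:
        if action == "enter":
            field, value = args
            app.fields[field] = value
        elif action == "press":
            app.press(args[0])
        elif action == "check":
            field, expected = args
            actual = getattr(app, field)
            if actual != expected:
                failures.append((field, expected, actual))
    return failures

assert run(TEST_CASE, FakeApp()) == []  # the designed test passes
```

The designer never touches `run()` or `FakeApp`; the engineer never touches `TEST_CASE`. Each side can improve their half without seeing the other's work.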
In the next post, I’ll share some of Mr. Buwalda’s test design ideas that apply to manual and automated testing.
I attended Robert Sabourin’s “To Infinity And Beyond” class at STPCon. His sessions are fun because of his sense of humor and pop culture references. He played us a screen-captured video of someone sending an email, and we were asked to write down as many boundaries as we could spot in the video. I had nearly 40 in five minutes, but many testers got more.
The point was to start a conversation about boundary tests that go beyond the obvious. The first take-away I embraced was a simple mnemonic Robert gave us for thinking about boundaries. I usually hate mnemonics because most of them are only useful as PowerPoint slide filler. However, I’ve actually started using this one:
A – Application logic
I – Inputs
M – Memory storage
These are three ways to think about boundaries when generating test ideas. Most features you attempt to test will have all three types of boundaries. The other cool thing about this mnemonic is that it prioritizes your tests (i.e., test the application-logic boundaries first, if the feature has them).
The typical example is testing a text box. But let's apply it to something different. If we apply Robert’s mnemonic to the spec, “The system must allow the user to drop up to four selected XYZ Objects on a target screen area at the same time”, we may get the following boundary tests.
Application logic (i.e., business rules)
- Drop one XYZ object.
- Drop four XYZ objects.
- Attempt to drop five XYZ objects.
- Attempt to drop one ABC object.
- Attempt to drop a selection set of three XYZ objects and one ABC object.
Inputs
- Can the objects get to the target screen area without being dropped? If so, try it.
- Attempt to drop 1,000,000 XYZ objects (there may be a memory constraint in just evaluating the objects).
Memory storage
- Refresh the screen. Are the four dropped XYZ objects still in the target screen area?
- Reopen the AUT. Are the four dropped XYZ objects still in the target screen area?
What other boundary tests can you think of and which AIM category do they fit into?
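The application-logic boundaries above can be sketched as executable checks. This is a minimal sketch, assuming a hypothetical `TargetArea` model that enforces the spec (the class and its `drop()` method are invented stand-ins, not a real AUT API — in practice these calls would drive the actual drag-and-drop behavior).

```python
class TargetArea:
    """Stand-in for the screen area that accepts up to four XYZ objects."""
    MAX_OBJECTS = 4

    def __init__(self):
        self.objects = []

    def drop(self, objects):
        # Application-logic rules from the spec: only XYZ objects,
        # at most four in the target area.
        if any(obj != "XYZ" for obj in objects):
            raise ValueError("only XYZ objects may be dropped")
        if len(self.objects) + len(objects) > self.MAX_OBJECTS:
            raise ValueError("at most four XYZ objects allowed")
        self.objects.extend(objects)

def check_boundaries():
    # Drop one XYZ object -- the lower boundary.
    area = TargetArea()
    area.drop(["XYZ"])
    assert len(area.objects) == 1

    # Drop four XYZ objects -- the upper boundary.
    area = TargetArea()
    area.drop(["XYZ"] * 4)
    assert len(area.objects) == 4

    # Attempt to drop five -- just past the boundary; should be rejected.
    area = TargetArea()
    try:
        area.drop(["XYZ"] * 5)
        assert False, "expected rejection of five objects"
    except ValueError:
        pass

    # Mixed selection set of three XYZ and one ABC -- should be rejected.
    area = TargetArea()
    try:
        area.drop(["XYZ"] * 3 + ["ABC"])
        assert False, "expected rejection of the ABC object"
    except ValueError:
        pass

check_boundaries()
```

Note how each check sits exactly at or just past a boundary (1, 4, 5, wrong type) rather than somewhere safely in the middle.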
Sorry about the blog lapse. I just got back from STPCon and a 10-day vacation in the Pacific Northwest.
I’ll share my STPCon ideas and takeaways in future posts. But first I have to mention my STPCon highlight.
During the conference, keynote speaker James Bach invited me to participate in an experiment to see how quickly testers can learn testing techniques. He approached me before lunch, and I joined him and another tester, Michael Richardson, in the hotel lobby. James asked us each to play a different game with him (simultaneously). Michael played the “Dice Game”, in which he rolled dice of his choosing, from several different dice styles, to figure out a pattern James had made up.
I played a game much like “20 Questions”, in which I had to explain what happened based on vague information I was given. I could ask as many Yes/No questions as I wanted. During the games, James coached us on different techniques. After about 45 minutes, Michael jumped in and bailed me out. Afterwards, James asked me to play again with a different premise. I was able to crack the second premise in less than 5 minutes. I would like to think I actually learned something from my first, painful, attempt.
These games are similar to software testing because both require skillfully eliminating a huge number of variables. Some of the techniques James suggested were:
- Focus, then defocus. Sometimes you should target a specific detail. Other times, you should step back and think less narrowly. For me, defocusing was the most important approach.
- Forward, then backward. Search for evidence to back up a theory. Then form a theory based on the evidence you have gathered.
- Dumb questions can lead to interesting answers. Don’t be afraid to ask as many questions as you can, even if they are seemingly stupid questions.
- Flat lining. If you are getting nowhere with a technique, it’s time to stop. Try a different type of testing.
Later, James asked Michael to think up a challenging dice game pattern. James solved it in about 30 seconds using only one die. Having played before, he knew that limiting himself to a single die would eliminate most of the variables. I used this exact idea just today to understand a software behavior problem I had initially thought was complex.
After the stress of performing these challenges with James, we sat back and enjoyed a conversation I will not forget. James was more personable and less judgmental than I expected. Despite being a high school dropout, he is currently writing a book about learning (not about testing). I also thought it was cool that during STPCon he headed across the street to the Boston Public Library and began reading from various history books. He was trying to determine why people with power would ever choose to give it up. I guess he’s got more on his mind than just testing.
For me, spending time with James was a huge privilege and I was grateful for the opportunity to discuss testing with him, as well as getting to know the person behind the celebrity.
Note: Michael Richardson was also an interesting guy. He once did QA for a fish company and was literally smacked in the face with a large fish due to poor testing.