"Agile"? Ohhhh, what is this "agile" stuff you speak of?

Look at the topics of most testing/dev conferences, webinars, blogs or tweets. Can you find the word “Agile” in there? I’ll bet you can. I was excited about it five years ago and I thought it would have a huge impact on my software testing challenges. It has not.

Testers still have to look at a chunk of software and figure out how to test it. This is still the most challenging activity we face every day. When we find a problem, it doesn’t matter whether you want us to log a bug, leave a story open, stick it on the wall with a Post-it note, or whisper it in a developer’s ear. The same testing must occur to find the problem. Everything else is what you do before or after the test.

The grass always looks greener on the other side of the fence. But once you hop the fence you’ll realize it is just as brown.

After reading Adam Goucher’s review of Malcolm Gladwell’s book, Blink, and hearing it recommended by other testers, I finally read it.

Some people (like Adam) are good at learning about testing by drawing parallels from non-testing material (like dirt bike magazines). I guess I’m not as good at this. Although I did enjoy Blink, it certainly did not provide me with as many “aha!” testing moments as I’ve heard other testers suggest. I learned a bit about marketing, racism, and health care, but not too much about testing. And I felt like many of the stories and studies were things I already knew (sorry, I'm not being very humble).

In addition to Adam's test-related discoveries, here are a couple of additional ones I scraped up:

  • Although it was an awesome breakthrough in office chairs, and completely functional, people hated the Herman Miller Aeron chair. At first, the chairs didn’t sell. What did people hate? The way the chairs looked. People thought they looked flimsy and not very executive-like. After several cosmetic changes, people began accepting the chairs, and now they are hugely popular. Sadly, this is how users approach new software. No matter how efficient it is, they want the UI to look and feel the way they are familiar with. As testers, we may want to point out areas we think users will dislike. We can identify these by staying in touch with our own first-time reactions.

  • Blink describes an experiment where, in one case, customers at a grocery store were offered two samples of jam. In a second case, customers were offered about 10 samples of jam. Which case do you think sold more jam? The first. When people are given too much information, it takes too much work for them to make decisions. What does this have to do with testing? From a usability standpoint, testers can identify functionality that may overload users with too many decisions at one time. The iPhone got this one right.

We always hear the complaint that testers who don't read books must not be any good at testing. For fear of falling into this category, I've recently read some other books that are actually about software testing. These books have not been as useful as the ideas I stumble upon myself while I'm in the trenches. But perhaps knowing these books are unsatisfying is helpful, because I know there are no easy answers out there for the problems I face every day.

A day went by without me finding any bugs in my AUT.

When I got home, as if desperate for bugs, I noticed one in the kitchen. I wanted to squash it, but I knew better. I controlled myself. I stood back and observed the bug (…an ant). I wondered how it got there. If one bug got in, there would probably be more. I noticed four more bugs over by the window. Ah hah! I’ll focus my efforts near the window. Perhaps I could draw out more bugs. I’ll give these bugs a reason to show up: several drops of tasty ant poison.

Ah yes, here they come, from under the window. Now that I know where they came from, I can patch the hole in the window to prevent further infestations from that oversight. In the meantime, these bugs will happily bring the poison back to their nest and will probably not return for a while. Nevertheless, every so often, I will check.

Successful test automation is the elephant in the room for many testers. We all want to do it because manual testing is hard, our manager and devs would think we were bad-ass, and…oh yeah, some of us believe it would improve our AUT quality. We fantasize about triggering our automated test stack and going home, while the manual testers toil away. We would even let them kiss the tips of our fingers as we walked out the door.

…sounds good.

So we (testers) make an attempt at automation, exaggerate the success, then eventually feel like losers. We spend more time trying to get the darn thing to run unattended and stop flagging false bugs, while the quality of our tests takes a back seat and our available test time shrinks.

We were testing one product. Now we are testing two.

The two obvious problems are: 1) most of us are not developers, and 2) writing a program to test another program is more difficult than writing the original program. …Ah yes, a match made in heaven!

I watched an automated testing webinar last week. It was more honest than I expected. The claim was that to be successful at test automation, the team should not expect existing testers to start automating tests. Instead, a new team of developers should be added to automate tests that the testers write. This new team would have their own requirement reviews, manage their own code base, and have their own testers to test their test automation stack. This does not sound cheap!

While watching this webinar, something occurred to me. Maybe we don’t need test automation. Why do I think this? Simple. Because somehow my team is managing to release successful software to the company without it. There is no test automation team on our payroll. Regression testing is spotty at best, yet somehow our team is considered a model of success within the company. How is this possible when every other test tool spam email or blog post I read makes some reference to test automation?

In my case, I believe a few things have made this possible:

  • The devs are talented and organized enough to minimize the amount of stuff they break with new builds. This makes regression testing less important for us testers.
  • The BAs are talented enough to understand how new features impact existing features.
  • The testers are talented enough to know where to look. And they work closely with devs and BAs to determine how stuff should work.
  • The user support team is highly accessible to users, knowledgeable about the AUT and the business, and works closely with the BAs/devs/testers to get the right prod bugs patched quickly. The entire team is committed to serving the users.
  • The users are sophisticated enough to communicate bug details and use workarounds when waiting on fixes. The users like us because we make their jobs easier. The users want us to succeed so we can keep making their jobs easier.
  • The possibility of prod bugs resulting in death, loss of customers, or other massive financial loss is slim to none.

I suspect a great many software teams are similar to mine. I'm interested in hearing from other software teams that do not depend on tester-driven test automation.

I do use test automation to help me with one of my simple AUTs, which happens to lend itself to automated testing (see JART). However, in my experience, there are few apps that are easy to automate with simple checks.
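For what it's worth, here is the kind of "simple check" I have in mind, sketched in Python. All the names here (parse_price, CASES) are hypothetical stand-ins, not JART's actual code; the sketch just shows why this style only pays off when expected results are easy to state up front:

    # A sketch of a "simple check": feed the AUT known inputs and compare
    # actual output to expected output. parse_price is a hypothetical
    # stand-in for some small, scriptable piece of the AUT.

    def parse_price(text):
        """Stand-in AUT behavior: parse a price string like '$1,234.56'."""
        return float(text.replace("$", "").replace(",", ""))

    # (input, expected) pairs. Simple checks only work when expected
    # results can be written down this easily.
    CASES = [
        ("$0.99", 0.99),
        ("$1,234.56", 1234.56),
        ("$10", 10.0),
    ]

    def run_checks():
        failures = 0
        for raw, expected in CASES:
            actual = parse_price(raw)
            if actual != expected:
                failures += 1
                print(f"FAIL: parse_price({raw!r}) -> {actual!r}, expected {expected!r}")
        print(f"{len(CASES) - failures} of {len(CASES)} checks passed")
        return failures == 0

    if __name__ == "__main__":
        raise SystemExit(0 if run_checks() else 1)

Most apps don't reduce to deterministic input/expected-output pairs like this, which is exactly why so few of them are easy to automate with simple checks.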
