Egads! It’s been several months since my last post. Where have I been?
I’ve transitioned to a new company and an exciting new role as Principal Test Architect. After spending months trying to understand how my new company operates, I am beginning to get a handle on how we might improve testing.
In addition to my work transition, my whole family and I just suffered through this year’s nasty flu at the same time, followed shortly by a round of stomach flu. The joys of daycare…
And finally, now that my son, Haakon, has arrived, I’ve been adjusting to my new life with two young children. 1 + 1 <> 2.
It has been a rough winter.
But at last, my brain is once again telling me, “Oh, that would make a nice blog post”. So let’s get this thing going again!
“Exploring” vs. Checking Almost Did It For Me
Posted by Eric Jacobson at Thursday, August 29, 2013
After watching Elisabeth Hendrickson’s CAST 2012 Keynote (I think), I briefly fell in love with her version of the “checking vs. testing” terminology. She says “checking vs. exploring” instead.
I love the simplicity. I imagine when used in public, most people can follow: “exploring” is a testing activity that can only be performed by humans; “checking” is a testing activity that is best performed by machines. And the beauty of said terms is…they’re both testing!!! Yes, automation engineers, all the cool stuff you build can still be called testing.
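To make the distinction concrete, here is a minimal sketch of what a machine “check” looks like (my illustration, not Hendrickson’s; the app and function names are hypothetical): a fixed input, a fixed expected result, and a binary verdict.

def calculate_total(prices, tax_rate):
    # Stand-in for real application code (hypothetical example).
    return round(sum(prices) * (1 + tax_rate), 2)

def test_total_includes_tax():
    # A "check": fixed input, fixed expected output, pass/fail verdict.
    # A machine can run this a million times, but it only confirms
    # what a human already decided to ask.
    assert calculate_total([10.00, 5.50], tax_rate=0.08) == 16.74

Everything around the check is the exploring: a human wondering what happens with a negative price, an empty cart, or a 100% tax rate. No assertion exists for the questions nobody has asked yet.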
The thing I’ve always found awkward about the Michael Bolton/James Bach “checking vs. testing” terminology is accepting that tests or testing can NOT be automated. Hendrickson’s version seems void of said awkwardness. She just says “exploring” can NOT be automated…well sure, much easier to swallow.
The problem, I thought, was that James and Michael’s testing definition was too narrow. Surely it could be expanded to include machine checks as testing. Thus, I set out to find common “Testing” definitions that would support my theory. And much to my surprise, I could not. All the definitions I read (e.g., Merriam-Webster’s) described testing as an open-ended investigation…in other words, something that can NOT be automated.
Finally, I have to admit, Hendrickson’s term “exploring” can be ambiguous. It might get confused with Exploratory Testing, which is a specific structured approach, as opposed to Ad Hoc testing, which is unstructured. Hmmm…Elisabeth, if you’re out there, I’m happy to listen to your definitions; perhaps you will change my mind.
So it seems, just when I thought I could finally wiggle away from their painful terminology, I am now squarely back in the James and Michael camp when it comes to “checking vs. testing”.
…Dang!
'Twas The Night Before Prod Release
Posted by Eric Jacobson at Tuesday, December 22, 2009
'Twas the night before prod release, when all through the build,
Not a unit test was failing, the developers were thrilled;
The release notes were emailed to users with care,
In hopes that new features, soon would be there;
The BA was nestled all snug in her chair,
With visions of magnitude ready to share;
And I on my QA box, trying not to be stressed,
Had just settled down for a last minute’s test;
When during my test there arose such a clatter,
I opened the error log to see what was the matter;
And what to my wondering eyes should appear,
But an unhandled fault, with its wording unclear;
When I showed it to dev, and he gave me a shrug,
I knew in a moment it must be a bug;
More rapid than eagles, dev’s cursing it came,
And he shouted at testers, and called us by name;
“Now, Jacobson! Now, Zacek! Now, Whiteside and Surapaneni!
On, Cagle! On, Addepalli, on Chang and Damidi!
Stop finding bugs in my web service call!
Now dash away! dash away! dash away all!"
And then, in a twinkling, I heard from the hall,
The tester who showed me, scripts can’t test it all;
As I rejected the build, and was turning around,
Into my cube, James Bach came with a bound;
He was dressed really plain, in a baseball-like cap,
And he patted my back for exploring my app;
He had a big white board and a little round belly,
That shook when he diagrammed like a bowlful of jelly;
He was chubby and plump, a right jolly old elf,
And he laughed when he saw RST on my shelf;
Then he spoke about testing, going straight to his work,
And attempted transpection, though he seemed like a jerk;
His eyes -- how they twinkled! his dice games, how merry!
He questioned and quizzed me and that part was scary!
He told me of lessons he taught at STARWEST,
Made an SBT charter and then told me to test;
Then I heard him exclaim, ere he walked out of sight
“Happy testing to all! ...just remember I’m right!”
The Importance of Unimportant Follow-On Bugs
Posted by Eric Jacobson at Thursday, September 10, 2009
Think of a bug…any bug. Call it BugA. Now try to think of other bugs that could be caused by BugA. Those other bugs are what I call “Follow-On Bugs”. Now forget about those other bugs. Instead, go find BugB.
I first heard Michael Hunter (AKA “Micahel”, “The Braidy Tester”) use the similar term, “Follow-on Failures”, in a blog post. Ever since, I’ve used the term “Follow-On Bugs”, though I never hear other testers discuss these. If I’m missing a better term for these, let me know. “Down-stream bugs” is not a bad term either.
Whatever we call these, I firmly believe a key to knowing which tests to execute in the current build is being aware of follow-on bugs. Don’t log them. The more knowledgeable you become about your AUT, the better you will identify follow-on bugs. If you’re not sure, ask your devs.
Good testers have more tests than time to execute them. Follow-on bugs may waste time. I share more detail about this in my testing new features faster post.
I’ve seen testers get into a zone where they keep logging follow-on bugs into the bug tracking system. This is fine if there are no other important tests left. However, I’ll bet there are. Bugs that will indirectly get fixed by other bugs mostly just create administrative work, which subtracts from our available time to code and test.
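For illustration only, here is a rough sketch of that triage heuristic (the Bug structure and every name in it are my hypothetical inventions, not any real bug tracker’s API): when a new symptom is probably caused by a known bug, note it against the root cause instead of logging a fresh record.

from dataclasses import dataclass, field

@dataclass
class Bug:
    bug_id: str
    summary: str
    follow_ons: list = field(default_factory=list)  # symptoms this bug likely causes

def triage(symptom, bug_list, caused_by=None):
    if caused_by is not None:
        # Follow-on: record it on the suspected root cause and retest
        # after the fix, instead of creating another record to administer.
        caused_by.follow_ons.append(symptom)
        return caused_by
    # Otherwise it deserves its own entry in the tracking system.
    new_bug = Bug(f"BUG-{len(bug_list) + 1}", symptom)
    bug_list.append(new_bug)
    return new_bug

bugs = [Bug("BUG-1", "Save silently fails on network drives")]
triage("Recent files list never updates", bugs, caused_by=bugs[0])  # follow-on
triage("Print preview crashes in landscape", bugs)  # unrelated; log it

The point isn’t the code; it’s the decision: a follow-on becomes a retest reminder on the root cause rather than more administrative overhead.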
Tuesday night I had the pleasure of dining with famed Canadian tester Adam Goucher at Figo Pasta in the Atlanta suburb of Vinings. Adam was in town for training and looking for other testers to meet. Joining us was soon-to-be-famed Marlena Compton, another Atlanta-based tester like myself (and long time caver friend of mine).
Like other testers from Toronto I have met (e.g., Michael Bolton, Adam White), Adam Goucher was inspirational, full of good ideas, fond of debate, and a real pleasure to talk to. I kick myself for not taking notes but I didn’t want to look like an A-hole across the table from him.
Here are some of last night’s discussions I enjoyed… (most of these are Adam’s opinions or advice)
- Determine what type of testing you are an expert on and teach it. He claims to be an expert on testing for international language compatibility (or something like that). He made me squirm attempting to tell him what I was an expert on...I'll have to work on this.
- All testers should be able to read code.
- Kanban flavor of Agile.
- When asked about software testing career paths, he says to think hard and decide which you prefer: helping other testers to test, or executing tests on your own. He prefers the former.
- A good test team lead should learn a little bit about everything that needs to be tested. This will help the team lead stay in touch with the team and provide backup support when a tester is out of the office.
- Start a local tester club that meets every month over dinner and beer to discuss testing.
- Pick some themes for your test blog (Adam’s are learning about testing through sports, and poor leadership as an impediment to better quality).
- Join AST. Take the free training. Talk at CAST and embrace the arguments against your talk.
- Tester politics. They exist. Adam experienced them first hand while working on his book.
- Four schools of testing, who fits where? What do these schools tell us?
- The latest happenings with James Bach and James Whittaker.
- Rapid Software Testing training and how much it costs (I remember it being inexpensive and worth every penny).
- Folklore-ish release to prod success stories (Flickr having some kind of record for releasing 56 regression tested builds to prod in one day).
- He nearly convinced me that my theory, that successful continuous sustained regression testing is impossible with fixed software additions, was flawed. I’ll have to post it later.
- Horses are expensive pets. (you’ll have to ask Adam about this)
- He informed me that half of all doctors are less qualified than the top 50%.
- Read test-related books (e.g., Blink, Practical Unit Testing or something…I should have taken notes. Sheesh, I guess I wasn't interested in reading the books. Shame on me. Maybe Adam will respond with his favorite test-related books).
- The fastest way to renew your passport. Surely there were some missed test scenarios in Adam's all-night struggle to get to Atlanta.
I'm sure I forgot lots of juicy stuff, but that's what I remember now. Adam inspired me and I have several ideas to experiment with. I'll be posting on these in the future. Thanks, Adam!
Am I The Only Tester With No Time To Test?
Posted by Eric Jacobson at Wednesday, May 27, 2009
During a recent phone call with Adam White, he said something I can’t stop thinking about. Adam recently took his test team through an exercise to track how much of their day was actually spent testing. The results were scary. Then Adam said it, “If you’re not operating the product, you’re not testing”…I can’t get that out of my head.
Each day I find myself falling behind on tests I wanted to execute. Then I typically fulfill one of the following obligations:
- Requirement walkthrough meetings
- System design meetings
- Writing test cases
- Test case review meetings
- Creating test data and preparing for a test
- Troubleshooting build issues
- Writing detailed bug reports
- Bug review meetings
- Meetings with devs b/c tester doesn’t understand implementation
- Meetings with devs b/c developer doesn’t understand bug
- Meetings with business b/c requirement gaps are discovered
- Collecting and reporting quality metrics
- Managing official tickets to push bits between various environments and satisfy SOX compliance
- Updating status and other values on requirement, test case, and bug entities
- Attempting to capture executed exploratory tests
- Responding to important emails (which arrive multiple times per minute)
Nope, I don’t see "testing" anywhere in that list. Testing is what I attempt to squeeze in every day between this other stuff. I want to change this. Any suggestions? Can anyone relate?
The first software test blogger I read was The Braidy Tester. He is still my favorite.
I borrowed from his test automation framework, took Michael Bolton's Rapid Software Testing course based on his suggestion, and laugh at his testing song satires. But most of all, The Braidy Tester (AKA "Micahel" or "Michael Hunter") inspired me to think more about testing and how to improve it.
So when he asked to interview me for his Book of Testing series, I was thrilled. I realize this post is nothing more than an attempt to stroke my own ego, but perhaps my answers to the questions will help you think about your own answers. Here it is...
http://www.ddj.com/blog/debugblog/archives/2008/01/five_questions_43.html
If your company doesn't have a BBTest Assistant license, Microsoft's free Windows Media Encoder (part of the Windows Media Encoder 9 Series download) has an awesome screen-capture-to-video tool and a wizard that does all the setup for you. I've been having fun attaching videos to my bug reports, and since they include even more info than still screen captures, they'll hopefully increase bug turnaround.
Here's a sample video of a little MS Word bug James Whittaker describes in his book "How to Break Software". (The message indicating the index columns must be between 0 and 4 displays twice.)
I love reading software tester blogs but sometimes I can't relate. Many of the topics are too academic to have any practical value for my daily testing struggles. Test blogs and forums often discuss test approaches (e.g., manual vs. automated, scripted vs. exploratory). These are interesting topics but many are outside my scope of control. I can influence my managers to some extent, but I also have to operate within the processes and tools they dictate.
I work for a QA group in a large company that is very metric-hungry when it comes to testing. Most of my managers love manual detailed test cases, requirements coverage, and other practices that create administrative work for us testers, thereby reducing our available time for actual testing. In practice, I think most of my peers test the way I do: attacking a feature with an exploratory-type approach, then updating execution results of a handful of test cases that give a vague and superficial representation of what was tested.
Recently, some of my managers have also decided we should attempt to automate most of our tests, which, from their perspective, seems realistic and should free up our time because we can just fire off automated tests instead of wasting time with manual execution. One manager tells of how, in the good old days when he was a tester, he would launch his automation suite and take the rest of the day off. This romanticized version of test automation is far from anything I can fathom...and I think he may be exaggerating.
So I'm left in the awkward position of trying to be a valuable tester from my manager's perspective but also from the perspective of the software team I support. My daily struggles are typically not very romantic and my ideas are not groundbreaking. However, I do feel myself improving with each question I answer. And I don't think I'm the only tester to waste energy on questions like these...
- Did someone log this already?
- How much more time should I spend investigating this bug?
- Should I reopen the bug or log a new one?
- Is it a bug?
- Should I be embarrassed using a stop watch to performance test the Login screen?
- Was that test worth automating?
- Is it ready to test?
- Should I log it without repro steps?
- Am I bored?
- Am I valuable?
- Did I test this already?
- Is my goal to find as many bugs as possible?
- Who do I really serve?
- Do my bugs suck?
- Is my job lame?
- Can I log a bug because I hate the way the UI looks?
- Am I irritated with my AUT?
- When is my job done?
- Did my devs smoke crack while they wrote this?
- Does anyone really get performance testing?
- Does my pride hurt when my bugs get rejected?
- What the hell is this feature supposed to do?
- Should I be spending time logging bugs on the hourglass pointers that don't trigger?
- Do I possess any special abilities or am I just an A-hole with the patience to submit another fake order for the 300th time?
