Is there a name for this? If not, I’m going to call it a “fire drill test”.
- A fire drill test would typically not be automated because it will probably only be used once.
- A fire drill test informs product design so it may be worth executing early.
- A fire drill test might be a good candidate to delegate to a project team programmer.
Fire drill test examples:
- Our product ingests files from an FTP site daily. What if the files are not available for three days? Can our product catch up gracefully?
- Our product outputs a file to a shared directory. What if someone removes write permission to the shared directory for our product?
- Our product uses a nightly job to process data. If the nightly job fails due to off-hours server maintenance, how will we know? How will we recover?
- Our product displays data from an external web service. What happens if the web service is down? (A rough sketch of this check appears below.)
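For illustration, here is a rough sketch of what probing that last scenario might look like, assuming a hypothetical `fetch_dashboard_data()` helper built on Python's `requests` library. The names are invented, and in practice a fire drill test like this might be run once by hand rather than kept in an automated suite.

```python
# A rough sketch of the "web service is down" fire drill, assuming a
# hypothetical fetch_dashboard_data() wrapper around requests.get().
# These names are invented; they are not from any real product.
import requests
from unittest import mock

def fetch_dashboard_data(url):
    """Hypothetical product code: pull data from an external web service."""
    try:
        response = requests.get(url, timeout=5)
        response.raise_for_status()
        return response.json()
    except requests.RequestException:
        # Graceful degradation: return nothing instead of crashing the page.
        return None

def test_dashboard_survives_downed_service():
    # Simulate the outage by making every call raise a connection error.
    with mock.patch("requests.get", side_effect=requests.ConnectionError):
        assert fetch_dashboard_data("https://example.com/api/data") is None

if __name__ == "__main__":
    test_dashboard_survives_downed_service()
    print("Survived the simulated outage.")
```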
Too often, we testers have so much functional testing to do that we overlook the non-functional testing or save it for the end. If we give these non-functional tests a catchy name like “Fire Drill Test”, maybe it will help us remember them during test brainstorming.
I attended John Stevenson’s great talk and workshop at Monday night’s Software Testing Club Atlanta. I’m happy to report the meeting had about 15 in-person attendees and zero virtual attendees. Maybe someone read my post.
John is a thoughtful and passionate tester. He managed to hold our attention for 3 hours! Here are the highlights from my notes:
- The human brain can store 3 TB of information; this is only one millionth of the new information released on the internet every day.
- Overstimulation leads to mental illness.
- John showed us a picture and asked what we saw. We saw a tree, flowers, the sun, etc. Then John told us the picture was randomly generated. The point? People see patterns even when they don’t exist. Presumably to make sense out of information overload.
- Don’t tell your testing stories with numbers. “A statistician drowned while crossing a river with an average depth of 3 feet.” Isn’t that like saying, “99 percent of my tests passed”?
- Don’t be a tester who waits until testing “is done” to communicate the results. Communicate the test results you collected today. I love this and plan to blog about it.
- Testers, stop following the same routines. Try doing something different. You might end up discovering new information.
- Testers, stop hiding what you do. Get better at transparency and explaining your testing. Put your tests on a public wiki.
- Critical thinking takes practice. It is a skill.
- “The Pause”. Huh? Really? So? Great critical thinking model explained in brief here.
- A model for skepticism. FiLCHeRS.
- If you challenge someone’s view, take care to respect it.
- Ways to deal with information overload:
- Slow down.
- Don’t over commit.
- Don’t fear mistakes. But do learn from them. This is how children learn. Play.
- (Testing specific) Make your testing commitments short so you can throw them away without losing much. Don’t write some elaborate test that takes a week to write because it just might turn out to be the wrong test.
- You spend a third of your life at work. Figure out how to enjoy work.
- John led us through a series of group activities including the following:
- Playing Disruptus to practice creative thinking (i.e., playing SCAMPER).
- Playing Story War to practice bug advocacy.
- Determining whether each of the 5 test phases (Documentation, Planning, Execution, Analysis, Reporting) uses creative thinking or critical thinking.
- Books John referenced that I would like to read:
- The Signal and the Noise – Nate Silver
- Thinking Fast and Slow – Daniel Kahneman
- You are Not So Smart – David McRaney
At this week’s metric-themed Atlanta Scrum User’s Group meetup, I asked the audience if they knew of any metrics (that could not be gamed) that could trigger rewards for development teams. The reaction was as if I had just praised Planned Parenthood at a Pro-life rally…everyone talking over each other to convince me I was wrong to even ask.
The facilitator later rewarded me with a door prize for the most controversial question. What?
Maybe my development team and I are on a different planet than the Agile-istas I encountered last night. Because we are currently doing what I proposed, and it doesn’t appear to be causing any harm.
Currently, if 135 story points are delivered in the prior month AND no showstopper production bugs were discovered, everyone on the team gets a free half-day off to use as they see fit. We’ve achieved it twice in the past year. The most enthusiastic part of each retrospective is reviewing the prior month’s metrics and determining whether we reached our “stretch goal”. It’s…fun. Let me repeat that. It’s actually fun to reward yourself for extraordinary work.
Last night’s question was part of a quest I’ve been on to find a better reward trigger. Throughput and quality are what we were aiming for, and I think we’ve gotten close. I would like to find a better metric than velocity, however, because story point estimation is fuzzy. If I could easily measure “customer delight”, I would.
At the meeting, I learned about the Class of Service metric. And I’m mulling over the idea of suggesting a “Dev Forward” % stretch goal for a given time period.
But what is this nerve I keep touching about rewards for good work?
On weekends, when I perform an extraordinary task around the house like getting up on the roof to repair a leak, fixing an electrical issue, constructing built-in furniture to solve a space problem, finishing a particularly large batch of “Thank You” cards, or whatever…I like to reward myself with a beer, buying a new power tool, relaxing in front of the TV, taking a long hot shower, etc.
Rewards rock. What’s wrong with treating ourselves at work too?
Warning: This has very little to do with testing.
Additional Warning: I’m about to gripe.
I attended the 3rd Software Testing Club Atlanta meetup Wednesday. Some of the meeting was spent fiddling with a virtual task board, attempting to accommodate the local people who dialed in to the meeting.
IT is currently crazy about low-tech dashboards (e.g., sticky notes on a wall). But we keep trying to virtualize them. IMO, virtualizing stickies on a wall is silly. The purpose is to huddle around them, in person, and ditch the complicated software that so often wastes more time than it saves.
IMO, the whole purpose of a local testing club that meets over beer and pizza is to meet over beer and pizza...in person, and engage in the kind of efficient discussion that is best done in person. Anything else defeats the purpose of a “local” testing club. If I wanted to dial in and talk about testing over the phone, it wouldn’t have to be with local people.
I’m sad to see in-person meetings increasingly being replaced by virtual ones. But IMO, joining virtual attendees to real-life meetings can be even worse. Either make everyone virtual or make everyone meet physically.
Yes, I’m a virtual meeting curmudgeon. I accept that virtual connections have their advantages, and I allow my team to work from home up to three days a week. But I still firmly believe you can’t beat good old-fashioned, real-life, in-person discussions.
Yesterday, a tester asked me how to get promoted. I said, “start learning about your craft”. They said, "but all the testers I know don't learn anything from testing conferences or books".
And this is what makes testing such a cool career choice for some of us! It's full of apathetic under-achievers. So if you want to be extraordinary, it's relatively easy. You have little competition! Come back from a conference and attempt to implement a mere three ideas, and you've probably advanced testing at your organization more than at any time in the past.
Why is this? Maybe because we fell into this career by accident. Maybe because it's a newish career with few leading experts. Maybe it’s because we can still make decent money on a software development team by merely trying to act like a user. I don’t know. What I do know is, the more I study testing, the more I love my job, and the more promotions I get.
This crummy little humble testing blog made it onto a list of the world’s top 50 testing blogs for several years. It’s not because I’m awesome. It’s because there weren’t that many testing blogs!
Put a little effort into learning more about testing. Maybe something good will happen.
I would much rather test than create test documents. Adam White once told me, “If you’re not operating the product, you’re not testing”.
It’s soooooo easy to skip all documentation and dive right into the testing. It normally results in productive testing, and nobody misses the documents. Until…three years later, when a programmer makes a little change to a module that hasn’t been tested since. The team says the change is high risk and asks you which tests you executed three years ago and how long they took.
Fair questions. I think we, as testers, should be able to answer. Even the most minimal test documentation (e.g., test fragments written in notepad) should be able to answer those questions.
If we can’t answer relatively quickly, we may want to consider recording better test documentation.
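If it helps, here is one minimal sketch of such a record, assuming a hypothetical plain-text session log. The field names and file name are invented; the point is only that a one-line-per-session note is enough to answer those two questions years later.

```python
# A minimal sketch of "test fragments in notepad", assuming a hypothetical
# plain-text session log; field names and file name are invented for illustration.
from datetime import date

def log_test_session(area, charter, minutes, result, path="test-sessions.txt"):
    """Append one line per testing session: enough to answer
    'what did you run?' and 'how long did it take?' years later."""
    line = f"{date.today().isoformat()} | {area} | {charter} | {minutes} min | {result}\n"
    with open(path, "a", encoding="utf-8") as log:
        log.write(line)

# Example note you'd be glad to find three years from now:
log_test_session(
    area="nightly-import",
    charter="Catch-up after three missed days of FTP files",
    minutes=90,
    result="passed",
)
```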
Warning: This is mostly a narcissistic post that will add little value to the testing community.
I’ve been pretty depressed about my proposal not getting picked for Let’s Test 2014. Each of my proposals has been picked for STPCon and STAR over the past three years; I guess I was getting cocky. I put all my eggs in one basket and only proposed to Let’s Test. My wife and I were planning to make a vacation out of it…our first trip to Scandinavia together.
Despite my rejection, my VP graciously offered to send me as an attendee, but I wallowed in my own self-pity and turned her down. In fact, I decided not to attend any test conferences in 2014. Pretty bitter, huh?
I know I could have pulled off a kick-ass talk with the fairly original and edgy topic I submitted. I dropped names. I got referrals from the right people. My topic fit the conference theme perfectly, IMO. So why didn’t I make the cut?
The Let’s Test program chairs have not responded to my request for “what I could have done differently to get picked”. Lee Copeland, the STAR program chair was always helpful in that respect. But I don’t blame the Let’s Test program chairs. Apparently program chairs have an exhausting job and they get requests for feedback from hundreds of rejected speakers.
Fortunately, my mentor and friend, Michael Bolton read my proposal and gave me some good honest feedback on why I didn’t get picked. He summarized his feedback into three points which I’ll paraphrase:
- A successful pitch to Let’s Test involves positioning your talk right in the strike zone of an experience report. You seemed to leave out the teensy, weensy little detail that you’re an N-year test manager at Turner, and that you’re telling a story about that here.
- Apropos of that, tell us about the story that you’re going to tell. You’ve got a bunch of points listed out, but they seem disjointed and the through line isn’t clear to me. For example, what does the second point have to do with the first? The fourth with the third?
- Drop the dopey idea of “learning objectives”, which is far less important at Let’s Test than it may be at other conferences.
So there it is. One of my big testing-related failure stories. Wish me luck next year when I give it another go, for Let’s Test 2015…man, that seems a long way off.