
Every once in a while, progs amaze me by casually offering an off-the-cuff solution to one of my major testing headaches.

I was trying to recreate a complex user scenario involving precise timing.  I needed a way to make a service hang (blocking other services), sneak some specific test actions through in the meantime, then unhang the service.  Just when I had assumed this would be way too complicated for me, my prog offered, “just use a SQL hint to put an exclusive lock on the table”.

A SQL hint is an addition to the query that instructs the database engine to do something extra, overriding its normal decisions.  The TABLOCK hint, wrapped in a transaction, allows you to put an exclusive lock on a table, preventing other transactions from reading or modifying said table.  I’m using MS SQL Server, but Oracle supports a similar technique.

Here is how it works in a generic test example:

  1. State A exists.
  2. Lock the table:

    BEGIN TRANSACTION
    UPDATE tblCustomers WITH (TABLOCK)
    SET Name = 'Fred'
    WHERE ID = 10

    Note: The transaction remains open.  The update statement is irrelevant because we are going to roll it back.
  3. Trigger the action you want to hang.  For example: maybe the current UI state is ready for female customers.  You trigger a service that returns female customers from tblCustomers to display them on the UI.  Take your time; it won’t complete until the TABLOCK is released.
  4. Perform the action you are trying to sneak in.  For example: maybe you change the UI to expect male customers.
  5. Now State B exists instead of State A.
  6. Unlock the table:

    ROLLBACK TRANSACTION

    Note: execute the above statement in the same query session where the lock from step 2 was taken.  The action that was hanging in step 3 completes, and in this example, female customers attempt to load into a screen expecting male customers.  A scripted version of this lock/unlock sequence is sketched below.
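If you prefer to drive the lock from a script, here is a rough sketch of the same sequence in Python.  It is an illustration of the idea only: it assumes the pyodbc library and an ODBC DSN named “TestDB”, and it reuses the table, column, and values from the example above.

    # Rough sketch: hold an exclusive table lock around some manual test actions.
    # Assumptions: pyodbc is installed and "TestDB" is a valid ODBC DSN for your SQL Server.
    import pyodbc

    conn = pyodbc.connect("DSN=TestDB", autocommit=False)  # manual transaction control
    cursor = conn.cursor()

    # Step 2: take the exclusive table lock and leave the transaction open.
    cursor.execute("UPDATE tblCustomers WITH (TABLOCK) SET Name = 'Fred' WHERE ID = 10")

    try:
        # Steps 3-5: trigger the action you want to hang, then sneak in your test actions.
        input("Table locked.  Perform your test actions, then press Enter to release...")
    finally:
        # Step 6: release the lock from the same session that took it.
        conn.rollback()
        conn.close()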

So the next time you have a test idea that is too complex to execute in real time, try doing it in bullet time (using a TABLOCK hint to slow things down).

In Part 1 I focused on removing misleading details and unnecessary repro steps from bug reports.  I tried to make the case that a tester’s job is to narrow a bug’s repro steps down to only those actions or data that are actually required to experience the bug.

Now let’s discuss the exceptions and how to handle them.  Here are two reasons to include non-required data in bug reports:

  • It is not feasible to rule out certain data as relevant.
  • It saves the person following the repro steps time, because they can reuse data already identified by the author.

IMO, the repro steps (and the bug report itself) should reflect that said data is or may be non-required.  I do this by providing the extra data in a comment for a specific repro step.  For example:

  1. Create a new order.
  2. Add at least two line items to the new order. (I added Pens and Pencils but it appears that any two line items cause the bug)
  3. Submit the new order.

Expected Results: No errors are thrown.
Actual Results: “Object reference not found” error is thrown.

In some cases, there may have been significant searching to find data in a specific state.  One can save time for the bug report reader by providing tips on existing data.  For example, maybe the bug only occurs for orders in an “On Hold” state:

  1. Open an “On Hold” order.  (I used Order# 10054)
  2. Add at least two line items to the order.
  3. Submit the order.

Expected Results: No errors are thrown.
Actual Results: “Object reference not found” error is thrown.

Again, the core repro steps more or less capture the relevant data pertaining to the bug.  The notes speed up the repro process.  Look at how the opposite approach may mislead:

  1. Open Order# 10054.
  2. Add at least two line items to the order.
  3. Submit the order.

Expected Results: No errors are thrown.
Actual Results: “Object reference not found” error is thrown.

The above version makes the bug look like a problem specific to Order# 10054.  A skilled tester should have figured out that the “On Hold” order state is key.

In conclusion, start with some repro steps.  Question each of your steps until you narrow them down to only the required data.  Then add any notes to aid the reader if necessary, being careful to offset those notes from the core steps.  That’s what I do.  What’s your approach?

Congratulations, you just found a bug in an office supply commerce application!  It appears that any time you submit an order with more than one line item, an “object reference not found” error is thrown.  Cool!  Here are the repro steps you logged in the bug report:

  1. Create a new order.
  2. Add Pencils to the new order.
  3. Add Pens to the new order.
  4. Select UPS for shipping method.
  5. Select Credit Card for payment method.
  6. Submit the new order.

Expected Results: No errors are thrown.
Actual Results: “Object reference not found” error is thrown.

Those are pretty solid repro steps.  Every time one performs them, the Actual Results occur.  Nevertheless, do you see any problems with the above repro steps?

Hmmmmm….

What if you depend on just the repro steps to understand the bug?

Does this bug only repro if pens and pencils are ordered, if the UPS shipping method is selected, or if the Credit Card payment method is selected?

Those details capture what the tester did, but might they mislead?  Anytime you add specific data to your repro steps, you may be implying, to someone, that said data is required to repro the bug (e.g., perhaps the “pens” data in our DB is bad).  A more skilled tester may log the bug like this:

  1. Create a new order.
  2. Add at least two line items to the new order.
  3. Select a shipping method.
  4. Select a payment method.
  5. Submit the new order.

Okay, that seems considerably better to me.  However, we can still improve it.  Is it reasonable to assume the programmers, testers, and business analysts on our project team know how to submit an order?  Do they already understand that in order to submit an order, one must specify shipping and payment methods?  Yes!

Thus, an even more skilled tester may end up with these repro steps:

  1. Create a new order.
  2. Add at least two line items to the new order.
  3. Submit the new order.

There is nothing extra to cloud our interpretation.  The steps look so minimal and easy, anyone could do them!  Now that’s a set of repro steps we can be proud of.

[Photo]

This clearly makes her the coolest kid at daycare.

If Josephine had been born 1,000 years ago, she would probably have become a software tester like her dad.  Back then, trades often remained in the family.  But in this era, she can be whatever she wants to be when she grows up.


I doubt she will become a software tester.  However, I will teach her how important software testers are.  Josie will grow up with Generation Z, a generation that will use software for almost everything.  The first appliances she buys will have sophisticated software running them.  She will probably be able to see if she needs more eggs by logging in to a virtual version of her refrigerator from work. 

And why do you think that software will work?  Because of testers!

Josie will be able to process information herself at lightning speed.  So I figure, if I start early enough, she can start making suggestions to improve the way we test.

But first she has to learn to talk.

Do your kids appreciate your testing job?

For most of us, testing for “coolness” is not at the top of our quality list.  Our users don’t have to buy what we test.  Instead, they get forced to use it by their employer.  Nevertheless, coolness can’t hurt. 

As far as testing for it…good luck.  It does not appear to be as straightforward as some may think.

I attended a mini-UX Conference earlier this week and saw Karen Holtzblatt, CEO and founder of InContext, speak.  Her keynote was the highlight of the conference for me, mostly because she was fun to watch.  She described the findings of 90 interviews and 2000 survey results, where her company asked people to show them “cool” things and explain why they considered them cool.

Her conclusion was that software aesthetics are way less important than the following four aspects:

  1. Accomplishments – When using your software, people need to feel a sense of accomplishment without disrupting the momentum of their lives.  They need to feel like they are getting something done that was otherwise difficult.  They need to do this without giving up any part of their life.  Example: Can they accomplish something while waiting in line?
  2. Connection – When using your software, they should be motivated to connect with people they actually care about (e.g., not Facebook friends).  These connections should be enriched in some manner.  Example: Were they able to share it with Mom?  Did they talk about it over Thanksgiving dinner?
  3. Identity - When using your software, they should feel like they’re not alone.  They should be asking themselves, “Who am I?”, “Do I fit in with these other people?”.  They should be able to share their identity with joy.
  4. Sensation – When using your software, they should experience a core sensory pleasure.  Examples: Can they interact with it in a fresh way via some new interface?  Can they see or hear something delightful?

Here are a few other notes I took:

  • Modern users have no tolerance for anything but the most amazing experience.
  • The app should help them get from thought to action, nothing in between.
  • Users expect software to gather all the data they need and think for them.

I guess maybe I’ll think twice the next time I feel like saying, “just publish the user procedures, they’ll get it eventually”.

Last week we had an awesome Tester Lightning Talk session here at my company.  Topics included:

  • Mind Maps
  • Cross-Browser Test Emulation
  • How to Bribe Your Developers
  • Performance Testing Defined
  • Managing Multiple Agile Projects
  • Integration Testing Sans Personal Pronouns
  • Turning VSTS Test Results Files Into Test Reports
  • Getting Back to Work After Leave
  • Black Swans And Why Testers Should Care

The “Performance Testing Defined” talk inspired me to put my own twist on it and blog.  Here goes…

 

[Graphic: the types of performance testing grouped under one “Performance Testing” umbrella]

 

The terms in the above graphic are often misused and interchanged.  I will paraphrase from my lightning talk notes:

Baseline Testing – Fewer users than we expect in prod.  This is like when manual testers perform a user scenario and use a stopwatch to time it.  It could also be an automated load test where we use fewer than the expected number of users to generate load.
Load Testing – The number of users we expect in prod.  A real-world, realistic scenario.
Stress Testing – More users than we expect in prod.  An obscene number of users.  Used to determine the breaking point.  After said test, the tester will be able to say, “With more than 2000 users, the system starts to drag.  With 5000 users, the system crashes.”
Stability Testing – Run the test continuously over a period of time (e.g., 24 hours, 1 week) to see if anything happens.  For example, you may find a memory leak.
Spike Testing – Think TicketMaster.  What happens to your system when it suddenly jumps from 100 simultaneous users to 5000 simultaneous users for a short period of time?

There.  Now you can talk like a performance tester and help your team discuss their needs. 

As far as building these tests goes, at the most basic level, you really only need one check (AKA automated test).  Said check should simulate something user-like, if possible.  In the non-web-based world (which I live in), this check may be one or more service calls.  You probably do not want an automated check at the UI level; you would need an army of clients to generate load.  After all, your UI will only ever have a load of 1 user, right?  What you’re concerned with is how the servers handle the load.  So your check need only be concerned with the performance before the payload gets handed back to the client.

The check is probably the most challenging part of performance testing.  Once you have your check, the economies of scale begin.  You can use that same check as the guts of most of your performance testing.  The main variables in each are user load and duration.
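To make that concrete, here is a minimal sketch in Python of a single check reused across the performance test types above.  Everything in it is a stand-in: call_service() is a placeholder for whatever user-like service call your SUT exposes, and the 2-second decision rule is invented for the example.

    # Minimal load-test sketch: one check, reused by varying user count and duration.
    import time
    from concurrent.futures import ThreadPoolExecutor

    def call_service():
        """Placeholder for one user-like call against the SUT; swap in a real service call."""
        time.sleep(0.1)

    def check():
        """One check: make the call, time it, apply a decision rule (respond within 2 seconds)."""
        start = time.perf_counter()
        call_service()
        elapsed = time.perf_counter() - start
        return elapsed < 2.0, elapsed

    def run_load(users, duration_seconds):
        """Run the same check under a given user load for a given duration."""
        results = []
        deadline = time.time() + duration_seconds

        def user_loop():
            while time.time() < deadline:
                results.append(check())

        with ThreadPoolExecutor(max_workers=users) as pool:
            for _ in range(users):
                pool.submit(user_loop)

        passed = sum(1 for ok, _ in results if ok)
        print(f"{users} users for {duration_seconds}s: {passed}/{len(results)} checks passed")

    # Baseline vs. load vs. stress is mostly a different (users, duration) pair:
    # run_load(users=10, duration_seconds=60)      # baseline
    # run_load(users=500, duration_seconds=600)    # load
    # run_load(users=5000, duration_seconds=600)   # stress
    # Stability: keep the duration long (e.g., 24 hours).  Spike: ramp users up and back down quickly.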

Warning: I’m certainly an amateur when it comes to performance testing.  Please chime in with your corrections and suggestions.

Per one of my favorite podcasts, WNYC’s On the Media, journalists are finding it increasingly difficult to check facts at a pace that keeps up with modern news coverage.  To be successful, they need dedicated fact checkers.  Seem familiar yet?

Journalists depend on these fact checkers to keep them out of trouble.  And fact checkers need to have their own skill sets, allowing them to focus on fact checking.  Fact checkers have to be creative and use various tricks, like only following trustworthy people on Twitter and speaking different languages to understand the broader picture.  How about now, seem familiar?

Okay, try this:  Craig Silverman, founder of Regret the Error, a media error reporting blog, said “typically people only notice fact checkers if some terrible mistake has been made”.  Now it seems familiar, right?

The audience of fact checkers or software testers has no idea how many errors were caught before the story or software was released.  They only know about what wasn’t caught.

Sometimes I have a revenge fantasy that goes something like this:

If a user finds a bug and says, “that’s so obvious, why didn’t they catch this”, their software will immediately revert to the untested version.

…Maybe some tester love will start to flow then.

There’s no incentive to look the other way when we notice bugs at the last minute.

We are planning to release a HUGE feature to production tomorrow.  Oops!  Wouldn’t you know it…we found more bugs.

Back in the dark ages, with Scrum, it’s possible we may have talked ourselves into justifying the release without the bug fixes; “these aren’t terrible…maybe users won’t notice…we can always patch production later”.

But with Kanban, it went something like this:

“…hey, let’s not release tomorrow.  Let’s give ourselves an extra day.”

  • Nobody has to work late.
  • No iteration planning needs to be rejiggered.
  • There’s no set, established maintenance window restricting our flexibility.
  • Quality did not fall victim to an iteration schedule.
  • We don’t need to publish any known bugs (i.e., there won’t be any).

I just came from an Escape Review Meeting.  Or as some like to call it, a “Blame Review Meeting”.  I can’t help but feel empathy for one of the testers who felt a bit…blamed.

With each production bug, we ask, “Could we do something to catch bugs of this nature?”.  The System 1 response is “no, way too difficult to expect a test to have caught it”.  But after 5 minutes of discussion, the System 2 response emerges, “yes, I can imagine a suite of tests thorough enough to have caught it, we should have tests for all that”.  Ouch, this can really start to weigh on the poor tester.

So what’s a tester to do?

  1. First, consider meekness.  As counterintuitive as it seems, I believe defending your test approach is not going to win respect.  IMO, there is always room for improvement.  People respect those who are open to criticism and new ideas.
  2. Second, entertain the advice but don’t promise the world.  Tell them about the Orange Juice Test (see below).

The Orange Juice Test is from Jerry Weinberg’s book, The Secrets of Consulting.  I’ll paraphrase it:

A client asked three different hotels to supply said client with 700 glasses of fresh squeezed orange juice tomorrow morning, served at the same time.  Hotel #1 said “there’s no way”.  Hotel #2 said “no problem”.  Hotel #3 said “we can do that, but here’s what it’s going to cost you”.  The client didn’t really want orange juice.  They picked Hotel #3.

If the team wants you to take on new test responsibilities or coverage areas, there is probably a cost.  What are you going to give up?  Speed?  Other test coverage?  Your kids?  Make the costs clear, let the team decide, and there should be no additional pain on your part.

Remember, you’re a tester, relax.

One of my tester colleagues and I had an engaging discussion the other day. 

If a test failure is not caused by a problem in the system-under-test, should the tester bother to say the test failed? 

My position is: No. 

If a test fails but there is no problem with the system-under-test, it seems to me it’s a bad test.  Fix the test or ignore the results.  Explaining that a test failure is nothing to be concerned with gives the project team a net gain of nothing.  (Note: If the failure has been published, my position changes; the failure should be explained.)

The context of our discussion was the test automation space. I think test automaters, for some reason, feel compelled to announce automated check failures in one breath, and in the next, explain why these failures should not matter.  “Two automated checks failed…but it’s because the data was not as expected, so I’m not concerned” or “ten automated checks are still failing but it’s because something in the system-under-test changed and the automated checks broke…so I’m not concerned”. 

My guess is, project teams and stakeholders don’t care if tests passed or failed.  They care about what those passes and failures reveal about the system-under-test.  See the difference?

Did the investigation of the failed test reveal anything interesting about the system-under-test?  If so, share what it revealed.  The fact that the investigation was triggered by a bad test is not interesting.

If we’re not careful, Test Automation can warp our behavior.  IMO, a good way of understanding how to behave in the test automation space is to pretend your automated checks are sapient (AKA “manual”) tests.  If a sapient tester gets different results than they expected, but later realizes their expectations were wrong, they don’t bother explaining their recent revelation to the project team.  A sapient tester would not say, “I thought I found a problem, but then I realized I didn’t.”  Does that help anyone?

My System 1 thinking says “no”.  I’ve often heard separation of duties makes testers valuable.

Let’s explore this.

A programmer and a tester are both working on a feature requiring a complex data pull.  The tester knows SQL and the business data better than the programmer.

If Testers Write Source Code:

The tester writes the query and hands it to the programmer.  Two weeks later, as part of the “testing phase”, the tester tests the query (which they wrote themselves) and finds 0 bugs.  Is anything dysfunctional about that? 

If Testers do NOT Write Source Code:

The programmer struggles but manages to cobble some SQL together.  In parallel, the tester writes their own SQL and puts it in an automated check.  During the “testing phase”, the tester compares the results of their SQL with that of the programmer’s and finds 10 bugs.  Is anything dysfunctional about that? 

After RST class (see my Four Days With Michael Bolton post), Bolton did a short critical thinking for testers workshop.  If you get an opportunity to attend one of these at a conference or elsewhere, it’s time well spent.  The exercises were great, but I won’t blog about them because I don’t want to give them away.  Here is what I found in my notes…

  • There are two types of thinking:
    1. System 1 Thinking – You use it all the time to make quick answers.  It works fine as long as things are not complex.
    2. System 2 Thinking – This thinking is lazy, you have to wake it up.
  • If you want to be excellent at testing, you need to use System 2 Thinking.  Testing is not a straightforward technical problem because we are creating stuff that is largely invisible.
  • Don’t plan or execute tests until you obtain context about the test mission.
  • Leaping to assumptions carries risk.  Don’t build a network of assumptions.
  • Avoid assumptions when:
    • critical things depend on them
    • the assumption is unlikely to be true
    • the assumption would be dangerous if left undeclared
  • Huh?  Really?  So?   (James Bach’s critical thinking heuristic)
    • Huh? – Do I really understand?
    • Really? – How do I know what you say is true?
    • So? – Is that the only solution?
  • “Rule of Three” – If you haven't thought of at least three plausible explanations, you’re not thinking critically enough.
  • Verbal Heuristics: Words to help you think critically and/or dig up hidden assumptions.
  • Mary Had a Little Lamb Heuristic – emphasize each word in that phrase and see where it takes you.
  • Change “the” to “a” Heuristic:
    • “the killer bug” vs. “a killer bug”
    • “the deadline” vs. “a deadline”
  • “Unless” Heuristic:  I’m done testing unless…you have other ideas
  • “Except” Heuristic:  Every test must have expected results except those we have no idea what to expect from.
  • “So Far” Heuristic:  I’m not aware of any problems…so far
  • “Yet” Heuristic: Repeatable tests are fundamentally more valuable, yet they never seem to find bugs.
  • “Compared to what?” Heuristic: Repeatable tests are fundamentally more valuable…compared to what?
  • A tester’s job is to preserve uncertainty when everyone around us is certain.
  • “Safety Language” is a precise way of speaking which differentiates between observation and inference.  Safety Language is a strong trigger for critical thinking.
    • “You may be right” is a great way to end an argument.
    • “It seems to me” is a great way to begin an observation.
    • Instead of “you should do this” try “you may want to do this”.
    • Instead of “it works” try “it meets the requirements to some degree”
    • All the verbal heuristics above can help us speak precisely.

See Part 1 for intro.

  • People don’t make decisions based on numbers, they do so based on feelings (about numbers).
  • Asking for ROI numbers for test automation or social media infrastructure does not make sense because those are not investments, those are expenses.  Value from an automation tool is not quantifiable.  It does not replace a test a human can perform.  It is not even a test.  It is a “check”.
  • Many people say they want a “metric” when what they really want is a “measurement”.  A “metric” allows you to stick a number on an observation.  A “measurement”, per Jerry Weinberg, is anything that allows us to make observations we can rely on.  A measurement is about evaluating the difference between what we have and what we think we have.
  • If someone asks for a metric, you may want to ask them what type of information they want to know (instead of providing them with a metric).
  • When something is presented as a “problem for testing”, try reframing it to “a problem testing can solve”.
  • Requirements are not a thing.  Requirements are not the same as a requirements document.  Requirements are an abstract construct.  It is okay to say the requirements document is in conflict with the requirements.  Don’t ever say “the requirements are incomplete”.  Requirements are not something that can be incomplete.  Requirements are complete before you even know they exist, before anyone attempts to write a requirements document.
  • Skilled testers can accelerate development by revealing requirements.  Who cares what the requirement document says.
  • When testing, don’t get hung up on “completeness”.  Settle for adequate.  Same for requirement documents.  Example: Does your employee manual say “wear pants to work”?  Do you know how to get to your kid’s school without knowing the address?
  • Session-Based Test Management (SBTM) emphasizes conversation over documentation.  It’s better to know where your kid’s school is than to know the address.
  • SBTM requires 4 things:
    • Charter
    • Time-boxed test session
    • Reviewable results
    • Debrief
  • The purpose of a program is to provide value to people.  Maybe testing is more than checking.
  • Quality is more than the absence of bugs.
  • Don’t tell testers to “make sure it works”.  Tell them to “find out where it won’t work.”  (Yikes, that does rub against the grain of my We Test To Find Out If Software *Can* Work post, but I still believe both.)
  • Maybe when something goes wrong in production, it’s not the beginning of a crisis, it’s the end of an illusion.


After failing in production for a third time, the team lead’s passive aggressive tendencies became apparent in his bug report title.  Can you blame him?

It all depends on context, of course.  But if three attempts to get something working in production still fail…there may be a larger problem somewhere. 

That got me thinking.  Maybe we should add passive aggressive suffixes to all our “escapes” (bugs not caught in test).  It would serve to embarrass us and remind us that we can do better.

  • “…fail I” would not be so bad.
  • “…fail II” would be embarrassing.
  • “…fail III” should make us ask for help testing and coding.
  • “…fail IV” should make us ask to be transferred to a more suitable project.
  • By “…fail V” we should be taking our users out to lunch.
  • “…fail VI”: I’ve always wanted to be a marine biologist; no time like the present.

See Part 1 for intro.

  • There are two reasons why your bugs are not getting fixed:
    1. There is more important stuff going on
    2. You are not explaining them well.
  • Testers need to be good at articulating why we think something is a bug.  One approach is PEW.  State the Problem, provide an Example, explain Why it matters.
  • “How many test cases?” is usually a silly question.
  • There are two reasons why all tests do not get executed: 
    1. The tester didn’t think of it.
    2. The tester thought of it but decided not to execute it.  Hopefully it’s the latter.  It may be worthwhile to brainstorm on tests.
  • One way to communicate coverage to stakeholders is to use a mind map.
  • If you get bored testing, you may be doing something wrong (e.g., you are doing repetitive tests, you are not finding anything interesting).
  • Testing is about looking for a “problem”.  A “problem” is an undesirable situation that is solvable.
  • (I need to stop being so militant about this) All bugs don’t need repro steps.  Repro steps may be expensive.
  • Consider referencing your oracle (the way of recognizing a problem you used to find the bug) in your bug report.
  • When asked to perform significantly time consuming or complex testing, consider the Orange Juice Test:  A client asked three different hotels if the hotels could supply said client with two thousand glasses of fresh squeezed orange juice tomorrow morning.  Hotel #1 said “no”.  Hotel #2 said “yes”.  Hotel #3 said “yes, but here’s what it’s going to cost you”.  The client didn’t really want orange juice.  They picked Hotel #3.
  • No test can tell us about the future.
  • Nobody really knows what 100% test coverage means.  Therefore, it may not make sense to describe test coverage as a percentage.  Instead, try explaining it as the extent to which we have travelled over some agreed upon map.  And don’t talk about coverage unless you talk about the kind of coverage you are talking about (e.g., Functions, Platforms, Data, Time, etc.)
  • Asking how long a testing phase should be is like asking how long I have to look out the windshield as I drive to Seattle.
  • Skilled testers are like crime scene investigators.  Testers are not in control (the police are).  Testers give the police the information they need.  If there is another crime committed, you may not have time to investigate as much with the current crime scene.
  • No test can prove a theory is correct.  A test can only disprove it.
  • (I still have a hard time with this one) Exploratory Testing (ET) is not an activity that one can do. It is not a technique.  It is an approach.  A test is exploratory if the ideas are coming from the tester in the here and now.  ET can be automated.  Scripts come from exploration.
    • Exploratory behavior = Value seeking.
    • Scripted behavior = Task seeking
  • Tests should not be concerned with the repeatability of computers.  It’s important to induce variation.
  • ET is a structured approach.  One of the most important structures is the testing story.  A skilled tester should be able to tell three stories:
    1. A story about the product (e.g., is the product any good?).
    2. A story about how you tested it (e.g., how do I know?  Because I tested it by doing this…).
    3. A story about the value of the testing (e.g., here is why you should be pleased with my work…).

Rapid Software Testing (RST) is for situations where you have to test a product right now, under conditions of uncertainty, in a way that stands up to scrutiny.

I don’t want to walk you through the exercises, videos, and discussions Michael Bolton used in class because…well, it’s his class, and you should take it!  But I will share some bite-sized test wisdom I wrote in my notebook during class.

Be careful, most of these are heuristic…

  • An assumption is the opposite of a test.  (I love that!)
  • Our job is not to be “done” with something.  It’s to find out interesting things that others do not already know.
  • A tester’s job is to see the complexity behind the seemingly simple and to see the simplicity behind the seemingly complex.
  • “Test” is a verb.  Not a noun.  It’s not an artifact.  It’s something one does.
  • Testers do not put the quality in, they help others put the quality in.  Testers do not assure quality, but if we must use the term “QA”, maybe it should stand for Quality Assistance.
  • Testers are like editors for writers.  No matter how well a writer tests their work, a good editor can normally find mistakes.
  • Programmers do lots of testing.  But they need help finding the last few problems.
  • A tester’s job is to remain uncertain, to maintain the idea that it might not work.
  • Providing a “QA Certification” is like your manager making you say “I will take the blame…”
  • Testers don’t matter because the program is not intended for them.
  • Discussions about quality are always political or emotional. – Jerry Weinberg
  • An “Issue” is anything that threatens the value of our testing.  Issues should be reported.  They may be more important than bugs because they give bugs time to hide.
  • Threats to testability are “issues”.  Testability depends on two things:
    • Visibility (e.g., log files)
    • Controllability – the capacity to make the program do stuff (e.g., can I update the DB and config files?)
  • “Positive Test” – Fulfills every required assumption.  Entering a bad password is a positive test if we’ve already established how the bad password should be handled.
  • What is testing?  Getting answers.
  • A useful test approach:
    • Know your mission
    • Consider building a model of the product first
    • Begin sympathetically
    • Then chase risks
  • The first few moments of a product should be based on learning.
  • There’s always more than meets the eye.
  • Maps and models that you build don’t have to be right.  They just need to get people thinking.
  • If you don’t have enough time to test, one trick to get more time is to find important bugs.  People will generally delay releases.  (But don’t sit on bugs until the last minute, of course.  Report them as soon as you’re aware.)
  • Don’t forget to “employ the pause”.  Take the time to learn something new every now and then.

Yes, Michael Bolton is one of my biggest mentors.  And you’ve read a lot of fanboy posts on this blog.  But before I start spewing stuff from my RST notes, I want to post a disagreement I had with Michael Bolton (and RST).  After a 15-minute discussion, he weakened my position.  But I still disagree with this statement:

We don’t test to find out if something works.  We test to find out if it doesn’t work.

Here is a reason I disagree: knowing at least one way software can work may be more valuable than knowing a thousand ways it can NOT work.

Example: Your product needs to help users cross a river.  Which is more valuable to your users? 

  • “hey users, if you step on these exact rocks, you have a good chance of  successfully crossing the river”
  • “hey users, here are a bunch of ways you can NOT cross the river: jump across, swim past the alligators, use the old rickety bridge, swing across on a vine, drain the river, dig a tunnel under it, etc.”

Users only need it to work one way.  And if it solves a big enough problem, IMO, those users will walk across the rocks.

Sure, finding the problems is important too.  Really important!  But if someone puts a gun to my head and says I only get one test, it’s going to be a happy path test. 

Bolton referred us to the following discussion between James Bach and Michael Kelly (http://michaeldkelly.com/media/  then click on “Is there a problem here?”).  I thought it would change my mind, as most James Bach lessons do.  It hasn’t…yet. 

I might be wrong.

I finally pulled it off!  My company brought Michael Bolton in to teach a private 3-day Rapid Software Testing course and stick around for a 4th day of workshops and consulting.  On the fourth day, I had Michael meet with QA Managers to give his talk/discussion on “How to Get The Best Value From Testing”.  Then he gave a talk for programmers, BAs, testers, and managers on “The Metrics Minefield”.  Finally, he did a 2.5-hour workshop on “Critical Thinking for Testers”.


My brain and pen were going the whole four days; every other sentence he uttered held some bit of testing wisdom.  I’ll post chunks of it in the near future.  I had attended the class 6 years earlier in Toronto and was concerned it would cover the same material, but fortunately most of it had changed.

The conversations before and after class were a real treat too.  After the first day, Claire Moss, Alex Kell, Michael Bolton, and I met at Fado for some Guinness, tester craic, and, much to my surprise, to listen to Michael play mandolin in an Irish traditional music session.  He turned out to be a very good musician and (of course) gave us handles for telling a slip jig from a reel.


Several days later, I’m still haunted by Michael-Bolton-speak.  I keep starting all my sentences with “it seems to me”.  But best of all perhaps, is the lingering inspiration to read, challenge, and contribute thoughtful ideas to our testing craft.  He got me charged up enough to love testing for at least another 6 years.  Thanks, Michael!

“I’m just the tester, if it doesn’t run it’s not my problem, it’s the deployment team’s problem. I can tell you how well it will work, but first you’ve got to deploy it properly.”

One of the most difficult problems to prevent is a configuration problem: a setting that is specific to production.  You can attempt perfect testing in a non-production environment, but as soon as your Config Management guys roll it out to prod with the prod config settings, the best you can do is cross your fingers (unless you’re able to test in prod).

After a recent prod server migration, my config management guys got stuck scrambling around trying to fix various prod config problems.  We had all tested the deployment scripts in multiple non-prod environments.  But it still didn’t prepare us for the real thing.

It’s too late for testers to help now.

I’ve been asking myself what I could have done differently.  The answer seems to be asking, and executing, more hypothetical questions and tests, like:

  • If this scheduled nightly task fails to execute, how will we know?
  • If this scheduled nightly task fails to execute, how will we recover?

But often I skip the above because I’m so focused on:

  • When this scheduled nightly task executes, does it do what it’s supposed to do?

The hypotheticals are difficult to spend time on because we, as testers, feel like we’re not getting credit for them.  We can’t prevent the team from having deployment problems.  But maybe we can ask enough questions to prepare them for the bad ones.

I figured it was time for a review of some modern testing terms.  Feel free to challenge me if you don’t like my definitions, which are very conversational.  I selected terms I find valuable and stayed away from terms I’m bored with (e.g., “Stress Testing”, “Smoke Testing”). 

Afterwards, you can tell me what I’m missing.  Maybe I’ll update the list.  Here we go…

Tester – Never refer to yourself as QA.  That’s old school.  That’s a sign of an unskilled tester.  By now, we know writing software is different from manufacturing cars.  We know we don’t have the power to “assure” quality.  If your title still has “QA” in it, convince your HR department to change it.  Read this for more.

Sapient Tester – A brain-engaged tester.  It is generally used to describe a skilled tester who focuses on human “testing” but uses machines for “checking”.  See James Bach’s post.

Manual Tester – A brain-dead tester.  Manual testers focus on “checking”.

Test (noun) – Something that can reveal new information.  Something that takes place in one’s brain.  Tests focus on exploration and learning.  See Michael Bolton’s post.

Check – An observation, linked to a decision rule, resulting in a bit (e.g., Pass/Fail, True/False, Yes/No).  Checks focus on confirmation.  A check may be performed by a machine or a human.  Repetition of the same check is best left to a machine, lest the tester becomes a “Manual Tester”, which is not cool.  See Michael Bolton’s posts, start here.

Developer – It takes a tester, business analyst, and programmer to develop software; even if they’re just different hats on the same person.  That means if you’re a tester, you’re also a developer.

Programmer – Person on the development team responsible for writing the product code.  They write code that ships.

Prog – Short version of “Programmer”.  See my post.

Test Automation Engineer – This is a Tester who specializes in writing automated checks.  This is the best I have so far.  But here are the problems I have with it.  Test Automation Engineers are also programmers who write code.  That means the term “Programmer” is ambiguous.  A Test Automation Engineer has the word “Test” in their title when, arguably, a test can’t be automated.

Heuristic – A fallible method for solving a problem or making a decision.  Like a rule of thumb.  It’s fallible though, so use it with care.  Why is this term in a tester dictionary?  Skilled testers use heuristics to make quick decisions during testing.  For example: a tester may use a stopping heuristic to know when to stop a test or which test to execute next.  Testers have begun capturing the way they solve problems and creating catchy labels for new heuristics.  Said labels allow testers to share ideas with other testers.  Example: the “Just In Time Heuristic” reminds us to add test detail as late as possible, because things will change.  Example: the “Jenga Heuristic” reminds us that if we remove too many dependencies from a test, it will easily fall down…instead, try removing one dependency at a time to determine the breaking point.

Test Report – Something a team member or manager may ask a tester for.  The team member is asking for a summary of a tester’s findings thus far.  Skilled testers will have a mnemonic like MCOASTER or MORE BATS to enable a quick and thorough response.

Context Driven Testing – an approach to software testing that values context. Example: when joining a new project, Context Driven testers will ask the team what level of documentation is required, as opposed to just writing a test plan because that is what they have always done.  IMO, Context Driven testers are the innovators when it comes to software testing.  They are the folks challenging us to think differently and adjust our approaches as the IT industry changes.  See Context Driven Testing.

Bug – Something that bugs someone who matters.

Issue – It may result in a bug.  We don’t have enough information to determine that yet.

Escape – A bug found in production.  A bug that has “escaped” the test environment.  Counting “escapes” may be more valuable than counting “bugs”.

Follow-on Bug – A bug resulting from a different bug.  “we don’t need to log a bug report for BugA because it will go away when BugB gets fixed”.  I first heard it used by Michael Hunter (I think).

Safety Language – Skilled testers use it to tell an honest accurate story of their testing and preserve uncertainty.  Example: “This appears to meet the requirements to some degree”, “I may be wrong”.  See my post.

Test Idea – Less than 140 characters.  Exact steps are not necessary.  The essence of a test should be captured.  Each test idea should be unique within its set.  The purpose is to plan a test session without spending too much time on details that may change.  Test Ideas replace test cases on my team.

Test Case Fragment – see “Test Idea”.  I think they are the same thing.

AUT – Application Under Test.  The software testers are paid to test.  See my post and read the comments to see why I like AUT better than competing terms.

Showstopper – An annoying label, usually used to define the priority of bugs.  It is typically overused and results in making everything equally important.  See my post.

Velocity, Magnitude, Story Points – Misunderstood measurements of work on agile development teams.  Misunderstood because Agile consultants do such a poor job of explaining them.  So just use these terms however you want and you will be no worse off than most Agile teams.

Session-Based-Test-Management (SBTM) – A structured approach to Exploratory Testing that helps testers be more accountable.  It involves dividing up test work into time-based charters (i.e., missions), documenting your test session live, and reviewing your findings with a team member.  The Bach brothers came up with this, I think.  Best free SBTM tool, IMO, is Rapid Reporter.

Come on testers, let’s make up our minds and all agree on one term to refer to the software we are testing.  The variety in use is ridiculous.

I’ve heard the following used by industry experts:

  • PUT (Product Under Test)
  • SUT (System Under Test)
  • AUT (Application Under Test)
  • Product, Software, Application, etc.

Today I declare “SUT” the best term for this purpose! 

Here’s my reasoning: “PUT” could be mistaken for a word, not an acronym.  “AUT” can’t easily be pronounced aloud.  “SUT” could be translated to “Software Under Test” or “System Under Test”, but each honors the intent.  The software we are paid to test is a “Product”…but so are Quick Test Pro, Visual Studio, and SQL Server.

“What’s the big deal with this term?” you ask.  Without said term, we speak ambiguously to our team members because we operate and find bugs in all classes of software:

  • the software we are paid to test
  • the software we write to test the software we are paid to test (automation)
  • the software we write our automation with (e.g., Selenium, Ruby)
  • the software we launch the software we are paid to test from (e.g., Windows 7, iOS)

If we agree to be specific, let’s also agree to use the same term.  Please join me and start using “SUT”.

When bugs escape to production, does your team adjust?

We started using the following model on one of my projects.  It appears to work fairly well.  Every 60 days we meet and review the list of “escapes” (i.e., bugs found in production).  For each escape, we ask the following questions:

  1. Could we do something to catch bugs of this nature?
  2. Is it worth the extra effort?
  3. If so, who will be responsible for said effort?

The answer to #1 is typically “yes”. Creative people are good at imagining ultimate testing. It’s especially easy when you already know the bug.  There are some exceptions though. Some escapes can only be caught in production (e.g., a portion of our project is developed in production and has no test environment).

The answer to #2 is split between “yes” and “no”.  We may say “yes” if the bug has escaped more than once, significantly impacts users, or when the extra effort is manageable.  We may say “no” when a mechanism is in place to alert our team of the prod error; we can patch some of these escapes before they affect users, with less effort than required to catch them in non-prod environments.

The answer to #3 falls to Testers, Programmers, BAs, and sometimes several or all of them.

So…when bugs escape to production, does my team adjust?  Sometimes.

We had a seemingly easy feature to test: users should be able to rearrange columns on a grid.  My test approach was to just start rearranging columns at random.

My colleague’s test approach was different.  She gave herself a nonsensical user scenario to complete.  Her scenario was to rearrange all the columns to appear in alphabetical order (by column header label) from left to right.   Pretty stupid, I thought to myself.  Will users ever do that? No.  And it seems like a repetitive waste of time.

Since I had flat-lined with my own approach, I tried her nonsensical user scenario myself…figured I’d see how stupid it was.  As I progressed through the completion of the nonsensical user scenario, it started opening test case doors:

  • I’m getting good at this rearranging column thing, maybe I can go faster…wait a minute, what just happened?
  • I’ve done this step so many times, maybe I can pay more attention to other attributes like the mouse cursor…oh, that’s interesting.
  • There’s no confusion about what order I’ve placed the columns in, now I can easily check that they remained in that order.
  • I’m done with letter “E”.  I think I saw a column starting with a letter “F” off the screen on the far right.  I’m going to have to use the horizontal scroll bar to get over there.  What happens when I drag my “F” column from the right to the left and then off the screen?

Now I get it!  The value in her nonsensical user scenario was to discover test cases she may not have otherwise discovered.  And she did.  She found problems placing a column halfway between the left-most and right-most columns.

A nonsensical user scenario gives us a task to go perform on the system under test.  Having this task may open more doors than mere random testing.

…is the checks, stupid.

It doesn’t matter what framework you are using, what language you write them in, or how many you have.  What matters is how effectively your automated checks help determine ship decisions.

  1. What should your automated checks observe?
  2. What decision rules should your automated checks use to determine pass/fail? 

Those are the two hardest questions to answer.  You can’t Google them.  You can’t ask your testing mentors.  You’re tempted to hide your decisions when they’re poorly conceived (because you know few will ask).  You’re tempted to focus on what you know people will ask, the questions with the shortest answers:  How many automated checks?  Did they pass?

But, what are they checking?  You know that’s what matters.  Start building your automated check suite there.  The rest can follow.
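For illustration only, here is a tiny sketch of what making the observation and the decision rule explicit might look like.  The function names and the order number are made up; get_order_total() and sum_of_line_items() are stand-ins for whatever your SUT and your oracle actually expose.

    # Sketch: an automated check with an explicit observation and decision rule.
    def get_order_total(order_id):
        # Placeholder: ask the SUT what it thinks the order total is.
        return 42.00

    def sum_of_line_items(order_id):
        # Placeholder oracle: compute the total independently (e.g., straight from the DB).
        return 42.00

    def check_order_total(order_id):
        observed = get_order_total(order_id)    # the observation
        expected = sum_of_line_items(order_id)  # the independent expectation
        return observed == expected             # the decision rule: one bit, pass or fail

    print(check_order_total(order_id=10054))  # True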

In reference to my When Do We Need Detailed Test Cases? post, Roshni Prince asked:

“when we run multiple tests in our head… [without using detailed test cases] …how can we be really sure that we tested everything on the product by the end of the test cycle?”

Nice question, Roshni.  I have two answers.  The first takes your question literally.

  • …We can’t.  We’ll never test everything by the end of the test cycle.  Heck, we’ll never test everything in an open-ended test cycle.  But who cares?  That’s not our goal.
  • Now I’ll answer what I think you are really asking, which is “without detailed test cases, how can we be sure of our test coverage?”.  We can’t be sure, but IMO, we can get close enough using one or more of the following approaches:
    1. Write “test ideas” (AKA test case fragments).  These should be less than the size of a Tweet.  These are faster than detailed test cases to write/read/execute and more flexible.
    2. Use Code Coverage software to visually analyze test coverage.
    3. Build a test matrix using Excel or another table.
    4. Use a mind map to write test ideas.  Attach it to your specs for an artifact.
    5. Use a Session Based Test Management tool like Rapid Reporter to record test notes as you test.
    6. Use a natural method of documenting test coverage.  By “natural” we mean something that will not add extra administrative work.  Regulatory compliance expert and tester Griffin Jones has used audio and/or video recordings of test sessions to pass rigorous audits.  He burns these to DVD and has rock-solid coverage information without the need for detailed test cases.  Another approach is to use keystroke capture software.
    7. Finally, my favorite when circumstances allow; just remember!  That’s right, just use your brain to remember what you tested.  Brains rock!  Brains are so underrated by our profession.  This approach may help you shine when people are more interested in getting test results quickly and you only need to answer questions about what you tested in the immediate future…like today!  IMO, the more you enjoy your work as a tester, the more you practice testing, the more you describe your tests to others, the better you’ll recall test coverage from your brain.  And brains record way more than any detailed test cases could ever hope to.

In my Don’t Give Test Cases To N00bs post I tried to make the argument against writing test cases as a means of coaching new testers.  At the risk of sounding like a test case hater, I would like to suggest three contexts that may benefit from detailed test cases.

These contexts do not include the case of a mandate (e.g., the stakeholder requires detailed test cases and you have no choice).

  1. Automated Check Design: Whether a sapient tester is designing an automated check for an automation engineer or an automation engineer is designing the automated check herself, detailed test cases may be a good idea.  Writing detailed test cases will force tough decisions to be made prior to coding the check.  Decisions like: How will I know if this check passes?  How will I ensure this check’s dependent data exists?  What state can I expect the product-under-test to be in before the check’s first action?
  2. Complex Business Process Flows:  If your product-under-test supports multiple ways of accomplishing each step in its business process flows, you may want to spec out each test to keep track of test coverage.  Example: Your product’s process to buy a new widget requires 3 steps.  Each of the 3 steps has 10 options.  Test1 may be: perform Step1 with Option4, perform Step2 with Option1, then perform Step3 with Option10.  (A small sketch of enumerating these combinations follows this list.)
  3. Bug Report Repro Steps: Give those programmers the exact footprints to follow, or else they’ll reply, “works on my box”.
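Regarding context #2, one hedged way to keep track of those combinations is to enumerate them with a few lines of code.  The step and option names below are hypothetical placeholders from the widget example.

    # Sketch: enumerate test cases for a 3-step flow where each step has 10 options.
    from itertools import product

    steps = {
        "Step1": [f"Option{i}" for i in range(1, 11)],
        "Step2": [f"Option{i}" for i in range(1, 11)],
        "Step3": [f"Option{i}" for i in range(1, 11)],
    }

    # The full cartesian product is 10 * 10 * 10 = 1000 combinations -- usually far too
    # many to execute, but listing them makes it easy to track which subset you covered.
    all_cases = list(product(*steps.values()))
    print(len(all_cases))  # 1000

    # Test1 from the example above:
    test1 = ("Option4", "Option1", "Option10")
    print(test1 in all_cases)  # True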

Those are the three contexts I write detailed test cases for.  What about you?

In response to my What I Love About Kanban As A Tester #1 post, Anonymous stated:

“The whole purpose of documenting test cases…[is]…to be able to run [them] by testers who don’t have required knowledge of the functionality.”

Yeah, that’s what most of my prior test managers told me, too…

“if a new tester has to take over your testing responsibilities, they’ll need test cases”

I wouldn’t be surprised if a secret QA manager handbook went out to all QA managers, stating the above as the paramount purpose of test cases.  It was only recently that I came to understand how wrong all those managers were.

Before I go on, let me clarify what I mean by “test cases”.  When I say “test cases”, I’m talking about something with steps, like this:

  1. Drag ItemA from the catalog screen to the new order screen.
  2. Change the item quantity to “3” on the new order screen.
  3. Click the “Submit Order” button.

Here’s where I go on: 

  • When test cases sit around, they get stale.  Everything changes…except your test cases.  Giving these to n00bs is likely to result in false fails (and maybe even rejected bug reports).
  • When test cases are blindly followed, we miss the house burning down right next to the house that just passed our inspection.
  • When test cases are followed, we are only doing confirmatory testing.  Even negative (AKA “unhappy”) paths are confirmatory testing.  If that’s all we can do, we are one step closer to shutting down our careers as testers.
  • Testing is waaaay more than following steps.  To channel Bolton, a test is something that goes on in your brain.  Testing is more than answering the question, “pass or fail?”.  Testing is sometimes answering the question, “Is there a problem here?”. 
  • If our project mandates that testers follow test cases, for Pete’s sake, let the n00bs write their own test cases.  It may force them to learn the domain.
  • Along with test cases comes administrative work.  Perhaps time is better spent testing.
  • If the goal is valuable testing from the n00b, wouldn’t that best be achieved by the lead tester coaching the n00b?  And if that lead tester didn’t have to write test cases for a hypothetical n00b, wouldn’t that lead tester have more time to coach the hypothetical n00b, should she appear?  Here’s a secret: she never will appear.  You will have a stack of test cases that nobody cares about; not even your manager.

In my next post I’ll tell you when test cases might be a good idea.

...regression testing is optional.  What?  The horror!!!

Back in the dark ages, with Scrum, we were spending about 4 days of each iteration regression testing.  This product has lots of business logic in the UI, lots of drag-and-drop-type functionality, and very dynamic data, so it has never been a good candidate for automation.  Our regression test approach was to throw a bunch of humans at it (see my Group Regression Testing and Chocolate post).  With Scrum, each prod deployment was a full build, including about 14 modules, because lots of stuff got touched.  Thus, we always did a full regression test, touching all the modules.  Even after an exhaustive regression test, we generally had one or two “escapes” (i.e., bugs that escaped into production).

Now, ask me how often those regression tests failed.  …Not very often.  And, IMO, that is a lot of waste.

With Kanban, each prod release only touches one small part of the product.  So we are no longer doing full builds.  Think of it like doing a production patch.  We’ve gotten away from full regression tests because, with each release, we are only changing one tiny part of the product.  It is far less risky.  Why test the hell out of bits that didn’t change? 

So now we regression test based on one risk: the feature going to prod.  Sometimes that means an hour of regression tests.  Sometimes it means no regression tests.  So far, it’s a net reduction in time spent on regression tests.  And that is a good thing.

We switched to Kanban in February.  So far, not a single escape has made it to prod (yes, I’m knocking on wood).

This success may just be a coincidence.  Or…maybe it’s easier for teams to prevent escaped bugs when those teams can focus on one Feature at a time.   Hmmmmmm…

For those of you writing automated checks and giving scrum reports, status reports, test reports, or some other form of communication to your team, please watch your language…and I'm not talking about swearing.

You may not want to say, “I found a bunch of issues”, because sometimes when you say that, what you really mean is, “I found a bunch of issues in my automated check code” or “I found a bunch of issues in our product code”.  Please be specific.  There is a big difference and we may be assuming the wrong thing.

If you often do checking by writing automated checks, you may not want to say, “I’m working on FeatureA”, because what you really mean is “I’m writing the automated checks for FeatureA and I haven't executed them or learned anything about how FeatureA works yet” or “I’m testing FeatureA with the help of automated checks and so far I have discovered the following…”

The goal of writing automated checks is to interrogate the system under test (SUT), right?  The goal is not just to have a bunch of automated checks.  See the difference?

Although your team may be interested in your progress creating the automated checks, they are probably more interested in what the automated checks have helped you discover about the SUT.

It’s the testing, stupid.  That’s why we hired you instead of another programmer.

...no testing deadlines…the freedom to test as long as I want.

Back in the dark ages, with Scrum, all the sprint Features had to be tested by the end of the iteration.  Since programming generally continued until the last minute (we couldn’t help ourselves), testers were sometimes forced to cut corners.  Even in cases where the whole team (e.g., programmers, BAs) jumped in to help test, there was still a tendency to skimp on testing that would otherwise be performed.  The team wants to be successful.  Success is more easily measured by delivered Features than by Feature quality.  That’s the downside of deadlines.

With Kanban, there are no deadlines…you heard me!  Testers take as long as they need.  If the estimates are way off, it doesn’t leave gaps or crunches in iterations.  There are no iterations!  Warning: I fear unskilled testers may actually have a difficult time with this freedom.  Most testers are used to being told how much time they have (i.e., “The Time’s Up! Heuristic”).  So with Kanban, other Stopping Heuristics may become more important.

Jon Bach walked up to the podium and (referring to his own readiness as the presenter) asked us how to tell the difference between a tester and a programmer: a programmer would say, “I’m not ready for you guys yet”.

STPCon Spring 2012 kicked off with the best keynote I have seen yet.  Jon took on the recent Test-Is-Dead movement using a Journalism-Is-Dead metaphor.

He opened by asking, “Did anyone get a ‘USA Today’ delivered to their room this morning?”

“No.”  (Something I, as a tester, was embarrassed not to have noticed.)

And after a safety language exercise, Jon presented a fresh testing definition, which reflects his previous career, journalism:

Testing is an interrogation and investigation in pursuit of information to aid evaluation.

Jon wondered out loud what had motivated the Test-Is-Dead folks.  “Maybe there is a lot of bad testing in our midst.”  And he proceeded to examine about 7 threats (I think there were more) that he believed could actually make testing dead.  Each testing threat was reinforced with its metaphorical journalism threat and coupled with a quote from the Test-Is-Dead folks. 

(I’ve listed each threat to testing first, followed by its journalism counterpart.  An example Test-Is-Dead quote is included for threat #3 below.)

  1. (threat to testing) If the value of testing becomes irrelevant – (threat to journalism) If we stop caring about hearing the news of what is happening in the world. (implied: then testing and journalism are dead)
  2. If the quality of testing is so poor that it suffers an irreversible “reputation collapse event”.  If “journalist” comes to mean “anybody who writes” (e.g., blogs, tweets, etc.).
  3. If all users become early adopters with excellent technical abilities.  If everyone becomes omniscient and already knows today’s weather and tomorrow’s economic news.

    For this threat, the Test-Is-Dead quote was from James Whittaker, “You are a tester pretending to be a user”.  The context of Whittaker’s statement was that testers may not be as important because they are only pretending to be users, while modern technology may allow actual users to perform the testing.  Bach’s counterpoint was: since not all users may want to be testers and not all users may possess the skills to test, there may still be value for a tester role.
  4. If testers are forced to channel all thoughts and intelligence through a limited set of tools and forced to only test what can be written as “executable specifications”.  If journalists could only report what the state allows.  Jon gave the example of the Egyptian news anchor who had just resigned from state media after 20 years, citing what she called a “lack of ethical standards” in the media’s coverage of the Arab Spring.
  5. If all the tests testers could think of were confirmatory.  If journalists never dug deeper (e.g., if they always assumed the story was just a car crash).
  6. If software stops changing and there is no need to ask new questions.  If the decisions people made today no longer depend on the state of the economy, weather, who they want to elect, etc.
  7. If the craft of testing is made uninviting, turned into a boring clerical activity that smart, talented, motivated, creative people are not interested in.  If you had to file a “news release approval” form or go through the news czar for every news story you told.

Jon’s talk had some other highlights for me:

  • He shared a list of tests he performed on eBay’s site prior to his eBay interview (e.g., can I find the most expensive item for sale?).  Apparently, he reported his test results during the interview.  This is an awesome idea.  If anyone did that to me, I would surely hire them.
  • He also showed a list of keynote presentation requirements he received from STPCon.  He explained how these requirements (e.g., try to use humor in your presentation) were like tests.  Then he used the same metaphor to contrast those “tests” with “checks”: Am I in the right room?  Is the microphone on?  Do I have a glass of water?

Jon concluded where he started.  Although newspapers may be dead, journalism is not; those journalists are simply reporting the news differently.  And maybe it’s time to cut unskilled testers loose as well.  But, according to Jon, the need for exploration and sapience in testing is more important than ever in a rapid development world.

Hey conference haters.  Maybe it’s you…

I just got back from another awesome testing conference, Spring STPCon 2012 in New Orleans.  Apparently not all attendees shared my positive experience.  Between track sessions I heard the usual gripes:

“It’s not technical enough!”

“I expected the presenter to teach me how to install a tool and start writing tests with it.”

“It was just another Agile hippy love fest.”

“He just came up with fancy words to describe something I already do.”

I used to whine with the best of them.  Used to.  But now I have a blast and return full of ideas and inspiration.  Here are my suggestions on how to attend a testing conference and get the most out of it:

  • Look for ideas, not instructions.  Adjust your expectations.  You are not going to learn how to script in Ruby.  That is something you can learn on your own.  Instead, you are going to learn how one tester used Ruby to write automated and manual API-layer REST service checks.
  • Follow the presenters.  Long before the conference, select the track sessions you are interested in.  Find each presenter’s testing blog and/or Twitter name and follow them for several weeks.  Compare them and discard the duds.
  • Talk to the presenters.  At the conference, use your test observation skills to identify presenters.  Introduce yourself and ask questions related to your project back at the office.  If you did my second bulleted suggestion above, you now have an ice-breaker, “Hey, I read your blog post about crowd source testing, I’m not sure I agree…”.
  • Attend the non-track-session stuff too.  I think track sessions are the least interesting part of conferences.  The most interesting, entertaining, and easily digestible parts are the Lightning Talks, Speed Geeking, Breakfast Bytes, meal discussion tables, tester games, and keynotes.  Don’t miss these.
  • Take notes.  Take waaaaaay more notes than you think you need.  I bring a little book and write non-stop during presentations.  It keeps me awake and engaged.  I can flip through said book on the plane, even when forced to turn off all personal electronics.
  • Log Ideas.  Sometimes ideas are directly given during presentations.  But mostly, they come to you while applying information from presentations to your own situation.  I write the word “IDEA” in my book, followed by the idea.  Sometimes these ideas have nothing to do with the presentation context.
  • Don’t flee the scene.  When the conference ends each day, stick around.  You’ll generally find the big thinkers, still talking about testing in an informal hallway discussion.  I am uncomfortable in social situations and always feel awkward/intimidated by these folks but they are generally thrilled to bend your ear.
  • Mix and mingle.  Again, I find parties and social situations extremely scary.  Despite that fear, I almost always make it a point to eat my conference meal with a group of people I’ve never seen before.  It always starts awkward but it ends with some new friends, business cards, and the realization that other testers are just as unsophisticated as I am.
  • Submit a presentation.  If you hated one or more track sessions, channel that hate into your own presentation.  Take all the things you hated and do the opposite.  I did.  I got sick of always seeing consultants, vendors, and people who work for big fancy software companies.  So I pitched the opposite.  The real trick here is if you get accepted, the conference is free.  Let’s see your boss turn that one down.
  • Play tester games or challenges.  If James Bach, Michael Bolton, or any of the other popular context-driven approach testers are attending the conference, tell them you are interested in playing tester games.  They are usually happy to coach you on testing skills in a fun way.  It may be a refreshing break from track sessions.
  • Write a thank you card to your boss.  Don’t send an email.  Send something distinctive.  Let them know how much you appreciate their training money.  Tell them a few things you learned.  Tell them about my next bullet.
  • Share something with your team.  The prospect of sharing your conference takeaways with your team will keep you motivated to learn during the conference and help you put those ideas to use.

What do you do to get the most out of your testing conference experiences?

A tester made an interesting observation yesterday: all the testing job positions she came across required test automation experience.

She then stated that all the companies she has worked for have attempted test automation and failed.  And even though she was involved in those test automation efforts, she has yet to accumulate a test automation success story, the kind she might tell in a job interview…unless she lies, of course (which I suspect is common).

This paradox may not exist in software companies (i.e., companies whose main product is software) because they probably throw enough money at test automation to make it successful.  But those of us working in the IT basements of non-software companies, on shoestring budgets, find the test automation experience paradox all too real.

...a feeling of accomplishment, directly related to my work ethic.

Back in the dark ages, with Scrum, the fruits of my testing were only given to my users once a month. It was awkward to stop testing FeatureA and start testing FeatureB because I felt no sense of closure with FeatureA. There was always a feeling that if I thought of a new FeatureA test, I could cram it in. It was a very non-committal feeling. Often, the feeling was, “I’ll finish these tests later”. And as the end of the iteration approached, it became, “wow, where did the time go?”.

With Kanban, when I complete FeatureA’s testing, it goes straight to the users. I feel a sense of accomplishment. The production deployment is the reward for my hard work…the closure…full commitment. I feel I am in complete control over how quickly FeatureA moves through development. The harder I work at it and the better I test, the more I focus, the quicker it goes out. I’m motivated to “get ‘er done”, as they say here in the south. But I also have the freedom to do more testing, if we need it.

…doing my tests in a focused chunk, then never again!

Back in the dark ages, with Scrum, we would do the bulk of our testing in a development environment.  At some point near the end of the iteration, we would wrap everything up in a build, deploy it to a QA environment, and test some of the items again…enough to make sure they were deployed properly and played nicely with other critical features.  Once in QA, we had to dig up previously executed tests, set them up again, and try to remember what we had learned about our tests weeks earlier.

With Kanban, we complete our testing in focused chunks.  We still do the bulk of our testing in the development environment.  But when we’re done with that feature, we deploy it (that same day) to a QA environment, and do any additional testing immediately.  This is soooooooo much easier because the tests are fresh in our minds, our SQL scripts are probably still open, and other recent tools are all prepped and ready to go.  When we’re done, we deploy to production and those tests can leave our minds (sometimes forever) to make room for the next Feature.

A Test this Blog reader asked,

“Every few years we look at certifications for our testers. I'm not sure that the QA/testing ones carry the same weight as those for PMs or developers. Do you have any advice on this?”

I’ll start an answer with my opinion, and maybe some of my readers will chime in to finish it.

The only software testing certification I’ve tried to get was from IIST. Read my post, Boycott the International Institute for Software Testing, to understand why I gave up. 

Ever since, I’ve been learning enough to stay engaged and passionate about software testing without certifications.  I’ve been learning at my own pace, following my own needs and interests, by reading software testing blogs, books, thinking, and attending about one testing conference (e.g., CAST, STAR, STPCon) per year.  My “uncertified” testing skills have been rewarded at work via promotions, and this year I will be speaking at my third test conference.  This pace has been satisfying enough for me…sans certifications.

I tend to follow the testers associated with the Context-Driven Testing school/approach.  These testers have convinced me certifications are not the best way to become a skilled tester.  Certifications tend to reward memorization rather than learning test skills you can use.  The courses Context-Driven Testers seem to rally around (I’m not sure if they are considered certifications) are the online Black Box Software Testing courses: Foundations, Bug Advocacy, and Test Design.  I planned to enroll in the Foundations course this year, but I have my first baby coming, so I’ve wimped out on several ambitions, including that one.

So, as a fellow Test Manager, I do not encourage certifications for my testers.  Instead, I encourage their growth in other ways:

  • This year we are holding a private Rapid Software Testing course for our testers.
  • I encourage (and sometimes force) my testers to participate in an in-house, testers-teaching-test-skills training session we hold every month.  Each tester is asked to figure out what they are good at and share it with the other testers for an hour.
  • We have a small QA Library.  We try to stock it with the latest testing books. I often hand said books to testers when the books are relevant to each tester’s challenges.
  • I encourage extra reading, side projects, and all non-project test-related discussions.
  • We encourage testers to attend conferences and share what they learned when they return.
  • We attend lots of free webinars.  Typically, we’re disappointed and we rip on the presenters, but we still leave the webinar with some new tidbit.

So maybe this will give you other ideas.  Let’s see if we get some comments that are for or against any specific certifications.

You’re probably a good leader just to be asking and thinking about this in the first place.  Thanks for the question. 

I believe testers have the power to either slow down the rate of production deployments or speed them up, without adversely affecting their testing value.

  1. My test career began as a “Quality Cop”.  I believed a large responsibility of my job was preventing things from going to production. 
  2. After taking Michael Bolton’s Rapid Software Testing class, I stopped trying to assure quality, stopped being the team bottleneck, and became a tester.  At this point I was indifferent to what went to production and when.  I left it in the hands of the stakeholders and did my best to give them enough information to make their decision obvious.
  3. Lately, I’ve become an “Anti-bottleneck Tester”.  I think it’s possible to be an excellent tester, while at the same time, working to keep changes flowing to production.  It probably has something to do with my new perspective after becoming a test manager.  But I still test a considerable amount, so I would like to think I’m not completely warped in the head yet.

Tell me if you agree.  The following are actions testers can take to help things flow to production quicker.

  • When you’re testing new FeatureA and you find bugs that are not caused by the new code (e.g., the bug exists in production), make this clear.  The bug should probably not slow down FeatureA’s prod deployment.  Whether it gets fixed or not should probably be decoupled from FeatureA’s path.  The tester should point this out.
  • Be a champion of flushing out issues before they hit the programmer’s desk.  Don’t get greedy and keep them to yourself.  Don’t think, “I just came up with an awesome test, I know it’s going to fail!”  No no no, tester!  Bad tester!  Don’t do this.  Go warn somebody before they finish coding.
  • Be proactive with your test results.  Don’t wait 4 days to tell your stakeholders what you discovered.  Tell them what you know today!  You may be surprised.  They may say, “thanks, that’s all we really needed to know, let’s get this deployed”.
  • Help your programmers focus.  Work with them.  I’m NOT talking about pair programming.  When they are ready for you to start testing, start testing!  Give them immediate feedback, keep your testing focused on the same feature.  Go back and forth until you’re both done.  Then wrap it up and work on the next one… together.  When possible, don’t multi-task between user stories.
  • Deployments are something to celebrate, not fear.  This relates more to Kanban than Scrum.  If you have faith in your testing then don’t fear deployments.  We have almost daily deployments on my Kanban project now.  This has been a huge change for testers who are used to 4 week deployments.  Enthusiastic testers who take pride in rapid deployments can feel a much needed sense of accomplishment and spread the feeling to the rest of the team.
  • Don’t waste too much time on subjective quality attributes.  Delegate this testing to users or other non-testers who may be thrilled to help. 
  • Don’t test things that don’t need testing.  See my Eight Things You May Not Need To Test post.

Every other development team is running around whining “we’re overworked”, “our deadlines are not feasible”.  Testers have the power to influence their team’s success.  Why not use it for the better?

Last week we celebrated two exciting things on one of my project teams:

  1. Completing our 100th iteration (having used ScrumBut for most of it).
  2. Kicking off the switch to Kanban.

Two colleagues and I have been discussing the pros and cons of switching to Kanban for months.  After convincing ourselves it was worth the experiment, we slowly got buy-in from the rest of the project team and…here we go!

Why did we switch?

  • Our product’s priorities change daily and in many cases users cannot wait until the iteration completes.
  • Scrum came with a bunch of processes that never really helped our team.  We didn’t need daily standups, we didn’t like iteration planning, we spent a lot of time breaking up stories and arguing about how to calculate iteration velocity.  We ran out of problems to discuss in retrospectives and in some cases (IMO) forced ourselves to imagine new ones just to have something to discuss.
  • We’re tired of fighting the work gaps at the start and end of iterations (i.e., testers are bored at the iteration start and slammed at the end, programmers are bored at the iteration end and slammed at the start).
  • Deploying entire builds filled with lots of new Features forced us to run massive regression tests and to deploy during weekend maintenance windows (causing us to work weekends and forcing our users to wait until the weekend for their Features).
  • Change is intellectually stimulating.  This team has been together for 6 years and change may help us to use our brains again to make sure we are doing things for the right reasons.  One can never know if another approach works better unless one tries it.

As I write this, I can hear all the Scrum Masters crying out in disgust, “You weren’t doing Scrum correctly if it didn’t work!”  That’s probably true.  But I’ll give part of the blame to the Scrum community, coaches, and consultants.  I think you should strive to do a better job of explaining Scrum to the software development community.  I hear conflicting advice from smart people frequently (e.g., “your velocity should go up with each iteration”, “your velocity should stay the same with each iteration”, “your velocity should bounce around with each iteration”).

When I was a young kid, my family got the game “Video Clue”.  We invited my grandpa over to play and we all read through the instructions together.  After being confused for a good 30 minutes, my grandpa swiped the pieces off the table and said, “anything with this many rules can’t possibly work”.

Anybody else out there using Kanban?

You don’t need bugs to feel pride about the testing service you provide to your team.  That was my initial message for my post, Avoid Trivial Bugs, Report What Works.  I think I obscured said message by covering too many topics in that post so I’ll take a more focused stab at said topic.

Here is a list of things we (testers) can do to help feel pride in our testing when everything works and we have few to no bugs to report.  Here we go…

  1. Congratulate your programmers on a job well done.  Encourage a small celebration.  Encourage more of the same by asking what they did differently.  Feel pride with them and be grateful to be a part of a successful team.
  2. If you miss the ego boost that accompanies cool bug discovery, brag about your coolest, most creative, most technical test.  You were sure the product would crash and burn but to your surprise, it held up.  Sharing an impressive test is sometimes enough to show you’ve been busy.
  3. Give more test reports (or start giving them).  A test report is a method of summarizing your testing story.  You did a lot.  Share it.
  4. Focus on how quickly whatever you tested has moved from development to production.  Your manager may appreciate this even more than the knowledge that you found a bunch of bugs.  Now you can test even more.
  5. Start a count on a banner or webpage that indicates how many days your team has gone without bugs.
  6. If the reason you didn’t find bugs is because you helped the programmer NOT write bugs from the beginning, then brag about it in your retrospective.
  7. Perform a “self check”; ask another team member to see if they can find any bugs in your Feature.  If they can’t find bugs, you can feel pride in your testing.  If they can find bugs, you can feel pride in the guts it took to expose yourself to failure (and learn another test idea).

What additions can you think of?

Ilya Kurnosov asked an interesting question regarding my Avoid Trivial Bugs, Report What Works post. He was referring to my statement:
 
"Instead of figuring out what works, they are stuck investigating what doesn’t work.”

Ilya asked:

Why did you use "stuck" referring to context of the other testers? Isn't "investigating what doesn’t work" more important than "figuring out what works" (other factors being equal)?

I love that question.  It really made me think.  Here is my answer:

  • If stuff doesn’t work, then investigating why it doesn’t work may be more important than figuring out what works.
  • If we’re not aware of anything that is broken, then figuring out what else works (or what else is not broken) is more important than investigating why something doesn’t work…because there is nothing broken to investigate.

When testers spend their time investigating things that don’t work, rather than figuring out what does work, it is less desirable than the opposite.  Less desirable because it means we’ve got stuff that doesn’t work!  Less desirable to whom?  To the development team.  It means there are problems in the way we are developing software.

An ultimate goal would be bug free software, right?  If skilled testers are not finding any bugs, and they are able to tell the team how the software appears to work, that is a good thing for the development team.  However, it may be a bad thing for the tester. 

  • Many testers feel like failures if they don’t have any issues to investigate. 
  • Many testers are not sure what to do if they don’t have any issues to investigate. 
  • If everything works, many testers get bored.
  • If everything works, there are fewer hero opportunities for many testers. 

I don’t believe things need to be that way.  I’m interested in exploring ways to have hero moments by delivering good news to the team.  It sounds so natural but it isn’t.  As a tester, it is soooooo much more interesting to tell the team that stuff just doesn’t work.  Now that’s dysfunctional.  Or is it?

And that is the initial thought that sparked my Avoid Trivial Bugs, Report What Works post.

Thanks, Ilya, for making me think.


