FeatureA will be ready to test soon.  You may want to think about how you will test FeatureA.  Let’s call this activity “Test Planning”.  In Test Planning, you are not actually interacting with the product-under-test.  You are thinking about how you might do it.  Your Test Planning might include, but is not limited to, the following:

  • Make a list of test ideas you can think of.  A Test Idea is the smallest amount of information that can capture the essence of a test.
  • Grok FeatureA:  Analyze the requirements document.  Talk to available people.
  • Interact with the product-under-test before it includes FeatureA.
  • Prepare the test environment data and configurations you will use to test.
  • Note any specific test data you will use.
  • Determine what testing you will need help with (e.g., testing someone else should do).
  • Determine what not to test.
  • Share your test plan with anyone who might care.  At least share the test ideas (first bullet) with the product programmers while they code.
  • If using automation, design the check(s).  Stub them out.
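
For that last bullet, a stubbed-out check might look something like this.  A minimal sketch only: I’m assuming pytest, and every name here is invented rather than anything FeatureA actually specifies:

    import pytest

    # Test ideas captured as skipped stubs before FeatureA exists.
    # The suite itself remembers our Test Planning until we fill
    # the bodies in.

    @pytest.mark.skip(reason="FeatureA not testable yet")
    def test_featurea_accepts_valid_input():
        ...

    @pytest.mark.skip(reason="FeatureA not testable yet")
    def test_featurea_rejects_duplicate_submission():
        ...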

All the above are Test Planning activities.  About four of them resulted in something you wrote down.  If you wrote those things in one place, you have an artifact.  That artifact can be thought of as a Test Plan.  As you begin testing (interacting with the product-under-test), I think you can use the Test Plan in one of two ways:

  1. Morph it into “Test Notes” (or “Test Results”).
  2. Refer to it then throw it away.

Either way, we don’t need the Test Plan after the testing, just as we don’t need the outputs of the other Test Planning activities above after the testing.  Plans are more useful before the thing they plan.

Execution is more valuable than a plan.  A goal of a skilled tester is to report on what was learned during testing.  The Test Notes are an excellent way to do this.  Attach the Test Notes to your User Story.  Test Planning is throwaway.

My data warehouse team is adopting automated checking.  Along the way, we are discovering some doubters.  Doubters are a good problem.  They challenge us to make sure automation is appropriate.  In an upcoming meeting, we will try to answer the question in this blog post title.

My short answer:  Yes.

My long answer:  See below.

The following are data warehouse (or database) specific:

  • More suited to machines – Machines are better than humans at examining lots of data quickly. 
  • Not mentally stimulating for humans – (This is the other side of the above reason.)  Manual DB testers are hard to find.  Testers tend to like front-ends, so they gravitate toward app dev teams.  DB testers need technical skills (e.g., DB dev skills), and people who have those skills prefer to do DB dev work.
  • Straightforward, repeatable automation patterns – For each new dimension table, we normally want the same types of automated checks (see the sketch after this list).  This makes automated check design easier and the checks faster to code.  The entire DW automation suite contains fewer design patterns than the average application.
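
To illustrate that last bullet, the repeatable pattern might be parameterized so each new dimension table is one line of configuration.  A sketch only: I’m assuming pytest, a db fixture that wraps the warehouse connection, and invented table/column names:

    import pytest

    # One pattern, many tables: a new dimension table just gets
    # added to this list.  Names are invented for illustration.
    DIMENSIONS = [
        ("dim_customer", "customer_key"),
        ("dim_product", "product_key"),
    ]

    @pytest.mark.parametrize("table,key", DIMENSIONS)
    def test_surrogate_key_is_unique(db, table, key):
        # db is an assumed fixture exposing a query() helper.
        dupes = db.query(
            f"SELECT {key} FROM {table} GROUP BY {key} HAVING COUNT(*) > 1"
        )
        assert not dupes, f"{table}: duplicate surrogate keys: {dupes}"

    @pytest.mark.parametrize("table,key", DIMENSIONS)
    def test_surrogate_key_has_no_nulls(db, table, key):
        nulls = db.query(f"SELECT COUNT(*) FROM {table} WHERE {key} IS NULL")
        assert nulls[0][0] == 0, f"{table}: NULL surrogate keys"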

The following are not limited to data warehouse (or database):

  • Time to market – Automated checks help you go faster.  Randy Shoup says it well at 9:55 in this talk: writing quick-and-dirty software leads to technical debt, which leads to no time to do it right (the “technical debt vicious cycle”); writing automated checks as you write software leads to a solid foundation, which leads to confidence, which leads to faster and better (the “virtuous cycle of quality”)…Randy’s words.
  • Regression checking – In general, machines are better than humans at indicating that something changed (see the baseline sketch after this list).
  • Get the most from your human testing - Free the humans to focus on deep testing of new features, not shallow testing of old features.
  • In case the business ever changes their mind – If you ever have to revisit code to make changes or refactor, automated checks will help you do it more quickly.  If you think the business will never change their mind, then maybe automation is not as important.
  • Documentation – Automated checks help document current functionality.
  • Easier to fix problems – Automated checks triggered in Continuous Integration find problems right after code is checked in.  These problems are usually easier to fix while fresh in a developer’s mind.
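
On the regression bullet above, the simplest machine-friendly shape is a golden-baseline comparison.  A minimal sketch, with an invented file location and result shape; the point is only that the machine flags that something changed, and a human decides whether the change was intended:

    import json
    from pathlib import Path

    BASELINE = Path("baselines/monthly_totals.json")  # invented location

    def check_against_baseline(current_rows):
        """Fail if the current result differs from the last known-good one."""
        baseline = json.loads(BASELINE.read_text())
        assert current_rows == baseline, "Result differs from the baseline"

    # On an intentional change, a human reviews the diff and re-saves
    # the baseline file; the machine never judges correctness itself.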

I read A Context-Driven Approach to Automation in Testing at the gym this morning.  I expected the authors to hate on automation but they didn’t.  Bravo.

They contrasted the (popular) perception that automation is cheap and easy because you don’t have to pay the computer, with the (not so popular) perception that automation requires a skilled human to design, code, maintain, and interpret the results of the automation.  That human also wants a paycheck.

Despite the fact that most Automation Engineers are writing superficial automation, the industry still worships automation skills, and for good reasons.  This is intimidating for testers who don’t code, especially when they find themselves working alongside automation engineers.
Here are some things I can think of that testers-who-don’t-code can do to boost their value:

  • Find more bugs – This is one of the most valued services a tester can provide.  Scour a software quality characteristics list like this to expand your test coverage and be more aggressive with your testing.  You can probably cover way more than automation engineers in a shorter amount of time.  Humans are much better at finding bugs than machines; finding bugs is not a realistic goal of automation.
  • Faster Feedback – Everybody wants faster feedback.  Humans can deliver faster feedback than automation engineers on new testing.  Machines are faster on old testing (e.g., regression testing).  Report back on what works and what doesn’t while the automation engineer is still writing new test code.
  • Give better test reports – Nobody cares about test results.  Find ways to sneak them in and make them easier to digest.  Shove them into your daily stand-up report (e.g., “based on what I tested yesterday, I learned that these things appear to be working, great job team!”).  Give verbal test summaries to your programmers after each and every test session with their code.  Give impromptu test summaries to your Product Owner.
  • Sit with your users – See how they use your product.  Learn what is important to them.
  • Volunteer for unwanted tasks – “I’ll stay late tonight to test the patch”, “I’ll do it this weekend”.  You have a personal life though.  Take back the time.  Take Monday off.
  • Work for your programmers -  Ask what they are concerned about. Ask what they would like you to test.
  • What if? – Show up at design meetings and have a louder presence at Sprint Planning meetings.  Blast the team with relentless “what if” scenarios.  Use your domain expertise and user knowledge to conceive of conflicts.  Remove the explicit assumptions one at a time and challenge the team, even at the risk of being ridiculous (e.g., what if the web server goes down?  what if their phone battery dies?).
  • Do more security testing – Security testing, for the most part, cannot be automated.  Develop expertise in this area.
  • Bring new ideas – Read testing blogs and books. Attend conferences. Tweak your processes.  Pilot new ideas. Don’t be status quo.
  • Consider Integration – Talk to the people who build the products that integrate with your product.  Learn how to operate their product and perform integration tests that are otherwise being automated via mocks. You just can’t beat the real thing.
  • Help your automation engineer – Tell them what you think needs to be automated.  Don’t be narrow-minded in determining what to automate.  Ask them which automation they are struggling to write or maintain, then offer to maintain it yourself, with manual testing.
  • Get visible – Ring a bell when you find a bug.  Give out candy when you don’t find a bug.  Wear shirts with testing slogans, etc.
  • Help code automation – You’re not a coder, so don’t go building frameworks, designing automation patterns, or even independently designing new automated checks.  Ask if there are straightforward automation patterns you can reuse with new scenarios.  Ask for levels of abstraction that hide the complicated methods and let you focus on business inputs and observations (see the sketch after this list).  Here are other ways to get involved.
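
As an example of the abstraction levels in that last bullet, an automation engineer might expose a keyword-style layer like the one below, so the non-coding tester supplies only business inputs and observations.  Everything here is hypothetical; an in-memory dict stands in for the real product so the sketch runs:

    # Hypothetical layer the automation engineer maintains.  In real
    # life these functions would drive the product's UI or API.
    _orders = {}

    def place_order(customer, product, quantity):
        _orders[customer] = {"product": product, "qty": quantity,
                             "status": "PENDING"}

    def order_status(customer):
        return _orders[customer]["status"]

    # What the non-coding tester writes: pure business language.
    def test_backordered_item_shows_pending():
        place_order("Acme Co", "widget-9", quantity=500)
        assert order_status("Acme Co") == "PENDING"
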
What am I missing?

I had a second scenario this week that gave me pause before it, too, resulted in the above practice.

ProductA is developed and maintained by ScrumTeamA, who writes automated checks for all User Stories and runs the checks in a CI.  ProductB is developed and maintained by ScrumTeamB.

ScrumTeamB developed UserStoryB, which required new code for both ProductA and ProductB.  ScrumTeamB merged the new product code into ProductA…but did NOT merge new test code to ProductA.  Now we have a problem.  Do you see it?

When ProductA deploys, how can we be sure the dependencies for UserStoryB are included?  All new product code for ProductA should probably be accompanied by new test code, regardless of which Scrum Team makes the change.
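
For instance, the test code ScrumTeamB might have merged into ProductA’s suite alongside the product code could be as small as this (a hypothetical sketch; the module and function names are invented):

    def test_userstoryb_dependency_is_present():
        # ProductA must ship the hook UserStoryB's code relies on.
        import producta.userstoryb_hooks  # hypothetical module
        assert hasattr(producta.userstoryb_hooks, "handle_b_event")

With something like that running in ProductA’s CI, a ProductA deploy missing UserStoryB’s dependencies would fail fast.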

The same practice might be suggested in environments without automation.  In other words, ScrumTeamB should probably hand over manual test scripts, lists, or test fragments, or do knowledge transfer, such that the manual testers responsible for ProductA (i.e., ScrumTeamA) can perform the testing for UserStoryB prior to ProductA deployments.

…It seems obvious until you deal with integration tests and products with no automation.  I got tripped up by this example:

ProductA calls ProductB’s service, ServiceB.  Both products are owned by the same dev shop.  ServiceB keeps breaking in production, disrupting ProductA. ProductA has automated checks.  ProductB does NOT have automated checks.  Automated checks for ServiceB might help. Where would the automated checks for ServiceB live?

It’s tempting to say ProductA, because ProductA has an automation framework with its automated checks running in Continuous Integration on merge-to-dev.  It would be much quicker to add said automated checks to ProductA than to ProductB.  However, said checks wouldn’t help because they would run in ProductA’s CI.  ProductB could still deploy to production with a broken ServiceB.

My lesson learned: Despite the ease of adding a check to ProductA’s CI, the check needs to be coupled with ProductB. 
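
If ProductB ever gets automation, the check I have in mind could be as small as a contract/smoke test living in ProductB’s repo and running in ProductB’s CI.  A sketch, assuming the requests library and an invented endpoint:

    import requests

    SERVICEB_URL = "https://test.example.com/serviceb/health"  # invented

    def test_serviceb_honors_its_contract():
        # Runs in ProductB's CI, so ProductB cannot deploy with
        # ServiceB broken for consumers like ProductA.
        resp = requests.get(SERVICEB_URL, timeout=10)
        assert resp.status_code == 200
        assert "version" in resp.json()  # invented contract field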

In my case, until we invest in test automation for ProductB, said check(s) for ServiceB will be checks performed by humans.


