Look at your calendar (or that of another tester).  How many meetings exist?

My new company is crazy about meetings.  Perhaps it’s the vast numbers of project managers, product owners, and separate teams along the deployment path.  It’s a wonder programmers/testers have time to finish anything.

Skipping meetings works, but it’s an awkward way to increase test time.  What if you could reduce meetings, or at least meeting invites?  Try this: express the cost of attending a meeting in units of lost bugs.  If you find, on average, about 1 bug per hour of testing, you might say:

“Sure, I can attend your meeting, but it will cost us 1 lost bug.”

“This week’s meetings cost us 9 lost bugs.”

Obviously, some meetings (e.g., design, user story review, bug triage) improve your bug finding, so be selective about which meetings you count as a lost-bug cost.

Last week I started testing an update to a complex legacy process.  At first, my head was spinning (it still kind of is).  There are so many inputs and test scenarios... so much I don’t understand.  Where to begin?

I think doing something half-baked now is better than doing something fully-baked later.  If we start planning a rigorous test based on too many assumptions, we may not understand what we’re observing.

In my case, I started with the easiest tests I could think of:

  • Can I trigger the process-under-test? 
  • Can I tell when the process-under-test completes?
  • Can I access any internal error/success logging for said process?
  • If I repeat the process-under-test multiple times, are the results consistent?
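
To make the first and last of those questions concrete, here is a minimal sketch of a consistency check, assuming the process-under-test can be triggered from a command line; the command name, arguments, and timeout are hypothetical.

    import subprocess

    def run_process_under_test():
        # Hypothetical trigger for the process-under-test; swap in the real
        # command, service call, or job scheduler.
        result = subprocess.run(
            ["run_legacy_process", "--input", "baseline.dat"],
            capture_output=True, text=True, timeout=600)
        return result.returncode, result.stdout

    def test_repeated_runs_are_consistent():
        # Run the process several times and confirm the results match.
        first = run_process_under_test()
        for _ in range(2):
            assert run_process_under_test() == first, "results varied between runs"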

If there were a spectrum spanning learning about the something-under-test without manipulating it, on one end, and learning by manipulating it, on the other, it might look like this: [spectrum image]

My tests started on the left side of the spectrum and worked toward the right.  Now that I can get consistent results, let me see if I can manipulate the process-under-test and predict its results:

  • If I pass ValueA to InputA, do the results match my expectations?
  • If I remove ValueA from InputA, do the results return as before?
  • If I pass ValueB to InputA, do the results match my expectations?

As long as my model of the process-under-test matches my above observations, I can start expanding complexity:

  • If I pass ValueA and ValueB to InputA and ValueC and ValueD to InputB, do the results match my expectations?
  • etc.
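
Here is a minimal sketch of how these early checks might be expressed, assuming the process-under-test can be wrapped in a Python function; InputA/ValueA and the expected results are just the placeholders from the list above.

    def run_process(input_a=None, input_b=None):
        # Stand-in for triggering the real process-under-test and collecting
        # its result; replace with the actual service call or job trigger.
        raise NotImplementedError("wire this up to the process-under-test")

    def check(expected, **inputs):
        actual = run_process(**inputs)
        assert actual == expected, f"{inputs} returned {actual!r}, expected {expected!r}"

    # The same simple manipulations as above, before expanding complexity:
    # check(expected="ResultA", input_a="ValueA")
    # check(expected="BaselineResult", input_a=None)
    # check(expected="ResultB", input_a="ValueB")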

Now I have something valuable to discuss with the programmer or product owner: “I’ve done the above tests.  What else can you think of?”  It’s much easier to have this conversation when you’re not completely green, when you can show some effort.  It’s easier for the programmer or product owner to help when you lead them into the zone.

The worst is over.  The rest is easy.  Now you can really start testing!

Sometimes you just have to do something to get going.  Even if it’s half-baked.

Now that my engineering team is automating beyond the unit test level, the question of which tests to automate comes up daily.  I wish there were an easy answer.

If we make a distinction between checking and testing, no “tests” should be automated.  The question instead becomes, which “checks” should be automated?  Let’s go with that.

I’ll tell you what I think below, ranking the more important items at the top:

Consider automating checks when they…

  1. can only feasibly be checked by a machine (e.g., complex math, millions of comparisons, diffs, load, performance, precision).  These are checks machines do better than humans.
  2. are important.  Do we need to execute them prior to each build, deployment, or release?  This list of checks will grow over time.  The cost of not automating is less time for the “testing” that helps us learn new information.
  3. can be automated below the presentation layer.  Automating checks at the API layer is considerably less expensive than at the UI layer.  The automated checks will provide faster feedback and be less brittle.
  4. will be repeated frequently.  A simplified decision: is the time it takes a human to program, maintain, execute, and interpret the automated check’s results over time (e.g., 2 years) less than the time it takes a human to perform said check over the same time span?  This overlaps with #2.  (For a rough illustration, see the calculation after this list.)
  5. check something that is at risk of failing.  Do we frequently break things when we change this module?
  6. are feasible to automate.  We can’t sufficiently automate tests for things like usability and charisma.
  7. are requested by your project team.  Have a discussion with your project team about which checks to automate.  Weigh your own targets against what your team thinks should be automated.
  8. can be automated using existing frameworks and patterns.  It is much cheaper to automate the types of checks you’ve already successfully automated.
  9. are checks.  “Checks” are mundane for humans to perform.  “Tests” are not because they are different each time. 
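
To make #4 concrete, here is a rough break-even calculation; every number below is made up purely for illustration.

    # Hypothetical costs, in hours, over the 2-year horizon mentioned above.
    hours_to_program     = 16     # initial effort to automate the check
    hours_to_maintain    = 0.5    # per month
    hours_to_interpret   = 0.1    # per run, reading and triaging results
    runs_per_month       = 20     # e.g., once per build
    manual_hours_per_run = 0.75   # time for a human to perform the same check

    months = 24
    automated = hours_to_program + months * (hours_to_maintain + runs_per_month * hours_to_interpret)
    manual    = months * runs_per_month * manual_hours_per_run

    print(f"automated: {automated:.0f}h, manual: {manual:.0f}h")   # 76h vs. 360h
    # If the automated total is lower over the same span, the check is a
    # candidate for automation (the other items in this list still apply).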

 

What am I missing?

Most of the testers at my new company do not have programming skills (or at least are not putting them to use).  This is not necessarily a bad thing.  But in our case, many of the products-under-test are perfect candidates for automation (e.g., they are API rich).

We are going through an Agile transformation.  Discussions about tying programmatic checks to “Done” criteria are occurring and most testers are now interested in getting involved with automation.  But how?

I think this is a common challenge.

Here are some ways I have had success getting manual testers involved in automation.  I’ll start with the easiest and work my way down to those requiring more ambition.  A tester wanting to get involved in automation can:

  1. Do unit test reviews with their programmers.  Ask the programmers to walk you through the unit tests.  If you get lost, ask questions like “What would cause this unit test to fail?” or “Can you explain the purpose of this test at a domain level?”
  2. Work with automators to inform the checks they automate.  If you have people focused on writing automated checks, help them determine what automation might help you.  Which checks do you often repeat?  Which are boring?
  3. Design/request a test utility that mocks some crucial interface or makes the invisible visible.  Bounce ideas off your programmers and see if you can design test tools to speed things up.  This is not traditional automation.  But it is automation by some definitions.
  4. Use data-driven automation to author/maintain important checks via a spreadsheet.  This is a brilliant approach because it lets the test automator focus on what they love: designing clever automation.  It lets the tester focus on what they love: designing clever inputs.  Show the tester where the spreadsheet is and how to kick off the automation.  (A sketch of this approach follows this list.)
  5. Copy and paste an automated check pattern from an IDE, rename the check, and change the inputs and expected results to create new checks.  This takes little to no coding skill.  This is a potential end goal.  If a manual tester gets to this point, buy them a beer and don’t push them further.  This leads to a great deal of value, and going further can get awkward.
  6. Follow an automated check pattern but extend the framework.  Spend some time outside of work learning to code. 
  7. Stand up an automation framework, design automated checks.  Support an Agile team by programming all necessary automated checks.  Spend extensive personal time learning to code.  Read books, write personal programs, take online courses, find a mentor. 
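
For #4 above, here is a minimal sketch of what spreadsheet-driven checks could look like, assuming the spreadsheet is exported to CSV with “input” and “expected” columns; the column names and the call into the system-under-test are hypothetical, not the exact framework described.

    import csv

    def call_system_under_test(input_value):
        # Stand-in for the real call (API request, service call, etc.).
        raise NotImplementedError

    def run_data_driven_checks(path="checks.csv"):
        # Each spreadsheet row is one check: the tester maintains the rows,
        # the automator maintains this code.
        failures = []
        with open(path, newline="") as f:
            for row in csv.DictReader(f):            # columns: input, expected
                actual = call_system_under_test(row["input"])
                if str(actual) != row["expected"]:
                    failures.append((row["input"], row["expected"], actual))
        return failures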

Egads!  It’s been several months since my last post.  Where have I been? 

I’ve transitioned to a new company and an exciting new role as Principal Test Architect.  After spending months trying to understand how my new company operates, I am beginning to get a handle on how we might improve testing.

In addition to my work transition, my whole family and I have just synchronously suffered through this year’s nasty flu, and then another round of stomach flu shortly thereafter.  The joys of daycare…

And finally, now that my son, Haakon, has arrived, I’ve been adjusting to my new life with two young children.  1 + 1 <> 2.  

It has been a rough winter.

But alas, my brain is once again telling me, “Oh, that would make a nice blog post”.  So let’s get this thing going again!

If you read part 1, you may be wondering how my automated check performed…

The programmer deployed the seeded bug and I’m happy to report that my automated check found it in 28 seconds!

Afterwards, he seeded two additional bugs.  The automated check found those as well.  I had to temporarily modify the automated check code to ignore the first bug in order to find the second.  This is because the check stops checking as soon as it finds one problem.  I could tweak the code to collect problems and keep checking but I prefer the current design.

Here is the high-level, generic design of said check:

Build the golden masters:

  • Make scalable checks - Before test execution, build multiple golden masters to match your coverage ambitions.  This is a one-time-only task (until the golden masters need to be updated to reflect expected changes).
  • Bypass the GUI when possible - Each of my golden masters consists of the response XML from a web service call, saved to a file.  Each XML response has over half a million nodes, which are mapped to a complex GUI.  My automated check bypasses the GUI.  GUI automation could never have found the above seeded bug in 28 seconds; my product-under-test takes about 1.5 minutes just to log in and navigate to the module being tested, and waiting for the GUI to refresh after the countless service calls made by the automated check would have taken hours.
  • Golden masters must be golden! Use a known-good source for the service call.  I used production because my downstream environments are populated with data restored from production.  You could use a test environment as long as it was in a known-good state.
  • Use static data - Build the golden masters using service request parameters that return a static response.  In other words, when I call said service in the future, I want the same data returned.  I used service request parameters to pull historical data because I expect it to be the same data next week, month, year, etc.
  • Automate golden master building - I wrote a utility method to build my golden masters.  This is basically re-used code from the test method, which builds the new objects to compare to the golden masters.  (A sketch of such a utility follows this list.)
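
Here is a minimal sketch of what such a builder utility might look like, assuming an HTTP service that returns XML; the endpoint URL, request parameters, and file layout are all hypothetical.

    import os
    import requests   # assumes the requests library; any HTTP client would do

    # Hypothetical endpoint and request parameters chosen to return static,
    # historical data from a known-good source.
    SERVICE_URL = "https://example.com/reportService"
    SCENARIOS = {
        "scenario_01": {"accountId": "1001", "asOfDate": "2012-12-31"},
        # ...one entry per golden master...
    }

    def build_golden_masters(out_dir="golden_masters"):
        # One-time task: call the service for each scenario and archive the
        # raw XML response as that scenario's golden master.
        os.makedirs(out_dir, exist_ok=True)
        for name, params in SCENARIOS.items():
            response = requests.get(SERVICE_URL, params=params, timeout=300)
            response.raise_for_status()
            with open(os.path.join(out_dir, name + ".xml"), "w") as f:
                f.write(response.text)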

Do some testing:

  • Compare - This is the test method.  It calls the code-under-test using the same service request parameters used to build the golden masters.  The XML service response from the code-under-test is then compared to the archived golden master, line by line.  (A sketch of the comparison follows this list.)
  • Ignore expected changes - In my case there are some XML nodes the check ignores.  These are nodes with values I expect to differ.  For example, the CreatedDate node of the service response object will always be different from that of the golden master.
  • Report - If any non-ignored XML line is different, it’s probably a bug: fail the automated check, report the differences with line number and file references (see Write Files below), and investigate.
  • Write Files - For my goals, I have 11 different golden masters (to compare with 11 distinct service response objects).  The automated check loops through all 11 golden master scenarios, writing each service response XML to a file.  The automated check doesn’t use the files; they are there for me.  This gives me the option to manually compare suspect new files to golden masters with a diff tool, an effective way of investigating bugs and spotting patterns.
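
And here is a minimal sketch of the comparison side, reusing SERVICE_URL, SCENARIOS, and the imports from the builder sketch above; the ignore list and file locations are again hypothetical.

    IGNORED_NODES = ("<CreatedDate>",)   # node values expected to differ

    def check_against_golden_master(name, params,
                                    gm_dir="golden_masters", out_dir="new_responses"):
        # Call the code-under-test with the same request parameters used to
        # build the golden master, archive the new XML for manual diffing,
        # then compare it to the golden master line by line.
        response = requests.get(SERVICE_URL, params=params, timeout=300)
        response.raise_for_status()
        new_lines = response.text.splitlines()

        os.makedirs(out_dir, exist_ok=True)
        new_path = os.path.join(out_dir, name + ".xml")
        with open(new_path, "w") as f:
            f.write(response.text)

        with open(os.path.join(gm_dir, name + ".xml")) as f:
            golden_lines = f.read().splitlines()

        assert len(golden_lines) == len(new_lines), f"{name}: line counts differ ({new_path})"
        for i, (golden, new) in enumerate(zip(golden_lines, new_lines), start=1):
            if any(node in golden for node in IGNORED_NODES):
                continue                             # expected change, skip it
            # Stop at the first difference and point at the file to diff.
            assert golden == new, f"{name}: line {i} differs, see {new_path}"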

I’m feeling quite cocky at the moment.  So cocky that I just asked my lead programmers to secretly insert a bug into the most complex area of the system under test.

Having just finished another epic automated check based on the Golden Master approach I discussed earlier, and seeing that most of the team is on pre-Thanksgiving vacation, this is the perfect time to “seed a bug”.  Theoretically, this new automated check should help put our largest source of regression bugs to rest and I am going to test it.

The programmer says he will hide the needle in the haystack by tomorrow.

I’m waiting…


