I had a second scenario this week that gave me pause before leading me to the above practice.

ProductA is developed and maintained by ScrumTeamA, who writes automated checks for all User Stories and runs the checks in a CI.  ProductB is developed and maintained by ScrumTeamB.

ScrumTeamB developed UserStoryB, which required new code for both ProductA and ProductB.  ScrumTeamB merged the new product code into ProductA…but did NOT merge new test code to ProductA.  Now we have a problem.  Do you see it?

When ProductA deploys, how can we be sure the dependencies for UserStoryB are included?  All new product code for ProductA should probably be accompanied by new test code, regardless of which Scrum Team makes the change.
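To make that concrete, here is a minimal pytest-style sketch of the kind of check ScrumTeamB could contribute to ProductA's suite alongside their product-code merge. All names here are invented for illustration; the only point is that the check lives in ProductA's repo and runs in ProductA's CI.

```python
# Names invented for illustration. The check belongs in ProductA's repo
# and runs in ProductA's CI, regardless of which Scrum Team wrote it.

def product_a_has_user_story_b_hook() -> bool:
    """Stand-in for the new ProductA code ScrumTeamB merged for UserStoryB."""
    return True  # the real check would exercise the new code path


def test_user_story_b_dependency_in_product_a():
    # Fails ProductA's build if UserStoryB's dependency is missing,
    # so ProductA can't deploy without it.
    assert product_a_has_user_story_b_hook()
```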

The same practice might be suggested in environments without automation.  In other words, ScrumTeamB should probably provide manual test scripts, checklists, test fragments, or knowledge transfer, so that the manual testers responsible for ProductA (i.e., ScrumTeamA) can perform the testing for UserStoryB prior to ProductA deployments.

…It seems obvious until you deal with integration tests and products with no automation.  I got tripped up by this example:

ProductA calls ProductB’s service, ServiceB.  Both products are owned by the same dev shop.  ServiceB keeps breaking in production, disrupting ProductA. ProductA has automated checks.  ProductB does NOT have automated checks.  Automated checks for ServiceB might help. Where would the automated checks for ServiceB live?

It’s tempting to say ProductA, because ProductA has an automation framework with its automated checks running in CI on merge-to-dev.  It would be much quicker to add said automated checks to ProductA than to ProductB.  However, said checks wouldn’t help because they would run in ProductA’s CI.  ProductB could still deploy to production with a broken ServiceB.

My lesson learned: Despite the ease of adding a check to ProductA’s CI, the check needs to be coupled with ProductB. 

In my case, until we invest in test automation for ProductB, said check(s) for ServiceB will be checks performed by humans.
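When that investment happens, a first automated check could be as small as the sketch below. The endpoint URL and response shape are pure assumptions; the design choice that matters is where it runs: in ProductB's pipeline, gating ProductB's deploys.

```python
import requests

# Hypothetical endpoint; the real one would be whatever ServiceB exposes.
SERVICE_B_URL = "https://productb.example.com/service-b/health"


def test_service_b_is_up():
    # Lives in ProductB's repo and runs in ProductB's pipeline, so a broken
    # ServiceB blocks ProductB's deploy instead of surprising ProductA.
    response = requests.get(SERVICE_B_URL, timeout=10)
    assert response.status_code == 200
```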

While helping some testers new to automation, I found myself in the unexpected position of trying to sell them on the idea that all test methods should be mutually exclusive.  Meaning, no automated check should depend on any other automated check…automated checks can run in any order…you can run them all, in any order, or you can run just one.
If I could take one test automation rule to my grave, this would be it.  I had forgotten that anyone considered it optional.
I know, I know, it seems so tempting to break this rule at first:  TestA puts the product-under-test in the perfect state for TestB.  Please don’t fall into this trap.
Here are some reasons (I can think of) to keep your tests mutually exclusive, with a quick sketch after the list:

  • The Domino Effect – If TestB depends on TestA, and TestA fails, there is a good chance TestB will fail, but not because the functionality TestB checks is broken.  And so on down the chain.
  • Making a Check Mix – Once you have a good number of automated checks, you’ll want the freedom to break them into various suites.  You may want a smoke test suite, a regression test suite, a root check for a performance test, or other test missions that require only a handful of checks...dependencies will not allow this.
  • Authoring – While coding an automated check (a new check or updating a check), you will want to execute that check over and over, without having to execute the whole suite.
  • Easily Readable – When you review your automation coverage with your development team or stakeholders, you’ll want readable test methods.  That usually means each test method’s setup is clear.  Everything needed to understand that test method is contained within the scope of the test method.
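Here is the promised sketch, using pytest fixtures (all names invented). Each test method builds its own state, so you can run the whole suite, any subset, or a single test, in any order:

```python
import pytest


@pytest.fixture
def store_with_product():
    # Every test that needs a product creates its own; no test relies on
    # another test having run first.
    return {"products": [{"name": "widget", "price": 10}]}


def test_add_product(store_with_product):
    assert len(store_with_product["products"]) == 1


def test_discount_product(store_with_product):
    # Does NOT depend on test_add_product; the fixture supplies the state.
    product = store_with_product["products"][0]
    product["price"] *= 0.9
    assert product["price"] == pytest.approx(9)
```

If setup is expensive, make the fixture smarter (or widen its scope); in the long run that's cheaper than chaining tests.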

I’m a written-test-case hater.  That is to say, in general, I think writing detailed test cases is not a good use of tester time.  A better use is interacting with the product-under-test.

But something occurred to me today:

The value of a detailed test case increases if you don’t perform it and decreases when you do perform it.

  • The increased value comes from mentally walking through the test, which forces you to consider as many details as you can without interacting with the product-under-test.  This is more valuable than doing nothing.
  • The decreased value comes from interacting with the product-under-test, which helps you learn more than the test case itself taught you.

What’s the takeaway?  If an important test is too complicated to perform, we should at least consider writing a detailed test case for it.  If you think you can perform the test, you should consider not writing a detailed test case and instead focusing on the performance and taking notes to capture your learning as it occurs.

An important bug escaped into production this week.  The root cause analysis took us to the usual place: “If we had more test time, we would have caught it.”

I’ve been down this road so many times, I’m beginning to see things differently.  No, even with more test time we probably would not have caught it.  Said bug would only have been caught via a rigorous end-to-end test that would arguably have been several times more expensive than this showstopper production bug will be to fix.

Our reasonable end-to-end tests include so many fakes (to simulate production) that their net just isn’t big enough.

However, I suspect a mental end-to-end walkthrough, without fakes, may have caught the bug.  And possibly, attention to the “follow-through” may have been sufficient.  The “follow-through” is a term I first heard Microsoft’s famous tester, Michael Hunter, use.  The “follow-through” is what might happen next, per the end state of some test you just performed.

Let’s unpack that.  Pick any test; say you’re testing a feature that allows a user to add a product to an online store.  You test the hell out of it until you reach a stopping point.  What’s the follow-on test?  The follow-on test is to see what can happen to that product once it has been added to the online store.  You can buy it, you can delete it, you can let it get stale, you can discount it, etc…  I’m thinking nearly every test has several follow-on tests.
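If you wanted to enumerate follow-on tests cheaply, a parametrized check is one way to do it. A rough sketch, with a stand-in for actually driving the product-under-test (all names invented):

```python
import pytest

# Follow-ons mined from the end state of the test you just performed:
# a product sitting in the store.
FOLLOW_ONS = ["buy", "delete", "let_it_go_stale", "discount"]


def perform_follow_on(product, action):
    # Stand-in for driving the product-under-test.
    return {"product": product["name"], "action": action, "ok": True}


@pytest.mark.parametrize("action", FOLLOW_ONS)
def test_follow_on_after_add_product(action):
    product = {"name": "widget", "in_store": True}
    assert perform_follow_on(product, action)["ok"]
```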

Test environments: we never have enough of them.  They never mirror production.  They never work.

My opinions at my current company:

  1. Who should own test environments?  Testers.
  2. Who should build test environments?  NOT testers.  DevOps.
  3. Who should request test environments?  Testers.
  4. Who should populate, backup, or restore the test data in test environments?  Testers.
  5. Who should configure test environments to integrate with other applications in the system?  NOT testers.  DevOps.
  6. Who should deploy code to test environments?  NOT testers. Whoever (or whatever) deploys code to production.
  7. Who should control (e.g., request) code changes to test environments?  Testers.
  8. Who should create and maintain build/deploy automation?  NOT testers.  DevOps.
  9. Who should push the “Go” button to programmatically spin up temporary test environments?  Testers or test automation.

Fiddling with test environments is not testing work, IMO.  It only subtracts from test coverage.
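On item 9, the “Go” button could be as small as one call to whatever provisioning service DevOps builds. A hypothetical sketch; the endpoint and payload are pure assumptions:

```python
import requests

# Hypothetical provisioning endpoint, built and owned by DevOps.
PROVISION_URL = "https://devops.example.com/api/test-environments"


def spin_up_test_environment(product: str) -> str:
    """Request a temporary test environment; return its base URL."""
    response = requests.post(PROVISION_URL, json={"product": product}, timeout=60)
    response.raise_for_status()
    return response.json()["base_url"]


if __name__ == "__main__":
    print(spin_up_test_environment("ProductA"))
```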

I’m a Dr. Neil deGrasse Tyson fanboy.  In this video, he pokes fun at a common view of scientists.  A view that when scientists think they’ve figured something out, they stop investigating and just sit around, proud of themselves.  Neil says, “[scientists] never leave the drawing board”.  They keep investigating and always embrace new evidence, especially when it contradicts current theories.

In other words, a scientist must trade closure for a continued search for truth.  “Done” is not the desired state.

As a tester, I have often been exhausted, eager to make the claim, “it works…my job here is done.”  And even when faced with contradicting evidence, I have found myself brushing it away, or hoping it is merely a user problem.

Skilled testers will relate.  Test work can chew us up and spit us out if we don’t have the right perspective.  Don’t burden yourself by approaching test work as something you are responsible for ending.


