I noticed one of our development teams was creating new Jira Issues for each bug found during the development cycle.  IMO, this is an antipattern. 

These are the problems it can create (at least the ones I can think of):

  • New Jira Issues (bug reports) create unnecessary admin work for the whole team.
    • We see these bug reports cluttering an Agile board.
    • They may have to get prioritized.
    • We have to track them, they have to get assigned, change statuses, get linked, maybe even estimated.
    • They take time to create. 
    • They may cause us to communicate via text rather than conversation.
  • Bug reports mislead lazy people into tracking progress, quality, or team performance by counting bugs.
  • It leads to confusion about how to manage the User Story.  If the User Story is done except for the open bug reports, can we mark the User Story “Done”?  Or do we need to keep the User Story open until the logged bugs get fixed…”Why is this User Story still in progress?  Oh yeah, it’s because of those linked logged bugs”.
  • It’s an indication our acceptance criteria are inadequate.  That is to say, if the acceptance criteria in the User Story are not met, we wouldn’t have to log a bug report.  We would merely NOT mark the Story “Done”.
  • Bug reports may give us an excuse not to fix all bugs…”let’s fix it next Sprint”, “let’s put it on the Product Backlog and fix it some other day”…which means never.
  • It’s probably a sign the team is breaking development into a coding phase and a testing phase.  Instead, we really want the testing and programming to take place in one phase...development. 
  • It probably means the programmer is considering their code “done”, throwing it over the wall to a tester, and moving on to a different Story.  This misleads us on progress.  Untested is as good as nothing.

If the bug is an escape (i.e., it occurs in production), it’s probably a good idea to log it.

On a production support kanban development team, a process dilemma came up.  In the case where something needs to be tested by a tester:

  1. Should the tester perform the testing first in a development environment, then in a production-like environment after the thing-under-test has been packaged and deployed?  Note: in this case, the package/deploy process is handled semi-manually by two separate teams, so there is a delay.
  2. Or, should the tester perform all the testing in a production-like environment after the thing-under-test has been packaged and deployed?

Advantage of scenario 1 above:

  • Dev environment testing shortens the feedback loop.  This would be deep testing.  If problems surface they would be quicker and less risky to fix.  The post-package testing would be shallow testing, answering questions like: did the stuff I deep tested get deployed properly?

Advantage of scenario 2 above:

  • Knock out the testing in one environment.  The deep testing will indirectly cover the package/deployment testing.

On the surface, scenario 2 looks better because it only requires one testing chunk, NOT two chunks separated by a lengthy gap.  But what happens if a problem surfaces in scenario 2?  Now we must go through two lengthy gaps.  How about a third problem?  Three gaps.  And so on.

My conclusion: Scenario 1 is better unless this type of thing-under-test is easy and has a history of zero problems.

A tester asked me an interesting question this morning:

“How can I find old test documentation for a completed feature so I can re-use those tests on a similar new feature?”

The answer is easy.  But that’s not what this post is about. 

It seems to me, a skilled tester can usually come up with better tests…today, from scratch.  Test documentation gets stale fast.  These are some reasons I can think of:

  • A skilled tester knows more about testing today than they did last month.
  • A skilled tester knows more about the product-under-test today than they did last month.
  • The product-under-test is different today than it was last month.  It might have new code, refactored code, more users, more data, a different reputation, a different platform, a different time of the year, etc.
  • The available time to perform tests might be different.
  • The test environment might be different.
  • The product coder might be different.
  • The stakeholders might be different.
  • The automated regression check suite may be different.

If we agree with the above, we’ll probably get better testing when we tailor it to today’s context.  It’s also way more fun to design new tests and probably quicker (unless we are talking about automation, which I am not).

So I think digging up old test documentation as the basis for determining which tests to run today might be the wrong reason to dig up old test documentation.  A good reason is to answer questions about the testing that was performed last month.

While reading Paul Bloom’s The Baby In The Well article in The New Yorker, I noted the Willie Horton effect’s parallel to software testing:

In 1987, Willie Horton, a convicted murderer who had been released on furlough from the Northeastern Correctional Center, in Massachusetts, raped a woman after beating and tying up her fiancé. The furlough program came to be seen as a humiliating mistake on the part of Governor Michael Dukakis, and was used against him by his opponents during his run for President, the following year. Yet the program may have reduced the likelihood of such incidents. In fact, a 1987 report found that the recidivism rate in Massachusetts dropped in the eleven years after the program was introduced, and that convicts who were furloughed before being released were less likely to go on to commit a crime than those who were not. The trouble is that you can’t point to individuals who weren’t raped, assaulted, or killed as a result of the program, just as you can’t point to a specific person whose life was spared because of vaccination.

How well was a given application tested?  Users don’t know what problems the testers saved them from.  The quality may be celebrated to some extent, but one production bug will get all the press.

If you find an escape (i.e., a bug in something marked “Done”), you may want to develop an automated check for it.  In a meeting today, there was a discussion about when the automated check should be developed.  Someone asked, “Should we put a task on the product backlog?”  IMO:
The automated check should be developed when the bug fix is developed.  It should be part of the “Done” criteria for the bug.
Apply the above heuristically.  If your bug gets deferred to a future Sprint, defer the automated check to that future Sprint.  If your bug gets fixed in the current Sprint, develop your automated check in the current Sprint.
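
A minimal sketch of what that might look like (a hypothetical bug and a pytest-style check, not from an actual project):

```python
# Hypothetical example: bug #123 was an escape where a discount over 100%
# produced a negative order total.  The automated check is written alongside
# the fix and becomes part of the bug's "Done" criteria.

def apply_discount(total, percent):
    """The fixed production code: discounts are capped at 100%."""
    return total * (1 - min(percent, 100) / 100)

def test_bug_123_discount_over_100_percent_does_not_go_negative():
    # Regression check for the escape: before the fix, 120% yielded -20.0.
    assert apply_discount(100.0, 120) == 0.0
```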

If a tree falls in the forest and nobody hears it, does it make a sound?  If you have automated checks and nobody knows it, does it make an impact?

To me, the value of any given suite of automated checks depends on its usage...


[Image: spectrum]

FeatureA will be ready to test soon.  You may want to think about how you will test FeatureA.  Let’s call this activity “Test Planning”.  In Test Planning, you are not actually interacting with the product-under-test.  You are thinking about how you might do it.  Your Test Planning might include, but is not limited to, the following:

  • Make a list of test ideas you can think of.  A Test Idea is the smallest amount of information that can capture the essence of a test.
  • Grok FeatureA:  Analyze the requirements document.  Talk to available people.
  • Interact with the product-under-test before it includes FeatureA.
  • Prepare the test environment data and configurations you will use to test.
  • Note any specific test data you will use.
  • Determine what testing you will need help with (e.g., testing someone else should do).
  • Determine what not to test.
  • Share your test plan with anyone who might care.  At least share the test ideas (first bullet) with the product programmers while they code.
  • If using automation, design the check(s).  Stub them out.
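
For that last bullet, a stub can be as simple as this (hypothetical FeatureA, assuming pytest):

```python
import pytest

# Hypothetical stubs for FeatureA, written during Test Planning before the
# feature exists.  Each test idea becomes an empty, skipped check, so the
# suite documents intent without failing the build.

@pytest.mark.skip(reason="FeatureA not built yet - stubbed during Test Planning")
def test_featurea_accepts_valid_input():
    ...

@pytest.mark.skip(reason="FeatureA not built yet - stubbed during Test Planning")
def test_featurea_rejects_duplicate_submission():
    ...
```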

All the bulleted items above are Test Planning activities.  About four of them resulted in something you wrote down.  If you wrote them in one place, you have an artifact.  The artifact can be thought of as a Test Plan.  As you begin testing (interacting with the product-under-test), I think you can use the Test Plan one of two ways:

  1. Morph it into “Test Notes” (or “Test Results”).
  2. Refer to it then throw it away.

Either way, we don’t need the Test Plan after the testing.  Just like we don’t need those other above Test Planning activities after the testing.  Plans are more useful before the thing they plan.

Execution is more valuable than a plan.  A goal of a skilled tester is to report on what was learned during testing.  The Test Notes are an excellent way to do this.  Attach the Test Notes to your User Story.  Test Planning is throwaway.

My data warehouse team is adopting automated checking.  Along the way, we are discovering some doubters.  Doubters are a good problem.  They challenge us to make sure automation is appropriate.  In an upcoming meeting, we will try to answer the question in this blog post title.

My short answer:  Yes.

My long answer:  See below.

The following are data warehouse (or database) specific:

  • More suited to machines – Machines are better than humans at examining lots of data quickly. 
  • Not mentally stimulating for humans – (this is the other side of the above reason) Manual DB testers are hard to find.  Testers tend to like front-ends, so they gravitate toward app dev teams.  DB testers need technical skills (e.g., DB dev skills), and people who have them prefer to do DB dev work.
  • Straightforward, repeatable automation patterns – For each new dimension table, we normally want the same types of automated checks.  This makes automated checks easier to design and faster to code.  The entire DW automation suite contains fewer design patterns than the average application.
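
A hypothetical sketch of that repeatable pattern (table and column names invented, using pytest and an in-memory SQLite stand-in for the warehouse):

```python
import sqlite3
import pytest

# Invented dimension tables: (table, surrogate key, business key).
# The same two checks apply to every new dimension table we add.
DIM_TABLES = [
    ("dim_customer", "customer_key", "customer_id"),
    ("dim_product", "product_key", "product_id"),
]

@pytest.fixture
def conn():
    # In-memory SQLite stands in for a real warehouse connection.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE dim_customer (customer_key INTEGER, customer_id TEXT)")
    db.execute("CREATE TABLE dim_product (product_key INTEGER, product_id TEXT)")
    yield db
    db.close()

@pytest.mark.parametrize("table,surrogate_key,business_key", DIM_TABLES)
def test_surrogate_keys_are_unique(conn, table, surrogate_key, business_key):
    dupes = conn.execute(
        f"SELECT {surrogate_key} FROM {table} "
        f"GROUP BY {surrogate_key} HAVING COUNT(*) > 1"
    ).fetchall()
    assert dupes == []

@pytest.mark.parametrize("table,surrogate_key,business_key", DIM_TABLES)
def test_business_keys_are_not_null(conn, table, surrogate_key, business_key):
    nulls = conn.execute(
        f"SELECT COUNT(*) FROM {table} WHERE {business_key} IS NULL"
    ).fetchone()[0]
    assert nulls == 0
```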

The following are not limited to data warehouse (or database):

  • Time to market – (Automated checks) help you go faster.  Randy Shoup says it well at 9:55 in this talk.  Writing quick and dirty software leads to technical debt, which leads to no time to do it right (“technical debt vicious cycle”).  Writing automated checks as you write software leads to a solid foundation, which leads to confidence, which leads to faster and better (“virtuous cycle of quality”)…Randy’s words.
  • Regression checking - In general, machines are better than humans at indicating something changed. 
  • Get the most from your human testing - Free the humans to focus on deep testing of new features, not shallow testing of old features.
  • In case the business ever changes their mind - If you ever have to revisit code to make changes or refactor, automated checks will help you do it quicker.  If you think the business will never change their mind, then maybe automation is not as important.
  • Automated checks help document current functionality.
  • Easier to fix problems - Automated checks triggered in a Continuous Integration find problems right after code is checked in.  These problems are usually easier to fix when fresh in a developer’s mind.

I read A Context-Driven Approach to Automation in Testing at the gym this morning.  I expected the authors to hate on automation but they didn’t.  Bravo.

They contrasted the (popular) perception that automation is cheap and easy because you don’t have to pay the computer, with the (not so popular) perception that automation requires a skilled human to design, code, maintain, and interpret the results of the automation.  That human also wants a paycheck.

Despite the fact that most Automation Engineers are writing superficial automation, the industry still worships automation skills, and for good reasons.  This is intimidating for testers who don’t code, especially when they find themselves working alongside automation engineers.
Here are some things (that I can think of) testers-who-don’t-code can do to help boost their value:

  • Find more bugs – This is one of the most valued services a tester can provide.  Scour a software quality characteristics list like this to expand your test coverage and be more aggressive with your testing.  You can probably cover way more than automation engineers in a shorter amount of time.  Humans are much better at finding bugs than machines.  Finding bugs is not a realistic goal of automation.
  • Faster Feedback – Everybody wants faster feedback.  Humans can deliver faster feedback than automation engineers on new testing.  Machines are faster on old testing (e.g., regression testing).  Report back on what works and doesn’t while the automation engineer is still writing new test code. 
  • Give better test reports – Nobody cares about test results.  Find ways to sneak them in and make them easier to digest.  Shove them into your daily stand-up report (e.g., “based on what I tested yesterday, I learned that these things appear to be working, great job team!”).  Give verbal test summaries to your programmers after each and every test session with their code.  Give impromptu test summaries to your Product Owner.
  • Sit with your users – See how they use your product.  Learn what is important to them.
  • Volunteer for unwanted tasks – “I’ll stay late tonight to test the patch”, “I’ll do it this weekend”.  You have a personal life though.  Take back the time.  Take Monday off.
  • Work for your programmers -  Ask what they are concerned about. Ask what they would like you to test.
  • What if? – Show up at design meetings and have a louder presence at Sprint Planning meeting.  Blast the team with relentless “what if” scenarios.  Use your domain expertise and user knowledge to conceive of conflicts.  Remove the explicit assumptions one at a time and challenge the team, even at the risk of being ridiculous (e.g., what if the web server goes down?  what if their phone battery dies?).
  • Do more security testing – Security testing, for the most part, cannot be automated.  Develop expertise in this area.
  • Bring new ideas – Read testing blogs and books. Attend conferences. Tweak your processes.  Pilot new ideas. Don’t be status quo.
  • Consider Integration – Talk to the people who build the products that integrate with your product.  Learn how to operate their product and perform integration tests that are otherwise being automated via mocks. You just can’t beat the real thing.
  • Help your automation engineer – Tell them what you think needs to be automated.  Don’t be narrow-minded in determining what to automate.  Ask them which automation they are struggling to write or maintain, then offer to maintain it yourself, with manual testing.
  • Get visible – Ring a bell when you find a bug.  Give out candy when you don’t find a bug.  Wear shirts with testing slogans, etc.
  • Help code automation – You’re not a coder, so don’t go building frameworks, designing automation patterns, or even independently designing new automated checks.  Ask if there are straightforward automation patterns you can reuse with new scenarios.  Ask for levels of abstraction that hide the complicated methods and let you focus on business inputs and observations (a sketch follows below).  Here are other ways to get involved.
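
A minimal sketch of the kind of abstraction layer to ask for (all names invented, assuming pytest): the framework team hides the plumbing inside one business-level function, and the tester supplies only business inputs and expectations.

```python
import pytest

# place_order() is the hypothetical abstraction the framework team provides;
# in a real suite it would drive the UI or API.  Here it's a stand-in so the
# example runs.
def place_order(item, quantity, coupon=None):
    price = {"widget": 10.0}[item] * quantity
    if coupon == "SAVE10":
        price *= 0.9
    return price

def test_coupon_discounts_order_total():
    # A tester-who-doesn't-code can add scenarios like this without ever
    # touching the plumbing hidden inside place_order().
    assert place_order("widget", 2, coupon="SAVE10") == pytest.approx(18.0)
```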
What am I missing?

I had a second scenario this week that gave me pause before arriving at the above practice.

ProductA is developed and maintained by ScrumTeamA, who writes automated checks for all User Stories and runs the checks in a CI.  ProductB is developed and maintained by ScrumTeamB.

ScrumTeamB developed UserStoryB, which required new code for both ProductA and ProductB.  ScrumTeamB merged the new product code into ProductA…but did NOT merge new test code to ProductA.  Now we have a problem.  Do you see it?

When ProductA deploys, how can we be sure the dependencies for UserStoryB are included?  All new product code for ProductA should probably be accompanied by new test code, regardless of which Scrum Team makes the change.

The same practice might be suggested in environments without automation.  In other words, ScrumTeamB should probably give manual test scripts, lists, test fragments, or do knowledge transfer such that manual testers responsible for ProductA (i.e., ScrumTeamA) can perform the testing for UserStoryB prior to ProductA deployments.

…It seems obvious until you deal with integration tests and products with no automation.  I got tripped up by this example:

ProductA calls ProductB’s service, ServiceB.  Both products are owned by the same dev shop.  ServiceB keeps breaking in production, disrupting ProductA. ProductA has automated checks.  ProductB does NOT have automated checks.  Automated checks for ServiceB might help. Where would the automated checks for ServiceB live?

It’s tempting to say ProductA, because ProductA has an automation framework with its automated checks running in a Continuous Integration on merge-to-dev.  It would be much quicker to add said automated checks to ProductA than to ProductB.  However, said checks wouldn’t help because they would run in ProductA’s CI.  ProductB could still deploy to production with a broken ServiceB.

My lesson learned: Despite the ease of adding a check to ProductA’s CI, the check needs to be coupled with ProductB. 

In my case, until we invest in test automation for ProductB, said check(s) for ServiceB will be checks performed by humans.
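
When we do invest, a check coupled with ProductB might look something like this minimal sketch (hypothetical URL, assuming pytest and Python’s standard library):

```python
import urllib.request

# Hypothetical URL; the point is that this check lives in ProductB's
# repository and runs in ProductB's CI, where it can block a ProductB
# deploy when ServiceB is broken.  The same check in ProductA's CI could not.
SERVICE_B_HEALTH_URL = "http://productb.internal/serviceb/health"

def test_serviceb_responds():
    with urllib.request.urlopen(SERVICE_B_HEALTH_URL, timeout=5) as resp:
        assert resp.status == 200
```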

While helping some testers who are new to automation, I found myself in the unexpected position of trying to sell them on the idea that all test methods should be mutually exclusive.  Meaning, no automated check should depend on any other automated check…they can run in any order…you can run them all, or just one.
If I could take one test automation rule to my grave, this would be it.  I had forgotten that it was optional.
I know, I know, it seems so tempting to break this rule at first: TestA puts the product-under-test in the perfect state for TestB.  Please don’t fall into this trap.
Here are some reasons (I can think of) to keep your tests mutually exclusive:

  • The Domino Effect – If TestB depends on TestA, and TestA fails, there is a good chance TestB will fail too, but not because the functionality TestB checks is broken.  And so on.
  • Making a Check Mix – Once you have a good number of automated checks, you’ll want the freedom to break them into various suites.  You may want a smoke test suite, a regression test suite, a root check for a performance test, or other test missions that require only a handful of checks...dependencies will not allow this.
  • Authoring – While coding an automated check (a new check or updating a check), you will want to execute that check over and over, without having to execute the whole suite.
  • Easily Readable – When you review your automation coverage with your development team or stakeholders, you’ll want readable test methods.  That usually means each test method’s setup is clear.  Everything needed to understand that test method is contained within the scope of the test method.
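
A minimal sketch of what this looks like in practice (hypothetical example, assuming pytest): each check builds its own state through a fixture instead of inheriting state from a previous check.

```python
import pytest

@pytest.fixture
def logged_in_user():
    # Every check that needs a user builds its own; nothing leaks between checks.
    return {"name": "tester", "cart": []}

def test_add_item_to_cart(logged_in_user):
    logged_in_user["cart"].append("widget")
    assert logged_in_user["cart"] == ["widget"]

def test_new_session_starts_with_empty_cart(logged_in_user):
    # Deliberately does NOT assume test_add_item_to_cart ran first;
    # run these in any order, together or alone, and they still pass.
    assert logged_in_user["cart"] == []
```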


