I noticed one of our development teams was creating new Jira Issues for each bug found during the development cycle. IMO, this is an antipattern.
These are the problems it can create, at least the ones I can think of:
- New Jira Issues (bug reports) create unnecessary admin work for the whole team.
- We see these bug reports cluttering an Agile board.
- They may have to get prioritized.
- We have to track them, they have to get assigned, change statuses, get linked, maybe even estimated.
- They take time to create.
- They may cause us to communicate via text rather than conversation.
- Bug reports tempt lazy people into tracking progress, quality, or team performance by counting bugs.
- It leads to confusion about how to manage the User Story. If the User Story is done except for the open bug reports, can we mark the User Story “Done”? Or do we need to keep the User Story open until the logged bugs get fixed…”Why is this User Story still in progress? Oh yeah, it’s because of those linked logged bugs”.
- It’s an indication our acceptance criteria are inadequate. That is to say, if the acceptance criteria in the User Story are not met, we don’t have to log a bug report. We merely do NOT mark the Story “Done”.
- Bug reports may give us an excuse not to fix all bugs…”let’s fix it next Sprint”, “let’s put it on the Product Backlog and fix it some other day”…which means never.
- It’s probably a sign the team is breaking development into a coding phase and a testing phase. Instead, we really want the testing and programming to take place in one phase...development.
- It probably means the programmer is considering their code “done”, throwing it over the wall to a tester, and moving on to a different Story. This misleads us on progress. Untested is as good as nothing.
If the bug is an escape, i.e., it occurs in production, it’s probably a good idea to log it.
On a production support kanban development team, a process dilemma came up for the case where something needs to be tested by a tester:
- Should the tester perform the testing first in a development environment, then in a production-like environment after the thing-under-test has been packaged and deployed? Note: in this case, the package/deploy process is handled semi-manually by two separate teams, so there is a delay.
- Or, should the tester perform all the testing in a production-like environment after the thing-under-test has been packaged and deployed?
Advantage of scenario 1 above:
- Dev environment testing shortens the feedback loop. This would be deep testing. If problems surface they would be quicker and less risky to fix. The post-package testing would be shallow testing, answering questions like: did the stuff I deep tested get deployed properly?
Advantage of scenario 2 above:
- Knock out the testing in one environment. The deep testing will indirectly cover the package/deployment testing.
On the surface, scenario 2 looks better because it only requires one testing chunk, NOT two chunks separated by a lengthy gap. But what happens if a problem surfaces in scenario 2? Now we must go through two lengthy gaps. A second problem? Three gaps. And so on.
My conclusion: Scenario 1 is better unless this type of thing-under-test is easy and has a history of zero problems.
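To make scenario 1 concrete, here is a minimal sketch of what the shallow post-package check might look like as automation. The /version endpoint and the build identifier are my hypothetical assumptions, not something every product has:

```python
import requests

# Hypothetical: the build identifier we already deep tested in the dev
# environment. Assumes the package/deploy process stamps each build with one.
EXPECTED_BUILD = "2016.03.0042"

def test_deployed_build_is_the_one_we_deep_tested():
    """Shallow post-package check: did the thing-under-test actually get
    packaged and deployed to the production-like environment?"""
    # Hypothetical endpoint; assumes the app reports its own build metadata.
    response = requests.get("https://prodlike.example.com/version", timeout=10)
    response.raise_for_status()
    assert response.json()["build"] == EXPECTED_BUILD
```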
Start From Scratch vs. Old Test Documentation
A tester asked me an interesting question this morning:
“How can I find old test documentation for a completed feature so I can re-use those tests on a similar new feature?”
The answer is easy. But that’s not what this post is about.
It seems to me, a skilled tester can usually come up with better tests…today, from scratch. Test documentation gets stale fast. These are some reasons I can think of:
- A skilled tester knows more about testing today than they did last month.
- A skilled tester knows more about the product-under-test today than they did last month.
- The product-under-test is different today than it was last month. It might have new code, refactored code, more users, more data, a different reputation, a different platform, a different time of the year, etc.
- The available time to perform tests might be different.
- The test environment might be different.
- The product coder might be different.
- The stakeholders might be different.
- The automated regression check suite may be different.
If we agree with the above, we’ll probably get better testing when we tailor it to today’s context. It’s also way more fun to design new tests and probably quicker (unless we are talking about automation, which I am not).
So digging up old test documentation as the basis for determining which tests to run today is probably the wrong reason to dig it up. A good reason is to answer questions about the testing that was performed last month.
The Willie Horton Effect In Software Testing
While reading Paul Bloom’s The Baby In The Well article in The New Yorker, I noted the Willie Horton effect’s parallel to software testing:
In 1987, Willie Horton, a convicted murderer who had been released on furlough from the Northeastern Correctional Center, in Massachusetts, raped a woman after beating and tying up her fiancé. The furlough program came to be seen as a humiliating mistake on the part of Governor Michael Dukakis, and was used against him by his opponents during his run for President, the following year. Yet the program may have reduced the likelihood of such incidents. In fact, a 1987 report found that the recidivism rate in Massachusetts dropped in the eleven years after the program was introduced, and that convicts who were furloughed before being released were less likely to go on to commit a crime than those who were not. The trouble is that you can’t point to individuals who weren’t raped, assaulted, or killed as a result of the program, just as you can’t point to a specific person whose life was spared because of vaccination.
How well was a given application tested? Users don’t know what problems the testers saved them from. The quality may be celebrated to some extent, but one production bug will get all the press.
Couple Automated Checks With Product Bugs
If you find an escape (i.e., a bug for something marked “Done”), you may want to develop an automated check for it. In a meeting today, there was a discussion about when the automated check should be developed. Someone asked, “Should we put a task on the product backlog?” IMO:
The automated check should be developed when the bug fix is developed. It should be part of the “Done” criteria for the bug.
Apply the above heuristically. If your bug gets deferred to a future Sprint, defer the automated check to that future Sprint. If your bug gets fixed in the current Sprint, develop your automated check in the current Sprint.
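For illustration, here is a minimal pytest sketch of a regression check developed alongside a bug fix. The bug ID, the product module, and the penny-dropping behavior are hypothetical placeholders, not real product code:

```python
from decimal import Decimal

# Hypothetical escape: BUG-1234 reported that order totals dropped a penny
# on fractional quantities. This check is developed alongside the fix and
# is part of the bug's "Done" criteria.
from myapp.orders import calculate_total  # hypothetical product module

def test_bug_1234_total_keeps_pennies_on_fractional_quantities():
    total = calculate_total(unit_price=Decimal("0.10"), quantity=Decimal("1.5"))
    assert total == Decimal("0.15")
```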
Get Your Automated Checks In Their Face
Test Planning Is Throwaway, Testing Is Forever
FeatureA will be ready to test soon. You may want to think about how you will test FeatureA. Let’s call this activity “Test Planning”. In Test Planning, you are not actually interacting with the product-under-test. You are thinking about how you might do it. Your Test Planning might include, but is not limited to, the following:
- Make a list of test ideas you can think of. A Test Idea is the smallest amount of information that can capture the essence of a test (e.g., “paste a 5,000-character string into the name field”).
- Grok FeatureA: Analyze the requirements document. Talk to available people.
- Interact with the product-under-test before it includes FeatureA.
- Prepare the test environment data and configurations you will use to test.
- Note any specific test data you will use.
- Determine what testing you will need help with (e.g., testing someone else should do).
- Determine what not to test.
- Share your test plan with anyone who might care. At least share the test ideas (first bullet) with the product programmers while they code.
- If using automation, design the check(s) and stub them out (see the sketch after this list).
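Here is what that stubbing might look like, as a minimal pytest sketch. FeatureA’s behaviors and the test names are hypothetical placeholders:

```python
import pytest

# Stubs written during Test Planning: each one captures a test idea.
# The bodies get filled in once FeatureA is actually testable.

@pytest.mark.skip(reason="stub: FeatureA not testable yet")
def test_featurea_accepts_valid_input():
    ...

@pytest.mark.skip(reason="stub: FeatureA not testable yet")
def test_featurea_rejects_empty_input():
    ...
```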
All the above are Test Planning activities. About four of the above resulted in something you wrote down. If you wrote them in one place, you have an artifact. The artifact can be thought of as a Test Plan. As you begin testing (interacting with the product-under-test), I think you can use the Test Plan one of two ways:
- Morph it into “Test Notes” (or “Test Results”).
- Refer to it then throw it away.
Either way, we don’t need the Test Plan after the testing, just as we don’t need the outputs of those other Test Planning activities after the testing. Plans are more useful before the thing they plan.
Execution is more valuable than a plan. A goal of a skilled tester is to report on what was learned during testing. The Test Notes are an excellent way to do this. Attach the Test Notes to your User Story. Test Planning is throwaway.