I noticed one of our development teams was creating new Jira Issues for each bug found during the development cycle. IMO, this is an antipattern.
These are the problems I think it can create:
- New Jira Issues (bug reports) create unnecessary admin work for the whole team.
- We see these bug reports cluttering an Agile board.
- They may have to get prioritized.
- We have to track them, they have to get assigned, change statuses, get linked, maybe even estimated.
- They take time to create.
- They may cause us to communicate via text rather than conversation.
- Bug reports mislead lazy people into tracking progress, quality, or team performance by counting bugs.
- It leads to confusion about how to manage the User Story. If the User Story is done except for the open bug reports, can we mark the User Story “Done”? Or do we need to keep the User Story open until the logged bugs get fixed…”Why is this User Story still in progress? Oh yeah, it’s because of those linked logged bugs”.
- It’s an indication our acceptance criteria are inadequate. That is to say, if the acceptance criteria in the User Story are not met, we wouldn’t have to log a bug report. We would merely NOT mark the Story “Done”.
- Bug reports may give us an excuse not to fix all bugs…”let’s fix it next Sprint”, “let’s put it on the Product Backlog and fix it some other day”…which means never.
- It’s probably a sign the team is breaking development into a coding phase and a testing phase. Instead, we really want the testing and programming to take place in one phase...development.
- It probably means the programmer is considering their code “done”, throwing it over the wall to a tester, and moving on to a different Story. This misleads us on progress. Untested is as good as nothing.
If the bug is an escape (i.e., it occurs in production), it’s probably a good idea to log it.
Ways To Boost The Value of Testers Who Don’t Code
7 comments Posted by Eric Jacobson at Wednesday, February 10, 2016
Despite the fact that most Automation Engineers are writing superficial automation, the industry still worships automation skills, and for good reasons. This is intimidating for testers who don’t code, especially when finding themselves working alongside automation engineers.
Here are some things I can think of that testers-who-don’t-code can do to boost their value:
- Find more bugs - This is one of the most valued services a tester can provide. Scour a software quality characteristics list like this to expand your test coverage and be more aggressive with your testing. You can probably cover way more than automation engineers in a shorter amount of time. Humans are much better at finding bugs than machines. Finding bugs is not a realistic goal of automation.
- Faster Feedback – Everybody wants faster feedback. Humans can deliver faster feedback than automation engineers on new testing. Machines are faster on old testing (e.g., regression testing). Report back on what works and doesn’t while the automation engineer is still writing new test code.
- Give better test reports – Nobody cares about test results. Find ways to sneak them in and make them easier to digest. Shove them into your daily stand-up report (e.g., “based on what I tested yesterday, I learned that these things appear to be working, great job team!”). Give verbal test summaries to your programmers after each and every test session with their code. Give impromptu test summaries to your Product Owner.
- Sit with your users – See how they use your product. Learn what is important to them.
- Volunteer for unwanted tasks – “I’ll stay late tonight to test the patch”, “I’ll do it this weekend”. You have a personal life though. Take back the time. Take Monday off.
- Work for your programmers - Ask what they are concerned about. Ask what they would like you to test.
- What if? – Show up at design meetings and have a louder presence at Sprint Planning meetings. Blast the team with relentless "what if" scenarios. Use your domain expertise and user knowledge to conceive of conflicts. Remove the explicit assumptions one at a time and challenge the team, even at the risk of being ridiculous (e.g., what if the web server goes down? what if their phone battery dies?).
- Do more security testing – Security testing, for the most part, cannot be automated. Develop expertise in this area.
- Bring new ideas – Read testing blogs and books. Attend conferences. Tweak your processes. Pilot new ideas. Don’t be status quo.
- Consider Integration – Talk to the people who build the products that integrate with your product. Learn how to operate their product and perform integration tests that are otherwise being automated via mocks. You just can’t beat the real thing.
- Help your automation engineer – Tell them what you think needs to be automated. Don’t be narrow-minded in determining what to automate. Ask them which automation they are struggling to write or maintain, then offer to maintain it yourself, with manual testing.
- Get visible – Ring a bell when you find a bug. Give out candy when you don’t find a bug. Wear shirts with testing slogans, etc.
- Help code automation – You’re not a coder, so don’t go building frameworks, designing automation patterns, or even independently designing new automated checks. Ask if there are straightforward automation patterns you can reuse with new scenarios. Ask for levels of abstraction that hide the complicated methods and let you focus on business inputs and observations (see the sketch below). Here are other ways to get involved.
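To make that last ask concrete, here is a minimal Python sketch of the kind of abstraction layer an automation engineer might expose, so a non-coding tester can express checks in business terms. The product API, URL, and field names are hypothetical; the point is that the HTTP plumbing lives in one place the tester never has to read.

```python
import requests

class PolicySearch:
    """Hypothetical business-level wrapper maintained by the automation engineer.
    It hides the HTTP details so a tester only supplies business inputs."""

    def __init__(self, base_url):
        self.base_url = base_url  # e.g., the QA environment's API root

    def find_policies(self, customer_id, status="Active"):
        # All request mechanics are hidden here.
        response = requests.get(
            f"{self.base_url}/policies",
            params={"customerId": customer_id, "status": status},
            timeout=30,
        )
        response.raise_for_status()
        return response.json()["policies"]

# The tester's check then reads like a business statement:
def check_cancelled_policies_are_excluded():
    search = PolicySearch("https://qa.example.com/api")
    policies = search.find_policies(customer_id="C-1001")
    assert all(p["status"] == "Active" for p in policies), "found a non-active policy"
```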
Getting Manual Testers Involved in Automation
2 comments Posted by Eric Jacobson at Friday, March 27, 2015
Most of the testers at my new company do not have programming skills (or at least are not putting them to use). This is not necessarily a bad thing. But in our case, many of the products-under-test are perfect candidates for automation (e.g., they are API rich).
We are going through an Agile transformation. Discussions about tying programmatic checks to “Done” criteria are occurring and most testers are now interested in getting involved with automation. But how?
I think this is a common challenge.
Here are some ways I have had success getting manual testers involved in automation. I’ll start with the easiest and work my way down to those requiring more ambition. A tester wanting to get involved in automation can:
- Do unit test reviews with their programmers. Ask the programmers to walk you through the unit tests. If you get lost ask questions like, “what would cause this unit test to fail?” or “can you explain the purpose of this test at a domain level?”.
- Work with automators to inform the checks they automate. If you have people focused on writing automated checks, help them determine what automation might help you. Which checks do you often repeat? Which are boring?
- Design/request a test utility that mocks some crucial interface or makes the invisible visible. Bounce ideas off your programmers and see if you can design test tools to speed things up. This is not traditional automation. But it is automation by some definitions.
- Use data-driven automation to author/maintain important checks via a spreadsheet (see the sketch after this list). This is a brilliant approach because it lets the test automator focus on what they love, designing clever automation. It lets the tester focus on what they love, designing clever inputs. Show the tester where the spreadsheet is and how to kick off the automation.
- Copy and paste an automated check pattern from an IDE, rename the check, and change the inputs and expected results to create new checks. This takes little to no coding skill. This is a potential end goal. If a manual tester gets to this point, buy them a beer and don’t push them further. This leads to a great deal of value, and going further can get awkward.
- Follow an automated check pattern but extend the framework. Spend some time outside of work learning to code.
- Stand up an automation framework, design automated checks. Support an Agile team by programming all necessary automated checks. Spend extensive personal time learning to code. Read books, write personal programs, take online courses, find a mentor.
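As a loose sketch of the spreadsheet-driven idea above: the automator writes a small harness once, and the tester authors and maintains checks by adding rows to a spreadsheet. The CSV file name, column names, and the function being checked are all assumptions, not anyone's real framework.

```python
import csv

def run_spreadsheet_checks(check_fn, path="discount_checks.csv"):
    """Run one check per spreadsheet row.

    Each row supplies inputs and an expected result. The tester maintains the
    CSV; the automator maintains this harness and check_fn (which wraps
    whatever product call is actually being checked).
    """
    failures = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            actual = check_fn(float(row["order_total"]), row["customer_tier"])
            if str(actual) != row["expected_discount"]:
                failures.append({"row": row, "actual": actual})
    return failures

# The tester adds rows like:
#   order_total,customer_tier,expected_discount
#   100.00,Gold,10.00
# and kicks off the run without ever touching the harness code.
```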
Test Your Testing With Bug Seeding – Part 2
1 comments Posted by Eric Jacobson at Wednesday, November 26, 2014
If you read part 1, you may be wondering how my automated check performed…
The programmer deployed the seeded bug and I’m happy to report, my automated check found it in 28 seconds!
Afterwards, he seeded two additional bugs. The automated check found those as well. I had to temporarily modify the automated check code to ignore the first bug in order to find the second. This is because the check stops checking as soon as it finds one problem. I could tweak the code to collect problems and keep checking but I prefer the current design.
Here is the high level generic design of said check:
Build the golden masters:
- Make scalable checks - Before test execution, build as many golden masters as your coverage goals require. This is a one-time-only task (until the golden masters need to be updated for expected changes).
- Bypass GUI when possible – Each of my golden masters consists of the response XML from a web service call, saved to a file. Each XML response has over half a million nodes, which are mapped to a complex GUI. In my case, my automated check will bypass the GUI. GUI automation could never have found the above seeded bug in 28 seconds. My product-under-test takes about 1.5 minutes just to log in and navigate to the module being tested. Waiting for the GUI to refresh after the countless service calls made in the automated check would have taken hours.
- Golden masters must be golden! Use a known good source for the service call. I used Production because my downstream environments are populated with data restored from production. You could use a test environment as long as it was in a known good state.
- Use static data - Build the golden masters using service request parameters that return a static response. In other words, when I call said service in the future, I want the same data returned. I used service request parameters to pull historical data because I expect it to be the same data next week, month, year, etc.
- Automate golden master building - I wrote a utility method to build my golden masters. This is basically re-used code from the test method, which builds the new objects to compare to the golden masters.
Do some testing:
- Compare - This is the test method. It calls the code-under-test using the same service request parameters used to build the golden masters. The XML service response from the code-under-test is then compared to that of the archived golden masters, line-by-line.
- Ignore expected changes - In my case there are some XML nodes the check ignores. These are nodes with values I expect to differ. For example, the CreatedDate node of the service response object will always be different from that of the golden master.
- Report - If any non-ignored XML line is different, it’s probably a bug: fail the automated check, report the differences with line number and file references (see below), and investigate. A minimal sketch of this design follows the list.
- Write Files - For my goals, I have 11 different golden masters (to compare with 11 distinct service response objects). The automated check loops through all 11 golden master scenarios, writing each service response XML to a file. The automated check doesn’t use the files, they are there for me. This gives me the option to manually compare suspect new files to golden masters with a diff tool, an effective way of investigating bugs and determining patterns.
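Here is a minimal Python sketch of the design above, covering both the one-time golden master build and the compare step. The service URL, request parameters, and ignored node names are hypothetical stand-ins; the real check would use whatever client produces the XML responses.

```python
import requests

SERVICE_URL = "https://prod.example.com/policy-service"  # assumed known-good source
STATIC_PARAMS = {"policyYear": 2013, "region": "SE"}     # parameters chosen to return static, historical data
IGNORED_NODES = ("<CreatedDate>", "<ModifiedDate>")      # nodes whose values are expected to differ

def fetch_response_xml(base_url, params):
    """Call the service and return its XML response as text."""
    response = requests.get(base_url, params=params, timeout=60)
    response.raise_for_status()
    return response.text

def build_golden_master(path):
    """One-time utility: archive a known-good response as the golden master."""
    with open(path, "w") as f:
        f.write(fetch_response_xml(SERVICE_URL, STATIC_PARAMS))

def check_against_golden_master(new_xml, golden_master_path, output_path):
    """Compare a fresh response, line by line, to the archived golden master.
    Writes the fresh response to a file (for manual diffing) and fails on the
    first non-ignored difference, matching the stop-at-first-problem design."""
    with open(output_path, "w") as f:
        f.write(new_xml)  # this file is for the human investigator, not the check
    with open(golden_master_path) as f:
        golden_lines = f.read().splitlines()
    new_lines = new_xml.splitlines()

    for line_no, (expected, actual) in enumerate(zip(golden_lines, new_lines), 1):
        if expected != actual and not any(n in expected for n in IGNORED_NODES):
            raise AssertionError(
                f"Line {line_no} differs from {golden_master_path}: "
                f"expected {expected!r}, got {actual!r}"
            )
    assert len(golden_lines) == len(new_lines), "response length changed"
```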
A Suggestion For Software Testing Communities
6 comments Posted by Eric Jacobson at Tuesday, January 28, 2014
Warning: This has very little to do with testing.
Additional Warning: I’m about to gripe.
I attended the 3rd Software Testing Club Atlanta meetup Wednesday. Some of the meeting was spent fiddling with a virtual task board, attempting to accommodate the local people who dialed in to the meeting.
IT is currently crazy about low tech dashboards (e.g., sticky notes on a wall). But we keep trying to virtualize them. IMO, virtualizing stickies on a wall is silly. The purpose is to huddle around, in-person, and ditch the complicated software that so often wastes more time than it saves.
IMO, the whole purpose of a local testing club that meets over beer and pizza is to meet over beer and pizza...in person, and engage in the kind of efficient discussion that is best done in person. Anything else defeats the purpose of a “local” testing club. If I wanted to dial in and talk about testing over the phone, it wouldn’t have to be with local people.
I’m sad to see in-person meetings increasingly replaced by dial-ins. But IMO, joining virtual attendees to real-life meetings can be even worse. Either make everyone virtual or make everyone meet physically.
Yes, I’m a virtual meeting curmudgeon. I accept that virtual connections have their advantages, and I allow my team to work from home as often as three days a week. But I still firmly believe you can’t beat good old-fashioned, real-life, in-person discussions.
It’s a cliché, I know. But it really gave me pause when I heard Jeff “Cheezy” Morgan say it during his excellent STAReast track session, “Android Mobile Testing: Right Before Your Eyes”. He said something like, “instead of looking for bugs, why not focus on preventing them?”
Cheezy demonstrated Acceptance Test Driven Development (ATDD) by giving a live demo, writing Ruby tests via Cucumber, for product code that didn’t exist. The tests failed until David Shah, Cheezy’s programmer, wrote the product code to make them pass.
(Actually, the tests never passed, which they later blamed on incompatible Ruby versions…ouch. But I’ll give these two guys the benefit of the doubt. )
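For readers who haven’t seen ATDD in action: the acceptance tests are written and run, and fail, before the product code exists; the programmer then writes code until they pass. Cheezy’s demo used Ruby and Cucumber; the sketch below is a rough, hypothetical analogue in Python with pytest (the banking module and its functions don’t exist when the test is first written, which is the point).

```python
# test_transfers.py -- written by the tester/programmer/BA together BEFORE the
# feature exists. The first run fails (ImportError, then failing asserts), and
# the programmer writes banking.transfer() to make it pass.
import pytest
from banking import transfer, InsufficientFunds  # hypothetical module, not yet written

def test_transfer_moves_money_between_accounts():
    # Agreed test data: both accounts start with a balance of 100.
    result = transfer(from_account="A-100", to_account="A-200", amount=50)
    assert result.from_balance == 50
    assert result.to_balance == 150

def test_transfer_rejects_overdraft():
    with pytest.raises(InsufficientFunds):
        transfer(from_account="A-100", to_account="A-200", amount=1000)
```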
Now back to my blog post title. I find this mindshift appealing for several reasons, some of which Cheezy pointed out and some of which he did not:
- Per Cheezy’s rough estimate, 8 out of 10 bugs involve the UI. There is tremendous benefit to the programmer knowing about these UI bugs while initially writing the UI. Thus, why not have our testers begin exploratory testing before the Story is code complete?
- Programmers are often incentivized to get something to “code complete” so the testers can have it (and so the programmers can work on the next thing). What if we could convince programmers it’s not code complete until it’s tested?
- Maybe the best time to review a Story is when the team is actually about to start working on it; not at the beginning of a Sprint. And what do we mean when we say the team is actually about to start working on it?
- First we (Tester, Programmer, Business Analyst) write a bunch of acceptance tests.
- Then, we start writing code as we start executing those tests.
- Yes, this is ATDD, but I don’t think automation is as important as the consultants say. More on that in a future post.
- Logging bugs is soooooo time consuming and can lead to dysfunction. The bug reports have to be managed and routed appropriately. People can’t help but count them and use them as measurements for something…success or failure. If we are doing bug prevention, we never need to create bug reports.
Okay, I’m starting to bore myself, so I’ll stop. Next time I want to explore Manual ATDD.
I’m not asking if they *can* run unattended. I’m asking if they do run unattended…consistently…without ever failing to start, hanging, or requiring any human intervention whatsoever…EVER.
Automators, be careful. If you tell too many stories about unattended check-suite runs, the non-automators just might start believing you. And guess what will happen if they start running your checks? You know that sound when Pac-Man dies? That’s what they’ll think of your automated checks.
I remember hearing a QA Director attempt to encourage “test automation” by telling fantastical stories of his tester past:
“We used to kick off our automated tests at 2PM and then go home for the day. The next day, we would just look at the execution results and be done.”
Years later, I’ve learned to be cynical about said stories. In fact, I have yet to see an automated test suite (including my own) that consistently runs without ever requiring the slightest intervention from humans, who unknowingly may:
- Prep the test environment “just right” before clicking “Run”.
- Restart the suite when it hangs and hope the anomaly goes away.
- Re-run the failed checks because they normally pass on the next attempt.
- Realize the suite works better when kicked off in smaller chunks.
- Recognize that sweet spot, between server maintenance windows, where the checks have a history of happily running without hardware interruptions.
In response to my Don’t Forget To Look Before You Log post, bugs4tester asked how to prevent duplicate bug reports in large project teams that have accumulated >300 bug reports.
That's a pretty good question. Here are some things that come to mind:
- Consider not logging some bugs. One of my project teams does all new feature testing in our development environment. The bugs get fixed as they are found, so bug reports are not necessary. Exceptions include:
- Bugs we decide not to fix.
- Bugs found in our QA environment. We are SOX compliant and the auditors like seeing bug reports to prove the code change is necessary.
- “Escapes” – Bugs found in production.
- Readers Ken and Camal Cakar suggested it may be better to err on the side of logging duplicate bug reports than to spend time dupe hunting or, worse, mistakenly assume the bug is already logged. I agree. Maybe we can use a 120-Second-Dupe-BugReport-Search heuristic: “If I can’t determine whether or not this bug is logged within 120 seconds, I will log it.”
- Yes, it takes time to "look before you log", but you may gain that time back. If, every so often, you find that a bug report already exists, you are saving the time it would have taken you to log the bug. You are also saving the time it would have taken other team members encountering the bug report to sort through their confusion (e.g., “Hey, didn’t we already fix this bug?”). IMO, dupes cause people to stare at each report, trying to determine the difference, long after the context has faded.
- "Look before you log" time can be reduced with bug report repository organization. Examples include:
- Can you assign bug reports to modules or other categories?
- Can the team agree on a standard naming scheme? For example: always list the name of the screen or report in the bug report title.
- Does your bug repository provide a keyword search scoped to just the bug report Title or Description? If not, can you access the bug repository DB to write your own (see the sketch after this list)?
- Can you use keyboard shortcuts or assign hotkeys to dupe bug report searches?
- Sometimes you don’t have to “look before you log”. When testing new functionality, I think most testers know when they have discovered a bug that could not have existed with prior code. On the other hand, some testers can recognize recurring bugs that have been around for years; in these cases the tester may already know it is logged.
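As one hypothetical way to roll your own scoped search (per the repository-organization ideas above): a quick fuzzy match over an export of existing bug titles can support the 120-second dupe hunt. The CSV export, its column names, and the similarity cutoff below are all assumptions, not a feature of any particular bug tracker.

```python
import csv
from difflib import SequenceMatcher

def likely_dupes(candidate_title, export_path="bug_export.csv", cutoff=0.6):
    """Return existing bug reports whose titles look similar to the candidate.
    Intended to support a quick dupe search, not to be authoritative."""
    matches = []
    with open(export_path, newline="") as f:
        for row in csv.DictReader(f):  # assumes the export has Key and Title columns
            score = SequenceMatcher(
                None, candidate_title.lower(), row["Title"].lower()
            ).ratio()
            if score >= cutoff:
                matches.append((row["Key"], row["Title"], round(score, 2)))
    return sorted(matches, key=lambda m: m[2], reverse=True)

# Example: likely_dupes("Save button disabled on Payments screen")
```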
Thanks for the fun question. I hope one of my suggestions helps.
Testing, Everybody’s An Expert in Hindsight
1 comments Posted by Eric Jacobson at Wednesday, September 05, 2012
I just came from an Escape Review Meeting. Or as some like to call it, a “Blame Review Meeting”. I can’t help but feel empathy for one of the testers who felt a bit…blamed.
With each production bug, we ask, “Could we do something to catch bugs of this nature?”. The System 1 response is “no, way too difficult to expect a test to have caught it”. But after 5 minutes of discussion, the System 2 response emerges, “yes, I can imagine a suite of tests thorough enough to have caught it, we should have tests for all that”. Ouch, this can really start to weigh on the poor tester.
So what’s a tester to do?
- First, consider meekness. As counterintuitive as it seems, I believe defending your test approach is not going to win respect. IMO, there is always room for improvement. People respect those who are open to criticism and new ideas.
- Second, entertain the advice but don’t promise the world. Tell them about the Orange Juice Test (see below).
The Orange Juice Test is from Jerry Weinberg’s book, The Secrets of Consulting. I’ll paraphrase it:
A client asked three different hotels to supply said client with 700 glasses of fresh squeezed orange juice tomorrow morning, served at the same time. Hotel #1 said “there’s no way”. Hotel #2 said “no problem”. Hotel #3 said “we can do that, but here’s what it’s going to cost you”. The client didn’t really want orange juice. They picked Hotel #3.
If the team wants you to take on new test responsibilities or coverage areas, there is probably a cost. What are you going to give up? Speed? Other test coverage? Your kids? Make the costs clear, let the team decide, and there should be no additional pain on your part.
Remember, you’re a tester, relax.
One of my tester colleagues and I had an engaging discussion the other day.
If a test failure is not caused by a problem in the system-under-test, should the tester bother to say the test failed?
My position is: No.
If a test fails but there is no problem with the system-under-test, it seems to me it’s a bad test. Fix the test or ignore the results. Explaining that a test failure is nothing to be concerned about gives the project team a net gain of nothing. (Note: If the failure has been published, my position changes; the failure should be explained).
The context of our discussion was the test automation space. I think test automators, for some reason, feel compelled to announce automated check failures in one breath, and in the next, explain why these failures should not matter. “Two automated checks failed…but it’s because the data was not as expected, so I’m not concerned” or “ten automated checks are still failing but it’s because something in the system-under-test changed and the automated checks broke…so I’m not concerned”.
My guess is, project teams and stakeholders don’t care if tests passed or failed. They care about what those passes and failures reveal about the system-under-test. See the difference?
Did the investigation of the failed test reveal anything interesting about the system-under-test? If so, share what it revealed. The fact that the investigation was triggered by a bad test is not interesting.
If we’re not careful, Test Automation can warp our behavior. IMO, a good way of understanding how to behave in the test automation space is to pretend your automated checks are sapient (AKA “manual”) tests. If a sapient tester gets different results than they expected, but later realizes their expectations were wrong, they don’t bother to explain their recent revelation to the project team. A sapient tester would not say, “I thought I found a problem, but then I realized I didn’t.” Does that help anyone?
My System 1 thinking says “no”. I’ve often heard that separation of duties makes testers valuable.
Let’s explore this.
A programmer and a tester are both working on a feature requiring a complex data pull. The tester knows SQL and the business data better than the programmer.
If Testers Write Source Code:
The tester writes the query and hands it to the programmer. Two weeks later, as part of the “testing phase”, the tester tests the query (they wrote themselves) and finds 0 bugs. Is anything dysfunctional about that?
If Testers do NOT Write Source Code:
The programmer struggles but manages to cobble some SQL together. In parallel, the tester writes their own SQL and puts it in an automated check. During the “testing phase”, the tester compares the results of their SQL with that of the programmer’s and finds 10 bugs. Is anything dysfunctional about that?
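To make the second scenario concrete, here is a rough sketch of the kind of automated check the tester might write: run their own independently written SQL and the query the programmer shipped, then compare the result sets. The queries and connection are placeholders; any DB-API connection (sqlite3, pyodbc, etc.) would do.

```python
def compare_data_pulls(connection, testers_sql, programmers_sql):
    """Run the tester's independently written query and the programmer's query,
    then report any rows that differ between the two result sets."""
    def run(sql):
        cursor = connection.cursor()
        cursor.execute(sql)
        return set(map(tuple, cursor.fetchall()))

    expected = run(testers_sql)       # the tester's version of the data pull
    actual = run(programmers_sql)     # what the product actually runs

    missing = expected - actual       # rows the product failed to return
    extra = actual - expected         # rows the product should not have returned
    assert not missing and not extra, (
        f"{len(missing)} missing row(s), {len(extra)} unexpected row(s)"
    )
```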
After failing in production for a third time, the team lead’s passive aggressive tendencies became apparent in his bug report title. Can you blame him?
It all depends on context, of course. But if three attempts to get something working in production still fail…there may be a larger problem somewhere.
That got me thinking. Maybe we should add passive aggressive suffixes for all our “escapes” (bugs not caught in test). It would serve to embarrass and remind ourselves that we can do better.
- “…fail I” would not be so bad.
- “…fail II” would be embarrassing.
- “…fail III” should make us ask for help testing and coding.
- “…fail IV” should make us ask to be transferred to a more suitable project.
- by “…fail V” we should be taking our users out to lunch.
- “…fail VI” I’ve always wanted to be a marine biologist, no time like the present.
“I’m just the tester, if it doesn’t run it’s not my problem, it’s the deployment team’s problem. I can tell you how well it will work, but first you’ve got to deploy it properly.”
One of the most difficult problems to prevent is a configuration problem: a setting that is specific to production. You can attempt perfect testing in a non-production environment, but as soon as your Config Management guys roll it out to prod with the prod config settings, the best you can do is cross your fingers (unless you’re able to test in prod).
After a recent prod server migration, my config management guys got stuck scrambling around trying to fix various prod config problems. We had all tested the deployment scripts in multiple non-prod environments. But it still didn’t prepare us for the real thing.
It’s too late for testers to help now.
I’ve been asking myself what I could have done differently. The answer seems to be asking (and executing) more hypothetical questions and tests, like:
- If this scheduled nightly task fails to execute, how will we know?
- If this scheduled nightly task fails to execute, how will we recover?
But often I skip the above because I’m so focused on:
- When this scheduled nightly task executes, does it do what it’s supposed to do?
The hypotheticals are difficult to spend time on because we, as testers, feel like we’re not getting credit for them. We can’t prevent the team from having deployment problems. But maybe we can ask enough questions to prepare them for the bad ones.
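One hedged example of turning the first hypothetical into something concrete: if the nightly task records a heartbeat (a file, a row, a log entry) when it finishes, a small monitor can answer “how will we know?” before production does. The heartbeat path and the alert mechanism below are assumptions for illustration.

```python
from datetime import datetime, timedelta
from pathlib import Path

HEARTBEAT = Path("/var/app/nightly_task.heartbeat")  # assumed: the task touches this on success

def nightly_task_ran_recently(max_age_hours=26):
    """Return False if the nightly task hasn't reported success within the
    expected window -- i.e., how we would know it failed to execute."""
    if not HEARTBEAT.exists():
        return False
    last_run = datetime.fromtimestamp(HEARTBEAT.stat().st_mtime)
    return datetime.now() - last_run < timedelta(hours=max_age_hours)

if __name__ == "__main__":
    if not nightly_task_ran_recently():
        print("ALERT: nightly task did not run")  # stand-in for a real alert (email, pager, etc.)
```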
When bugs escape to production, does your team adjust?
We started using the following model on one of my projects. It appears to work fairly well. Every 60 days we meet and review the list of “escapes” (i.e., bugs found in production). For each escape, we ask the following questions:
- Could we do something to catch bugs of this nature?
- Is it worth the extra effort?
- If so, who will be responsible for said effort?
The answer to #1 is typically “yes”. Creative people are good at imagining ultimate testing. It’s especially easy when you already know the bug. There are some exceptions though. Some escapes can only be caught in production (e.g., a portion of our project is developed in production and has no test environment).
The answer to #2 is split between “yes” and “no”. We may say “yes” if the bug has escaped more than once, significantly impacts users, or when the extra effort is manageable. We may say “no” when a mechanism is in place to alert our team of the prod error; we can patch some of these escapes before they affect users, with less effort than required to catch them in non-prod environments.
The answer to #3 falls to Testers, Programmers, BAs, and sometimes several or all of them.
So…when bugs escape to production, does my team adjust? Sometimes.
Test Automation Scrum Meeting Ambiguity
4 comments Posted by Eric Jacobson at Thursday, April 12, 2012
For those of you writing automated checks and giving scrum reports, status reports, test reports, or some other form of communication to your team, please watch your language…and I'm not talking about swearing.
You may not want to say, “I found a bunch of issues”, because sometimes when you say that, what you really mean is, “I found a bunch of issues in my automated check code” or “I found a bunch of issues in our product code”. Please be specific. There is a big difference and we may be assuming the wrong thing.
If you often do checking by writing automated checks, you may not want to say, “I’m working on FeatureA”, because what you really mean is “I’m writing the automated checks for FeatureA and I haven't executed them or learned anything about how FeatureA works yet” or “I’m testing FeatureA with the help of automated checks and so far I have discovered the following…”
The goal of writing automated checks is to interrogate the system under test (SUT), right? The goal is not just to have a bunch of automated checks. See the difference?
Although your team may be interested in your progress creating the automated checks, they are probably more interested in what the automated checks have helped you discover about the SUT.
It’s the testing, stupid. That’s why we hired you instead of another programmer.
I believe testers have the power to either slow down the rate of production deployments or speed them up, without adversely affecting their testing value.
- My test career began as a “Quality Cop”. I believed a large responsibility of my job was preventing things from going to production.
- After taking Michael Bolton’s Rapid Software Testing class, I stopped trying to assure quality, stopped being the team bottleneck, and became a tester. At this point I was indifferent to what and when things went to production. I left it in the hands of the stakeholders and did my best to give them enough information to make their decision obvious.
- Lately, I’ve become an “Anti-bottleneck Tester”. I think it’s possible to be an excellent tester, while at the same time, working to keep changes flowing to production. It probably has something to do with my new perspective after becoming a test manager. But I still test a considerable amount, so I would like to think I’m not completely warped in the head yet.
Tell me if you agree. The following are actions testers can take to help things flow to production more quickly.
- When you’re testing new FeatureA and you find bugs that are not caused by the new code (e.g., the bug exists in production), make this clear. The bug should probably not slow down FeatureA’s prod deployment. Whether it gets fixed or not should probably be decoupled from FeatureA’s path. The tester should point this out.
- Be a champion of flushing out issues before the work hits the programmer’s desk. Don’t get greedy and keep them to yourself. Don’t think, “I just came up with an awesome test, I know it’s going to fail!”. No no no tester! Bad tester! Don’t do this. Go warn somebody before they finish coding.
- Be proactive with your test results. Don’t wait 4 days to tell your stakeholders what you discovered. Tell them what you know today! You may be surprised. They may say, “thanks, that’s all we really needed to know, let’s get this deployed”.
- Help your programmers focus. Work with them. I’m NOT talking about pair programming. When they are ready for you to start testing, start testing! Give them immediate feedback, keep your testing focused on the same feature. Go back and forth until you’re both done. Then wrap it up and work on the next one… together. When possible, don’t multi-task between user stories.
- Deployments are something to celebrate, not fear. This relates more to Kanban than Scrum. If you have faith in your testing then don’t fear deployments. We have almost daily deployments on my Kanban project now. This has been a huge change for testers who are used to 4 week deployments. Enthusiastic testers who take pride in rapid deployments can feel a much needed sense of accomplishment and spread the feeling to the rest of the team.
- Don’t waste too much time on subjective quality attributes. Delegate this testing to users or other non-testers who may be thrilled to help.
- Don’t test things that don’t need testing. See my Eight Things You May Not Need To Test post.
Every other development team is running around whining “we’re overworked”, “our deadlines are not feasible”. Testers have the power to influence their team’s success. Why not use it for the better?
Last week we celebrated two exciting things on one of my project teams:
- Completing our 100th iteration (having used ScrumBut for most of it).
- Kicking off the switch to Kanban.
Two colleagues and I have been discussing the pros and cons of switching to Kanban for months. After convincing ourselves it was worth the experiment, we slowly got buy-in from the rest of the project team and…here we go!
Why did we switch?
- Our product’s priorities change daily and in many cases users cannot wait until the iteration completes.
- Scrum came with a bunch of processes that never really helped our team. We didn’t need daily standups, we didn’t like iteration planning, we spent a lot of time breaking up stories and arguing about how to calculate iteration velocity. We ran out of problems to discuss in retrospectives and in some cases (IMO) forced ourselves to imagine new ones just to have something to discuss.
- We’re tired of fighting the work gaps at the start and end of iterations (i.e., testers are bored at the iteration start and slammed at the end, programmers are bored at the iteration end and slammed at the start).
- Deploying entire builds filled with lots of new Features forced us to run massive regression tests and to deploy on weekends during a maintenance window (causing us to work weekends and forcing our users to wait for Features until those weekends).
- Change is intellectually stimulating. This team has been together for 6 years and change may help us to use our brains again to make sure we are doing things for the right reasons. One can never know if another approach works better unless one tries it.
As I write this, I can hear all the Scrum Masters crying out in disgust, “You weren’t doing Scrum correctly if it didn’t work!” That’s probably true. But I’ll give part of the blame to the Scrum community, coaches, and consultants. I think you should strive to do a better job of explaining Scrum to the software development community. I hear conflicting advice from smart people frequently (e.g., “your velocity should go up with each iteration”, “your velocity should stay the same with each iteration”, “your velocity should bounce around with each iteration”).
When I was a young kid, my family got the game “Video Clue”. We invited my grandpa over to play and we all read through the instructions together. After being confused for a good 30 minutes, my grandpa swiped the pieces off the table and said, “anything with this many rules can’t possibly work”.
Anybody else out there using Kanban?
Tester Pride Without Bug Discovery
5 comments Posted by Eric Jacobson at Wednesday, February 08, 2012
You don’t need bugs to feel pride about the testing service you provide to your team. That was my initial message for my post, Avoid Trivial Bugs, Report What Works. I think I obscured said message by covering too many topics in that post so I’ll take a more focused stab at said topic.
Here is a list of things we (testers) can do to help feel pride in our testing when everything works and we have few to no bugs to report. Here we go…
- Congratulate your programmers on a job well done. Encourage a small celebration. Encourage more of the same by asking what they did differently. Feel pride with them and be grateful to be a part of a successful team.
- If you miss the ego boost that accompanies cool bug discovery, brag about your coolest, most creative, most technical test. You were sure the product would crash and burn but to your surprise, it held up. Sharing an impressive test is sometimes enough to show you’ve been busy.
- Give more test reports (or start giving them). A test report is a method of summarizing your testing story. You did a lot. Share it.
- Focus on how quickly whatever you tested has moved from development to production. Your manager may appreciate this even more than the knowledge that you found a bunch of bugs. Now you can test even more.
- Start a count on a banner or webpage that indicates how many days your team has gone without bugs.
- If the reason you didn’t find bugs is because you helped the programmer NOT write bugs from the beginning, then brag about it in your retrospective.
- Perform a “self check”; ask another team member to see if they can find any bugs in your Feature. If they can’t find bugs, you can feel pride in your testing. If they can find bugs, you can feel pride in the guts it took to expose yourself to failure (and learn another test idea).
What additions can you think of?
Which is More Important? Knowing What Works or Finding Bugs?
6 comments Posted by Eric Jacobson at Wednesday, February 01, 2012
“Instead of figuring out what works, they are stuck investigating what doesn’t work.”
Ilya asked:
Why did you use "stuck" referring to context of the other testers? Isn't "investigating what doesn’t work" more important than "figuring out what works" (other factors being equal)?
I love that question. It really made me think. Here is my answer:
- If stuff doesn’t work, then investigating why it doesn’t work may be more important than figuring out what works.
- If we’re not aware of anything that is broken, then figuring out what else works (or what else is not broken) is more important than investigating why something doesn’t work…because there is nothing broken to investigate.
When testers spend their time investigating things that don’t work, rather than figuring out what does work, it is less desirable than the opposite. Less desirable because it means we’ve got stuff that doesn’t work! Less desirable to who? It is less desirable for the development team. It means there are problems in the way we are developing software.
An ultimate goal would be bug free software, right? If skilled testers are not finding any bugs, and they are able to tell the team how the software appears to work, that is a good thing for the development team. However, it may be a bad thing for the tester.
- Many testers feel like failures if they don’t have any issues to investigate.
- Many testers are not sure what to do if they don’t have any issues to investigate.
- If everything works, many testers get bored.
- If everything works, there are fewer hero opportunities for many testers.
I don’t believe things need to be that way. I’m interested in exploring ways to have hero moments by delivering good news to the team. It sounds so natural but it isn’t. As a tester, it is soooooo much more interesting to tell the team that stuff just doesn’t work. Now that’s dysfunctional. Or is it?
And that is the initial thought that sparked my Avoid Trivial Bugs, Report What Works post.
Thanks, Ilya, for making me think.
Eight Things You May Not Need To Test
3 comments Posted by Eric Jacobson at Friday, January 20, 2012
This article will be published in a future edition of the Software Test Professionals Insider – community news. I didn’t get a chance to write my blog post this week so I thought I would cheat and publish it on my own blog first.
I will also be interviewed about it on Rich Hand’s live Blog Talk Radio Show on Tuesday, January 31st at 1PM eastern time.
My article is below. If it makes sense to you or bothers you, make sure you tune in to the radio show to ask questions…and leave a comment here, of course.
Don’t Test It
As testers, we ask ourselves lots of questions:
- What is the best test I can execute right now?
- What is my test approach going to be?
- Is that a bug?
- Am I done yet?
But how many of us ask questions like the following?
- Does this Feature need to ever be tested?
- Does it need to be tested by me?
- Who cares if it doesn’t work?
In my opinion, not enough of us ask questions like the three above. Maybe it’s because we’ve been taught to test everything. Some of us even have a process that requires every Feature to be stamped “Tested” by someone on the QA team. We treat testing like a routine factory procedure and sometimes we even take pride in saying...
“I am the tester. Therefore, everything must be tested...by me...even if a non-tester already tested it...even if I already know it will pass...even if a programmer needs to tell me how to test it...I must test it, no exceptions!”
This type of thinking may be giving testers a bad reputation. It treats testing as important because of a thoughtless process, rather than as a service that provides the most valuable information to someone.
James Bach came up with the following test execution heuristic:
Basic Heuristic: “If it exists, I want to test it”
I disagree with that heuristic, as it is shown above and often published. However, I completely agree with the full version James published when he introduced it in his 7/8/2006 blog post:
“If it exists, I want to test it. (The only exception is if I have something more important to do.)”
The second sentence is huge! Why? Because often we do have something more important to do, and it’s usually another test! Unfortunately, importance is not always obvious. So rather than measuring importance, I like to ask the three questions above and look for things that may not be worth my time to test. Here are eight examples of what I’m talking about:
- Features that don’t go to production - My team has these every iteration. These are things like enhancements to error logging tables or audit reports to track production activity. On Agile teams these fall under the umbrella of Developer User Stories. The bits literally do not go to production and by their nature cannot directly affect users.
- Patches for critical production problems that can’t get worse - One afternoon our customers called tech support indicating they were on the verge of missing a critical deadline because our product had a blocking bug. We had one hour to deliver the fix to production. The programmer had the fix ready quickly and the risk of further breaking production was insignificant because production was currently useless. Want to be a hero? Don’t slow things down. Pass it through to production. Test it later if you need to.
- Cosmetic bug fixes with time-consuming test setup - We fixed a spelling mistake that had shown up on a screen shot of a user error message. The user was unaware of the spelling mistake but we fixed it anyway; quick and easy. Triggering said error message required about 30 minutes of setup. Is it worth it?
- Straightforward configuration changes - Last year our product began encountering abnormally large production jobs it could not process. A programmer attempted to fix the problem with an obvious configuration change. There was no easy way to create a job large enough to cross the threshold in the QA environment. We made the configuration change in production and the users happily did the testing for us.
- Too technical for a non-programmer to test - Testing some functionality requires performing actions while using breakpoints in the code to reproduce race conditions. Sometimes a tester is no match for the tools and skills of a programmer with intimate knowledge of the product code. Discuss the tests but step aside.
- Non-tester on loan - If a non-tester on the team is willing to help test, or better yet, wants to help test a certain Feature, take advantage of it. Share test ideas and ask for test reports. If you’re satisfied, don’t test it.
- No repro steps - Occasionally a programmer will take a stab at something. There are often errors reported for which nobody can determine the reproduction steps. We may want to regression test the updated area, but we won’t prevent the apparent fix from deploying just because we don’t know if it works or not.
- Inadequate test data or hardware - Let’s face it. Most of us don’t have as many load balanced servers in our QA environment as we do in production. When a valid test requires production resources not available outside of production, we may not be able to test it.
Many of you are probably trying to imagine cases where the items above could result in problems if untested. I can do that too. Remember, these are items that may not be worth our time to test. Weigh them against what else you can do and ask your stakeholders when it’s not obvious.
If you do choose not to test something, it’s important not to mislead. Here is the approach we use on my team. During our Feature Reviews, we (testers) say, “we are not going to test this”. If someone disagrees, we change our mind and test it. If no one disagrees, we “rubber stamp” it. Which means we indicate nothing was tested (on the work item or story) and pass it through so it can proceed to production. The expression “rubber stamping” came from the familiar image of an administrative worker rubber stamping stacks of papers without really spending any time on each. The rubber stamp is valuable, however. It tells us something did not slip through the cracks. Instead, we used our brains and determined our energy was best used elsewhere.
So the next time you find yourself embarking on testing that feels much less important than other testing you could be doing, you may want to consider...not testing it. In time, your team will grow to respect your decision and benefit from fewer bottlenecks and increased test coverage where you can actually add value.