We have a recurring conversation on both of my project teams.  Some testers, programmers, and BAs believe certain work items are “testable” while others are not.  For example, some testers believe a service is not “testable” until its UI component is complete.  I’m sure most readers of this blog would disagree.

A more extreme example of a work item believed by some to not be “testable” is one that asks Programmer_A to review Programmer_B’s code.  However, there are several ways to test that, right?

  • Ask Programmer_A if they reviewed Programmer_B’s code.  Did they find problems?  Did they make suggestions?  Did Programmer_B follow coding standards?
  • Attend the review session.
  • Install tracking software on Programmer_A’s PC that programmatically determines if said code was opened and navigated appropriately for a human to review.
  • Ask Programmer_B what feedback they received from Programmer_A.

IMO, everything is testable to some extent.  But that doesn’t mean everything should be tested.  These are two completely different things.  I don’t test everything I can test.  I test everything I should test.

I firmly believe a skilled tester should have the freedom to decide which things they will spend time testing and when.  In some cases it may make more sense to wait and test the service indirectly via the UI.  In some cases it may make sense to verify that a programmer code review has occurred.  But said decision should be made by the tester based on their available time and queue of other things to test.

We don’t need no stinkin’ “testable” flag.  Everything is testable.  Trust the tester.

Hey Testers, let’s start paying more attention to our bug language.  If we start speaking properly, maybe the rest of the team will join in.

Bug vs. Bug Report:

We can start by noting the distinction between a bug and a bug report.  When someone on the team says, “go write a bug for this”, what they really mean is “go write a bug report for this”.  Right?  They are NOT requesting that someone open the source code and actually write a logic error.

Bug vs. Bug Fix:

“Did you release the bug?”  The asker either means “did you release the actual bug to some environment?” or “did you release the bug fix?”.

Missing Context:

“Did you finish the bug?”  I hear this frequently.  It could mean “did you finish fixing the bug?”, it could mean “did you finish logging the bug report?”, or it could mean “did you finish testing the bug fix?”.

Bug State Ambiguity:

“I tested the bug.”  Normally this means “I tested the bug fix.”  However, sometimes it means “I reproduced the bug,” as in “I tested to see if the bug still occurs.”


It only takes an instant to tack the word “fix” or “report” onto the word “bug”.  Give it a try.

A fun and proud moment for me: respected tester Matt Heusser interviewed me for his This-Week-In-Software-Testing podcast on Software Test Professionals.  It was scary because there was no [Backspace] key to erase anything I wished I hadn’t said.

I talked a bit about the transition from tester to test manager, what inspires testers, and some other stuff.  It was truly an honor for me.

The four most recent podcasts are available free, although you may have to register for a basic (free) account.  However, I highly recommend buying the $100 membership to unlock all 49 (and counting) of these excellent podcasts.  I complained at first, but after hearing Matt’s interviews with James Bach, Jerry Weinberg, Cem Kaner, and all the other great tester/thinkers, it was money well spent.  The production is top notch, and Matt’s testing ramblings on each episode are usually as interesting as the interview itself.  There are no other podcasts like these available anywhere.

Keep up the great work Matt and team!  And keep the podcasts coming!

We all have different versions of our product in different environments, right?  For example: if Iteration 10 is in production, Iteration 11 is in one or more QA environments.  When a bug exists in both iterations, we have a BIMI (Bug-In-Multiple-Iterations).

One of my project teams just found a gap in our process that resulted in a BIMI hitching a ride all the way to production.  That means our users found a bug, we fixed it, and then our users found the same bug four weeks later (and then we fixed it again!).  Our process for handling bugs had always been to log one bug report per bug.  Here is the problem:

  1. Let’s say we have a production bug (in Iteration 10).
  2. Said bug gets a bug report logged.
  3. Our bug gets fixed and tested in our post-production environment (i.e., a test environment with the same bits as production).
  4. Finally, the fix deploys to production and all is well, right?  The bug report status changes to “Closed”.
  5. Now we can get back to testing Iteration 11.

What did we forget?

…well, it’s probably not clear from my narrative, but our process gap was that the bug fix code above never got deployed to Iteration 11, and the testers didn’t test for it because “Closed” bugs are already fixed in prod and thus off the testers’ radar.

If our product were feasible to automate, we could have added a new test to our automation suite to cover this.  But in our mostly manual process, we have to remember to test for BIMIs.  The fact is, the same bug fix can be Verified in one iteration and Failed in another.  The bug can take a different path in each environment or, as in my case, the fix can fail to deploy to the expected environment at all.
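For teams that can automate, here is a rough idea of what such a regression test might look like.  This is only a sketch: the environment URLs, the /orders/123 endpoint, and the expected total are all invented for illustration.  The point is that the same check can be pointed at every environment that should contain the fix, so a BIMI can’t quietly skip an iteration.

# Hypothetical regression test for the original bug, runnable against any
# environment via the TARGET_ENV variable (e.g., post-prod and Iteration 11).
import os
import requests

ENVIRONMENTS = {
    "post-prod":   "https://postprod.example.com",  # same bits as production
    "iteration11": "https://qa11.example.com",      # next iteration's QA box
}

def test_order_total_bug_is_fixed():
    """Re-checks the bug our users found, in whichever environment
    TARGET_ENV points at, so the fix is verified in every iteration."""
    base_url = ENVIRONMENTS[os.environ.get("TARGET_ENV", "post-prod")]
    response = requests.get(f"{base_url}/orders/123", timeout=10)
    response.raise_for_status()
    # The original bug: order totals were rounded to whole dollars.
    assert response.json()["total"] == 19.99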

This iteration we are experimenting with a solution.  For BIMIs, we are making a separate copy of the bug report and calling it a “clone”.  This may fly in the face of leaner documentation practices, but we think it’s a good idea based on our history.

What’s your solution for making sure bug fixes make it into the next build?

Your mission is to test a new data warehouse table before its ETL process is even set up.  Here are some ideas:

Start with the table structure.  You can do this before the table even has data.

  • If you’ve got design specs, start with the basics (a structure-check sketch follows this list):
    • Do the expected columns exist?
    • Are the column names correct?
    • Are the data types correct?
    • Is the nullability (NULL vs. NOT NULL) correct?
  • Do the columns logically fit the business needs?  This was already discussed during design, but even if you attended the design sessions, you may know more now.  Look at each column again and visualize data values, asking yourself if they seem appropriate.  You’ll need business domain knowledge for this.
  • Build your own query that creates the exact same table and populates it with correct values.  Don’t look at the data warehouse source queries!  If you do, you may trick yourself into thinking the programmer must be correct.

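Here is a rough sketch of what an automated structure check might look like, assuming the warehouse is reachable through pyodbc and exposes the standard INFORMATION_SCHEMA views.  The DSN, table name, columns, types, and nullability below are made-up examples; substitute your own design spec.

import pyodbc

# Expected structure straight from the (hypothetical) design spec.
EXPECTED_COLUMNS = {
    # column_name: (data_type, is_nullable)
    "customer_id":   ("int",     "NO"),
    "customer_name": ("varchar", "YES"),
    "order_total":   ("decimal", "NO"),
}

conn = pyodbc.connect("DSN=warehouse")  # hypothetical DSN
rows = conn.execute("""
    SELECT COLUMN_NAME, DATA_TYPE, IS_NULLABLE
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_NAME = 'dim_customer'
""").fetchall()
actual = {r.COLUMN_NAME: (r.DATA_TYPE, r.IS_NULLABLE) for r in rows}

# Do the expected columns exist, with the right names, types, and nullability?
assert set(actual) == set(EXPECTED_COLUMNS), "column list differs from the spec"
for name, expected in EXPECTED_COLUMNS.items():
    assert actual[name] == expected, f"{name}: expected {expected}, got {actual[name]}"
print("table structure matches the design spec")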
Once you have data in your table, you can really get going.

  • Compare the record set your query produced with that of the data warehouse table.  This is where 70% of your bugs are discovered.
    • Are the row counts the same?
    • Are the data values the same?  This is your bread and butter test.  This comparison should be done programmatically via an automated test so you can check millions of columns and rows (see my Automating Data Warehouse Tests post); a minimal comparison sketch follows this list.  Another option would be to use a diff tool like DiffMerge.  A third option is to just spot check each table manually.
  • Are there any interesting columns?  If so, examine them closely.  This is where 20% of your bugs are hiding.  This testing cannot be automated.  Look at each column and think about the variety of record scenarios that may occur in the source applications; ask yourself if the fields in the target make sense and support those scenarios.
    • Columns that just display text strings like names are not all that interesting because they are difficult to screw up.  Columns that are calculated are more interesting.  Are the calculations correct?
    • Did the design specify a data type change between the source and the target?  Maybe an integer needed to be changed to a bit to simplify data…was it converted properly?  Do the new values make sense to the business?
  • How is corrupt source data handled?  Does the source DB have orphaned records or referential integrity problems?  Is this handled gracefully?  Maybe the data warehouse needs to say “Not Found” for some values.  A sketch of this check also follows this list.
  • Build a user report based on the new data warehouse table.  Do you have a trusted production report from the transactional DB?  Rebuild it using the data warehouse and run a diff between the two.
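Here is a minimal sketch of the programmatic comparison mentioned above.  The DSNs, queries, table, and column names are placeholders; the idea is simply to run your independently written query, run a SELECT against the warehouse table, and diff the two result sets row by row.

import pyodbc

source = pyodbc.connect("DSN=transactional")  # hypothetical DSNs
warehouse = pyodbc.connect("DSN=warehouse")

# Your own query, written without peeking at the ETL source queries.
expected = source.execute("""
    SELECT c.customer_id, c.name, SUM(o.total) AS lifetime_total
    FROM customers c JOIN orders o ON o.customer_id = c.customer_id
    GROUP BY c.customer_id, c.name
    ORDER BY c.customer_id
""").fetchall()

actual = warehouse.execute("""
    SELECT customer_id, customer_name, lifetime_total
    FROM dim_customer
    ORDER BY customer_id
""").fetchall()

# Row counts first -- the cheapest signal that something is off.
assert len(expected) == len(actual), f"{len(expected)} source rows vs {len(actual)} warehouse rows"

# Then value by value; report the first mismatching row.
for i, (exp, act) in enumerate(zip(expected, actual)):
    if tuple(exp) != tuple(act):
        raise AssertionError(f"row {i} differs: {tuple(exp)} vs {tuple(act)}")
print("row counts and data values match")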

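And a rough sketch of probing the corrupt-data question, using the same hypothetical schemas: find orphaned source rows, then confirm the warehouse degrades gracefully (here, by showing “Not Found”).

import pyodbc

source = pyodbc.connect("DSN=transactional")  # hypothetical DSNs
warehouse = pyodbc.connect("DSN=warehouse")

# Orders whose customer no longer exists -- a referential integrity problem.
orphans = source.execute("""
    SELECT o.order_id
    FROM orders o LEFT JOIN customers c ON c.customer_id = o.customer_id
    WHERE c.customer_id IS NULL
""").fetchall()

for (order_id,) in orphans:
    row = warehouse.execute(
        "SELECT customer_name FROM fact_orders WHERE order_id = ?", order_id
    ).fetchone()
    # Expect a graceful placeholder rather than a missing or garbage row.
    assert row is not None and row.customer_name == "Not Found", f"order {order_id}"
print(f"checked {len(orphans)} orphaned orders")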

What am I missing?


