…are not always the full truth.  Is that hurting our craft? 

Last week, I attended the first Software Testing Club Atlanta Meetup.  It was organized by Claire Moss and graciously hosted by VersionOne.  The format was Lean Coffee, which was perfect for this meeting.

Photo by Claire Moss

I’m not going to blog about the discussion topics themselves.  Instead, I would like to write about a familiar testing-story pattern I noticed:

During the first 2 hours, it seemed to me, we were telling each other the testing stories we wanted to believe, the stories we wanted each other to believe.  We had to make good first impressions and establish our personal expertise, I guess.  But during the 3rd hour, we started to tell more candid stories about our testing struggles and dysfunctions.  I started hearing things like, “we know what we should be doing, we just can’t pull it off”.  People who at first seemed to have it all together now seemed a little less intimidating.

When we attend conference talks, read blog posts, and socialize professionally, I think we are in a bubble of exaggerated success.  The same thing happens on Facebook, right?  And people fall into a trap: the more one uses Facebook, the more miserable one feels.  I’m probably guilty of spreading exaggerated success on this blog.  I’m sure it’s easier, and certainly safer, to leave out the embarrassing bits.

That being said, I am going to post some of my recent testing failure stories on this blog in the near future.  See you soon.

My data warehouse project team is configuring one of our QA environments to be a dynamic read-only copy of production.  I’m salivating as I try to wrap my head around the testing possibilities.

We are taking about 10 transactional databases in one of our QA environments and replacing them with 10 databases replicated from their production counterparts.  This means that when any of our users performs a transaction in production, that data change will be reflected in our QA environment instantly.

Expected Advantages:

  • Excellent Soak Testing – We’ll be able to deploy a pre-production build of our product to our Prod-replicated-QA-environment and see how it handles actual production data updates.  This is huge because we have been unable to find some bugs until our product builds experience real live usage.
  • Use real live user scenarios to drive tests – We have a suite of automated checks that invoke fake updates in our transactional databases, then expect corresponding data warehouse updates within certain time spans.  Until now, fake updates were the best we could do.  With the Prod-replicated-QA-environment, we are attempting to programmatically detect real live data updates via logging, and measure those against expected results (a rough sketch follows this list).
  • Comparing reports – A new flavor of automated checks is now possible.  With the Prod-replicated-QA-environment, we are attempting to use production report results as a golden master to compare against QA report results generated from the pre-production build’s data warehouse.  Since the data warehouse data supporting both reports should be the same, we can expect the report results to match (a second sketch follows this list).
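
To make the first idea concrete, here is a minimal sketch of what a latency check driven by real production updates might look like.  Everything in it is an assumption for illustration: the change_log and dw_records tables, the column names, the sqlite3 stand-in connections, and the 15-minute SLA.

```python
import sqlite3  # stand-in driver; real checks would use the warehouse's client library
import time

SLA_SECONDS = 15 * 60  # assumed maximum warehouse-load latency

def recent_source_updates(conn, since):
    """Pull real update events from an assumed change-log table (epoch seconds)."""
    rows = conn.execute(
        "SELECT record_id, updated_at FROM change_log WHERE updated_at > ?",
        (since,),
    )
    return list(rows)

def warehouse_has(conn, record_id):
    """Ask whether the warehouse copy reflects the record yet."""
    row = conn.execute(
        "SELECT 1 FROM dw_records WHERE record_id = ?", (record_id,)
    ).fetchone()
    return row is not None

def check_load_latency(source_conn, dw_conn, since):
    """Return the records that missed the load SLA."""
    failures = []
    for record_id, updated_at in recent_source_updates(source_conn, since):
        deadline = updated_at + SLA_SECONDS
        while time.time() < deadline:
            if warehouse_has(dw_conn, record_id):
                break  # the update landed in time
            time.sleep(30)  # poll politely until the deadline
        else:
            failures.append(record_id)  # never showed up in time
    return failures

if __name__ == "__main__":
    source = sqlite3.connect("source.db")  # hypothetical connections
    dw = sqlite3.connect("warehouse.db")
    late = check_load_latency(source, dw, since=time.time() - 3600)
    print(f"{len(late)} records missed the SLA")
```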
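The report comparison could start out as simple as diffing two exports.  Another hedged sketch; the CSV format, file names, and row_id key column are all assumptions, and real reports would likely need sorting or rounding first:

```python
import csv

def load_report(path):
    """Load a report export as rows keyed by an assumed row_id column."""
    with open(path, newline="") as f:
        return {row["row_id"]: row for row in csv.DictReader(f)}

def diff_reports(prod_path, qa_path):
    """Return the row ids where the QA report disagrees with production."""
    prod = load_report(prod_path)
    qa = load_report(qa_path)
    return sorted(
        row_id
        for row_id in prod.keys() | qa.keys()
        if prod.get(row_id) != qa.get(row_id)
    )

if __name__ == "__main__":
    mismatches = diff_reports("prod_report.csv", "qa_report.csv")
    print(f"{len(mismatches)} mismatched rows: {mismatches[:10]}")
```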

Expected Challenges:

  • The Prod-replicated-QA-environment will be read-only.  This means instead of triggering user actions whenever we want, we will need to wait for real ones to occur.  What if some don’t occur…within the soak test window?
  • No more data comparing? – Comparing transactional data to data warehouse data has always been a bread-and-butter automated check for us.  These checks verify data integrity and data loading.  Comparing a real live, quickly changing source to a slowly updating target will be difficult at best (one possible workaround is sketched below).
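
One possible workaround, sketched under big assumptions (table and column names, the sqlite3 stand-ins, and the one-hour lag are all invented): compare only records old enough that both the source and the warehouse should have settled.

```python
import sqlite3
import time

MAX_LOAD_LAG = 60 * 60  # assume the warehouse is at most an hour behind

def settled_rows(conn, table, cutoff):
    """Rows last updated before the cutoff should exist on both sides."""
    return {
        row[0]: row[1:]
        for row in conn.execute(
            f"SELECT record_id, payload FROM {table} WHERE updated_at < ?",
            (cutoff,),
        )
    }

def compare_settled(source_conn, dw_conn):
    """Diff the settled portion of the source against the warehouse."""
    cutoff = time.time() - MAX_LOAD_LAG
    source = settled_rows(source_conn, "orders", cutoff)
    target = settled_rows(dw_conn, "dw_orders", cutoff)
    missing = source.keys() - target.keys()
    changed = {k for k in source.keys() & target.keys() if source[k] != target[k]}
    return missing, changed
```

The trade-off is coverage: the freshest data never gets compared, which is exactly where the most interesting bugs may be hiding.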


