Our Data Warehouse uses Change Data Capture (CDC) to keep its tables current.  After collaborating with one of the programmers we came up with a pretty cool automated test template that has allowed our non-programmer testers to successfully write their own automated tests for CDC. 
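(For readers new to CDC: on SQL Server, it has to be turned on for the database and for each tracked table before changes start flowing to the capture tables. A minimal sketch is below; the dbo.tblOrder name matches the example later in this post, and the options shown are assumptions, not necessarily our actual setup.)

```sql
-- Enable CDC at the database level (requires sysadmin)
EXEC sys.sp_cdc_enable_db;

-- Track changes to the source table (requires db_owner)
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'tblOrder',
    @role_name     = NULL;  -- NULL = no gating role; an assumption, not from our setup
```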

We stuck to the same pattern as the tests I describe in my Automating Data Warehouse Tests post.  Testers can copy/paste tests and only need to update the SQL statements, parameters, and variables.  An example of our test is below.  If you can’t read C#, just read the comments (they begin with //).

BTW – if you want a test like this but are not sure how to write one, just ask your programmers to write it for you.  Programmers love to help with this kind of thing.  In fact, they will probably improve upon my test.

Happy Data Warehouse Testing!


        public void VerifyCDCUpdate_FactOrder()
        {
            //get some data from the source
            var fields = DataSource.ExecuteReader(@"
SELECT     TOP (1) OrderID, OrderName
FROM         Database.dbo.tblOrder");
            var OrderID = (int)fields[0];
            var originalValue = (string)fields[1];

            //make sure the above data is currently in the Data Warehouse
            var DWMatch = new DataSource("SELECT OrderID, OrderName FROM FactOrder WHERE OrderID = @OrderID AND OrderName = @OrderName",
                                              new SqlParameter("@OrderID", OrderID),
                                              new SqlParameter("@OrderName", originalValue));

            //fail the test if the data does not match.  This is still part of the test setup.
            DataSourceAssert.IsNotEmpty(DWMatch, "The value in the data warehouse should match the original query");

            //set a field in the source database to something else
            var newValue = "CDCTest";
            try
            {
                DataSource.ExecuteNonQuery(
                    @"UPDATE Database.dbo.tblOrder SET OrderName = @NewValue WHERE OrderID = @OrderID",
                    new SqlParameter("@NewValue", newValue),
                    new SqlParameter("@OrderID", OrderID));

                //start checking the target to see if it has updated.  Wait up to 10 minutes (CDC runs every five minutes).  This is the main check for this test.  This is really what we care about.
                var startTime = DateTime.Now;
                var valueInDW = originalValue;
                while (DateTime.Now.Subtract(startTime).TotalMinutes < 10)
                {
                    //verify the value in the source database is still what we set it to, otherwise the test is invalid
                    var updatedValueInSource = DataSource.ExecuteScalar<string>(
                        @"SELECT OrderName FROM Database.dbo.tblOrder WHERE OrderID = @OrderID",
                        new SqlParameter("@OrderID", OrderID));

                    if (updatedValueInSource != newValue)
                        Assert.Inconclusive("The value {0} was expected in the source, but {1} was found.  Cannot complete test.", newValue, updatedValueInSource);

                    valueInDW = DataSource.ExecuteScalar<string>(
                        @"SELECT OrderName FROM FactOrder WHERE OrderID = @OrderID",
                        new SqlParameter("@OrderID", OrderID));

                    if (valueInDW == newValue)
                        break;

                    Thread.Sleep(TimeSpan.FromSeconds(30)); //don't hammer the server while we wait
                }

                if (valueInDW != newValue)
                    Assert.Fail("The value {0} was expected in the DW, but {1} was found after waiting for 10 minutes", newValue, valueInDW);
            }
            finally
            {
                //set the value in the source database back to the original.
                //the finally block runs even if the test fails.
                DataSource.ExecuteNonQuery(
                    @"UPDATE Database.dbo.tblOrder SET OrderName = @OriginalValue WHERE OrderID = @OrderID",
                    new SqlParameter("@OriginalValue", originalValue),
                    new SqlParameter("@OrderID", OrderID));
            }
        }

I find it belittling…the notion that everything must be tested by a tester before it goes to production.  It means we test because of a procedure rather than to provide information that is valuable to somebody.

This morning our customers submitted a large job to one of our software products for processing.  The processed solution was too large for our product’s output.  So the users called support saying they were dead in the water and on the verge of missing a critical deadline.  We had one hour to deliver the fix to production.

The fix, itself, was the easy part.  A parameter needed its value increased.  The developer performed said fix then whipped up a quick programmatic test to ensure the new parameter value would support the users’ large job.  Per our process, the next stop was supposed to be QA.  Given the following information I attempted to bypass QA and release the change straight to production:

  • Testers would not be able to generate a large enough job, resembling that in production, in the time available.
  • There was no QA environment mirroring production bits and data at this time.  It would have been impossible to stand one up before the one hour deadline.
  • The risk of us breaking production by increasing said parameter was insignificant because production was already unusable (i.e., it would be nearly impossible for this patch to make production worse than it already was).

Even with the above considerations, some on the team reacted with horror…”What? No Testing?”.  When I mentioned it had been tested by a developer and I was comfortable with said test, the response was still “A tester needs to test it”. 

After convincing the process hawks it was not feasible for a tester to test, our next bottleneck was deployment.  Some on the team insisted the bits go to a QA environment first, even though it would not be tested.  This was to keep the bits in sync across environments.  I agree with keeping the bits in sync, but how about worrying about that once we get our users safely through their crisis!

As I watched the email thread explode with process commentary and waited for the fix to jump through the hoops, I also listened to people who were in touch with the users.  The users were escalating the severity of their crisis and reminding us of its urgency.

I believe those who insist everything must be tested by a tester do us a disservice by making our job a thoughtless process instead of a sapient service.

When I was 6 years old, I went bluegill fishing with my uncle Kevin in Wisconsin. Uncle Kev knew how to catch bluegill and every time he got a bite, he handed me his fishing pole and said, “Tug the line!” then “Reel it in, I think you got one!”

I’ve been coaching a member of our support staff on a testing assignment. When she completed her testing, I ran some follow-up tests of my own and found a bug. Rather than logging the bug, I gave her some suggested things to try and hinted that there may be a bug. She found it! What a thrill! She also got to log it. Cool!

What’s the lesson?
  • As with fishing, testing without catching bugs can get boring. It’s important to let newbies catch some bugs.
  • When someone finds a bug you missed, it’s humiliating. When the tables turn and you find the missed bug, maybe it’s time to set your ego aside and put your coaching hat on.

The users found a bug for something that was marked “PASSED” during the test cycle. I made this remark to my programmers and one of them said “Ha ha, testers just mark tests ‘passed’ without doing anything”.
He was joking but he raised a frightening thought. What if he were right? What if you gave testers a list of 100 tests and they just marked them PASSED without operating the product? How would you know? The sad truth is, 95 of those tests probably would have passed anyway, had the product been operated. The other 5 could arguably have been tested under different conditions than those deemed to raise the bug. And this is what makes testing both so unsatisfying to some testers and so difficult to manage.
If you can get the same result doing nothing, why bother?
We have an on-going debate about working from home. Let a programmer work from home and at the end of the day you can see what code they checked in. That is something substantially more tangible than what a tester produces (even if that tester takes detailed test notes). It’s not an issue of trust. It’s an issue of motivation.
I believe testers who are uninterested in their job can hide on test teams easier than other teams, especially if their managers don’t take the time to discuss the details of what they actually tested. They may be hiding because they’ve never gotten much appreciation for their work.
Having a conversation with a tester about what they tested can be as easy as asking them about a movie they just saw:
  • How did George Clooney do?
  • Was it slow paced?
  • Was the story easy to follow?
  • Any twists or unexpected events?
  • Was the cinematography to your liking?
If they didn’t pay much attention to the movie, you’ll know.

I’ve got a test automation engineer and a bunch of manual testers. The plan, of course, is to teach the manual testers to write automated tests where appropriate; a plan I have seen fail several times. I figured I would take a crack at it anyway. In the meantime, I’m trying to keep the automation engineer busy writing valuable tests.
Everybody knows choosing the right tests to automate is tricky. But what I’ve really been struggling with is finding the right skillset to design (i.e., determine the detailed steps of) each test automation candidate.
It turns out, the test automation engineers suck at it, and so do the manual testers. Who knew?
You see, manual testers live in a dream world, where test setup can be customized on the fly, validation can be performed with whatever info the tester decides is important at that moment, and teardown can be skipped altogether. Try asking those manual testers to stub out some repeatable tests to automate. I suspect you will run into a problem.
The test community loves to poke fun at traditional, script-happy, manual testers. Some testers (myself included) talk about exploratory testing or test case fragments as being the cool way to test. They scoff at testers who use rigorous step-by-step test cases. Have you ever encountered said traditional testers? I certainly haven’t. I’m sure they exist somewhere. But I think most of them only exist in a myth we’ve created in order to feel like test innovators.
Why am I such a skeptic about these folks? The truth is, writing repeatable, detailed, step-by-step test cases is really really really hard. If you’ve attempted to automate end-to-end business facing test cases for a complex system, you’ll know exactly what I mean.
On the other side, the test automation engineers are bored with manual testing and obsessed with trying to achieve the near-impossible ROI goals imposed by management. They want the manual testers to write them detailed tests because they don’t have time to learn the AUT.
A tester who makes time for manual testing and test automation has the magic skillset necessary to be an effective test automation author. Each of us should strive to achieve this skillset.
