Posted by Eric Jacobson at Tuesday, February 22, 2011
Have you seen this?
Type “psr” in your Windows 7 run prompt.
Click “Start Record” and every input you perform is described, along with screenshots, then packaged into an MHTML file inside a zip archive. The file can be viewed in IE, or opened as raw XML for more detail, such as mouse click coordinates. The little “Add Comment” feature is a good way to embed Expected vs. Actual results.
I used Problem Steps Recorder (PSR) today to add details to one of my bug reports. PSR may also work as a personal test-tracking tool. IMO, we can do a better job of capturing the relevant info ourselves in most cases; I rarely capture my entire screen and normally don't bother capturing steps unrelated to issues. Nevertheless, it's a good tool for your toolbox.
Today we encountered a fix for the following bug. Call it Bug100.
“The word ‘Unknown’ is spelled incorrectly in the dialog the user sees when sending an error report.”
Bug100 was spun off from a larger issue (i.e., why the error was thrown in the first place). It was logged by a BA who noticed the spelling error, and easily fixed by a developer.
Unfortunately, the developer could not determine an easy way to trigger the error message described in Bug100. After a few failed attempts, the tester and I had a brief discussion and decided to rubber stamp Bug100 and spend our time elsewhere. “Rubber stamp” is the expression we use for situations where the tester does not really do any testing but still moves the bug report to the “Tested” status so its fix can proceed to production. We make a note on the bug report that says nothing was tested.
Would you have bothered to test this bug fix?
Posted by Eric Jacobson at Friday, February 11, 2011
I promised I would post about my Lightning Talk, Programmer Profiling, per my Our First Tester Lightning Round post.
I came up with the notion of programmer profiling after listening to the Intelligence Squared podcast debate called “U.S. AIRPORTS SHOULD USE RACIAL AND RELIGIOUS PROFILING.”
The TSA is responsible for finding bombs among the roughly 3 million people who take some 20,000 US airline flights per day. The TSA takes heavy criticism from people who believe racial profiling is wrong, but it also takes heavy criticism for the opposite: searching little old ladies and children. Some firmly believe racial and religious profiling is one approach that should be on the table; based on prior terrorist attempts, searching old ladies may not be the best use of the TSA’s time.
I noticed some vague similarities between the TSA and software testers.
- The TSA protects passengers by finding bombs among 3 million people.
- I protect users by finding bugs among 3 million lines of code.
Then I realized I already practice my own form of profiling to determine which areas may need more of my test attention: I profile the programmers, not just on their prior code quality, but also on their current behaviors.
- If I ask ProgrammerA how she tested something and she shows me a set of sound unit tests, and a little custom application she wrote to help her test better, I gain a certain level of confidence in her code.
- On the other hand, if I ask ProgrammerB how she tested something and she shrugs and says “that’s your job”, I gain a different level of confidence in her code.
When the clock is ticking, and all other things are equal, where do you think I’ll focus my time?
As with the TSA’s potential use of racial profiling, programmer profiling should be only one of many approaches to finding problems, balanced against other considerations. But perhaps it should be on the table.
I call this “Programmer Profiling” and I think testers should not be afraid to use it.
Data warehouse (DW) testing is a far cry from functional testing. As testers, we need to let the team know if the DW dimension, fact, and bridge tables are getting the right data from all the source databases, storing it in a way that lets users build reports, and keeping it current.
We are using automated tests as one of our test approaches. Contrary to popular belief, we found DW testing to be an excellent place to build and reuse automated tests. We came up with a simple automated test template that is allowing testers who have never done test automation to build up a pretty sweet library of automated tests.
Here's how it works:
Our automation engineer built us a test with two main parameters: a SQL statement for the source data and a SQL statement for the target data. The test compares the two record sets, including all data values, and asserts that they are equal. As the programmers build the various DW dimension, fact, and bridge tables, the testers copy and paste that automated test and swap in the SQL for their current test subject.
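A minimal sketch of that template in Python with sqlite3. The function, table, and column names here are invented for illustration; the real template presumably runs the two queries against the team's actual source and warehouse databases rather than one in-memory connection.

```python
import sqlite3

def assert_recordsets_equal(conn, source_sql, target_sql):
    """Run both queries and assert the full result sets match, values included."""
    source_rows = sorted(conn.execute(source_sql).fetchall())
    target_rows = sorted(conn.execute(target_sql).fetchall())
    assert source_rows == target_rows, (
        f"Mismatch: {len(source_rows)} source rows vs {len(target_rows)} target rows"
    )

# Tiny self-contained demo: a fake source table and a fake dimension table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src_customer (id INTEGER, name TEXT);
    CREATE TABLE dim_customer (customer_id INTEGER, customer_name TEXT);
    INSERT INTO src_customer VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO dim_customer VALUES (1, 'Ada'), (2, 'Grace');
""")

assert_recordsets_equal(
    conn,
    source_sql="SELECT id, name FROM src_customer",
    target_sql="SELECT customer_id, customer_name FROM dim_customer",
)
print("record sets match")
```

Each new automated test is then just this template with the two SQL strings swapped out.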
One of our most important tests is to rebuild the programmers' SQL ourselves, which is a manual process. If the tester's datasets don't match the DW, there may be a problem; this is where most of the defects are found. Once we've manually built our SQL, we plug it into an automated test that compares every record and column value across millions of rows and tells us where the differences are. The byproduct is an automated regression test.
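To report where the differences are rather than just pass/fail, a comparison can key rows on a unique column and list each mismatched cell. This is a sketch with invented names, assuming the first selected column is a unique key in both queries:

```python
import sqlite3

def diff_recordsets(conn, source_sql, target_sql, columns):
    """Compare two keyed result sets and list every cell-level difference.

    Assumes the first selected column is a unique key in both queries.
    Returns tuples of (key, column, source_value, target_value).
    """
    source = {row[0]: row for row in conn.execute(source_sql)}
    target = {row[0]: row for row in conn.execute(target_sql)}
    diffs = []
    for key in sorted(source.keys() | target.keys()):
        s, t = source.get(key), target.get(key)
        if s is None or t is None:
            diffs.append((key, "<entire row>", s, t))  # row missing on one side
            continue
        for col, sv, tv in zip(columns, s, t):
            if sv != tv:
                diffs.append((key, col, sv, tv))
    return diffs

# Demo with one deliberately wrong value in the fake warehouse table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src (id INTEGER, status TEXT);
    CREATE TABLE dw  (id INTEGER, status TEXT);
    INSERT INTO src VALUES (1, 'open'), (2, 'closed');
    INSERT INTO dw  VALUES (1, 'open'), (2, 'Unknown');
""")
for diff in diff_recordsets(conn, "SELECT id, status FROM src",
                            "SELECT id, status FROM dw", ["id", "status"]):
    print(diff)  # (2, 'status', 'closed', 'Unknown')
```

With output like this, a tester can go straight to the offending row instead of eyeballing millions of records.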
Sure, rebuilding much of what the programmers did is challenging. We've had to learn CASE statements, cross-database joins, and data type casting, and keep our heads straight while working through queries with 30 or more joins. But normally it's fun, and it's easy to get help from Google or our programmers.
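For example, a rebuilt query might use CAST and CASE to mirror a transformation the DW load is expected to apply. The table, column, and code values below are hypothetical, purely to show the shape of such a query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src_order (id INTEGER, amount TEXT, status_code INTEGER);
    INSERT INTO src_order VALUES (1, '19.99', 1), (2, '5.00', 9);
""")

# Rebuild the transformation independently: cast the text amount to a number
# and decode the status code, the way we expect the DW load to do it.
rebuilt_sql = """
    SELECT id,
           CAST(amount AS REAL) AS amount,
           CASE status_code
                WHEN 1 THEN 'Active'
                ELSE 'Unknown'
           END AS status
    FROM src_order
"""
for row in conn.execute(rebuilt_sql):
    print(row)  # (1, 19.99, 'Active') then (2, 5.0, 'Unknown')
```

The result set from a query like this is what gets plugged into the comparison test as the source side.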
We've been getting one or more automated tests completed for each dimension, fact, or bridge table, sometimes as early as the development environment. When the DW is deployed to the various environments, we normally just execute our automated tests and interpret the results. It's quite beautiful.
I’ll blog about some of the other tests we execute. There appears to be a shortage of pragmatic DW test ideas out there.