I sent an email to a dev today, asking why his new sort-by-column functionality for a grid was not actually sorting the values properly. For example: an ascending sort would rearrange alphanumeric data to something like "A", "B", "C", "A", "S", "B".
Shortly thereafter the dev stopped by my desk and explained, "It sorts. It’s just not perfect."
I know it looks better on tester resumes to emphasize one’s White Box Testing abilities and brag about how many bugs one caught before they manifested on the UI. It also makes for far more condescending trash talk amongst testers. But since the majority of the testing I do is manual Black Box Testing, I often feel depressed, wondering if I am inferior to my testing peers and fellow bloggers.
The other day something occurred to me… Black Box testing is actually more challenging than White Box Testing. That is, if it is good Black Box testing.
I’m testing a WinForms app that, at any given time, may have about six different panes or zones displaying. The bulk of the user paths require drag/drop between various zones into grids. The possible inputs are nightmarish compared to those of the underlying services. Determining bug repro steps takes creativity, patience, and lots of coffee. Communicating those repro steps to others takes solid writing skills or in-person demos. And predicting what a user may do is far more challenging than predicting how one service may call another service.
I’m not suggesting apps should be or can be tested entirely using a Black Box approach. But the fact is, no matter how much white box testing one does, the UI still needs to be tested from the user’s perspective.
So if you’re feeling threatened by all those smarty pants testers writing unit tests and looking down on the black box testers, don’t. Effective Black Box Testing is a highly skilled job and you should be proud of your testing abilities!
Labels: software testing career
I finally added Perlclip to my tray. I use it several times a week when I have to test text field inputs on my AUT. Among other things, this helpful little tool, created by James Bach, allows one to generate a text string with a specific character count. That alone is not very cool. However, the fact that the generated text string is made up of numbers that indicate the character position of the following asterisk is way cool.
Example: If I create a "counterstring" of 256 characters, I get a string on my clipboard that can be pasted. The last portion of the string looks like this...
Each number is telling you the character position of its following asterisk. Thus, the last asterisk is character #256. The last "6" is character #255. Get it? So if you don't have a boundary requirement for a text input field, just paste in something huge and examine the string that got saved. If the last portion of the saved string looks like this...
...your AUT only accepted the first 62 characters.
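If you're curious how such a string can be built, here's a minimal sketch in Python. This is not Perlclip's actual code, and the exact digits it emits may differ slightly from Perlclip's output; it simply produces a string with the property described above: every number tells you the 1-based character position of the asterisk that follows it.

```python
def counterstring(length: int) -> str:
    """Build a counterstring of exactly `length` characters.

    Each run of digits gives the 1-based position of the
    asterisk immediately following it. Built back-to-front
    so every position label comes out exact.
    """
    chunks = []
    pos = length
    while pos > 0:
        piece = str(pos) + "*"   # e.g. "256*", with '*' landing at position 256
        if len(piece) > pos:
            # Not enough room left at the front of the string;
            # keep only the tail of the piece that still fits.
            piece = piece[len(piece) - pos:]
        chunks.append(piece)
        pos -= len(piece)
    # Chunks were collected from the end backwards, so reverse them.
    return "".join(reversed(chunks))

print(counterstring(10))   # a 10-character counterstring ending in "10*"
```

To test a field with no documented boundary, paste in a large counterstring, save, and read the saved value back: the last complete number-asterisk pair tells you roughly how many characters survived, with the cut-off point itself visible in the truncated tail.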
The first Agile Manifesto value is…
“Individuals and interactions over processes and tools”
While reading this recently, something radical occurred to me. If I practice said value to its fullest, I can stop worrying about how to document everything I test, and start relying on my own memory.
I hate writing test cases. But the best argument for them is to track what has been tested. If someone (or myself) asks me “Did you test x and y with z?”, a test case with an execution result seems like the best way to determine the answer. However, in practice, it is not usually the way I answer. The way I usually answer is based on my memory.
And that, my friends, is my breakthrough. Maybe it’s actually okay to depend on your memory to know what you have tested instead of a tool (e.g., Test Director). But no tester could ever remember all the details of prior test executions, right? True, but no tester could ever document all the details of prior test executions either, right? To tip the balance in favor of my memory being superior to my documentation skills, let me point out that my memory is free. It takes no extra time like documentation does. That means, instead of documenting test cases, I can be executing more test cases! And even if I did have all my test case executions documented, sometimes it is quicker to just execute the test on the fly than go hunt down the results of the previous run (quicker and more accurate).
It all seems so wonderful. Now if I can figure out how to use my memory for SOX compliance... shucks.