Question: What metric or easily understood information can my test team provide to users to show our contribution to the software we release?
I just got back from vacation and am looking at a beautiful pie chart that shows the following per iteration:
- # of features delivered
- # of bugs found in test vs. prod (see the sketch after this list)
- # of bugs fixed
- # of test cases executed
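For what it's worth, the test-vs-prod slice is the only one I can turn into something meaningful. Here's a minimal sketch, assuming that slice is meant as a defect detection percentage; the counts are made up, not our real data:

```python
# Hedged sketch: how I assume the "bugs found in test vs. prod" slice
# rolls up per iteration. The counts below are hypothetical.

bugs_found_in_test = 38   # caught by the test team before release
bugs_found_in_prod = 5    # escaped to production

# Defect detection percentage: the share of all known bugs the team
# caught before release. At least this number has a defensible meaning.
ddp = bugs_found_in_test / (bugs_found_in_test + bugs_found_in_prod)
print(f"Defect detection percentage: {ddp:.0%}")  # -> 88%
```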
First, I feel uncomfortable measuring tester value by test case count, for the obvious reason that raw counts reward volume rather than risk coverage, and are easy to game.
Second, the pie chart makes it look like all we do is test. One slice lists 400 tests; another lists 13 features. A strange juxtaposition, since the units aren't even comparable.
Third, I'm not even sure how to produce such a count. I certainly don't encourage my test team to exhaustively document their manual test cases, and I don't track how many artifacts they spread those tests across. Do I include the 900+ automated UI test executions? The thousands of unit test executions beyond that? Does the final number tell users anything about quality? Does it reflect how effective the testers are? Not to me. Maybe it does to users...
PR is important, especially when your reputation takes a dive. I, too, want to show users how hard my QA team works, and I want to show it in the simplest possible way. I could provide a long list of tests, but they don't want to read that. What am I missing? What metric or easily understood information can my test team provide to users to show our contribution to the software we release?