Who tests the automated tests? Well, we know the test automation engineer does. But per our usual rationale, shouldn’t someone other than the test automation engineer test the automated tests? After all, they are coded…in some cases by people with less programming experience than the product programmers.
We’re experimenting with manual testers testing automated tests on one of my project teams. The test automation engineer hands each completed automated test to a manual tester. The manual tester then executes the automated test. At this point the manual tester is testing two things at the same time. They are:
- Testing the automated test and
- testing whatever the automated test tests
Or, to be more precise, we can use Michael Bolton speak and say the tester is:
- Testing the automated check and
- checking whatever the automated check checks
Whatever you call it, during this exercise it's important to distinguish the above two activities. If the automated test's execution results in a "Fail", it doesn't mean the test of the automated test fails…but it may. Still with me? An automated test's execution result of "Fail" may, in fact, mean the automated test is a damn good test. It may have found a bug in the product under test. But that determination is up to the manual tester who is testing the automated test. One cannot trust the expected result of an automated test until one has finished testing the automated test.
Thus, the tester of the automated test will need to evaluate the test somehow and declare it to be a good test. They may be able to do this in several ways (a small code sketch follows the list):
- Manipulate the product to see if the test both passes and fails under the right conditions.
- Execute the automated test by itself then as part of the test suite to determine if the setup/teardown routines adapt sufficiently.
- Read the automated test’s code (e.g., does the Assert check the intended observation correctly?).
- Manually test the same thing the automated test checks.
- Manipulate the product such that the test cannot evaluate its check. Does the test resolve as “Inconclusive”?
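To make that list concrete, here is a minimal sketch in Ruby/Minitest. The `Product.discounted_price` method is a made-up stand-in for the product under test; the three checks illustrate reading the assert, flipping the inputs so a wrong check would fail, and the "Inconclusive" case (a skip, in Minitest terms).

```ruby
require "minitest/autorun"

# A fake "product" so the example stays self-contained.
module Product
  def self.discounted_price(price, coupon)
    coupon == "SAVE10" ? (price * 0.9).round(2) : price
  end
end

class DiscountCheckTest < Minitest::Test
  def test_coupon_applies_ten_percent_discount
    # Reading the code: does this assert check the intended observation?
    assert_equal 90.0, Product.discounted_price(100.0, "SAVE10")
  end

  def test_invalid_coupon_leaves_price_unchanged
    # Manipulating the inputs: if the discount were wrongly applied here,
    # this check should fail -- which tells us the check can fail at all.
    assert_equal 100.0, Product.discounted_price(100.0, "BOGUS")
  end

  def test_discount_on_sale_items
    # The "Inconclusive" case: when the check cannot evaluate its condition,
    # it should say so rather than silently pass.
    skip "sale-item fixture not available in this environment"
  end
end
```

Running a file like this on its own and then inside the larger suite also covers the setup/teardown point above.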
It would be nice to have the luxury of time and resources to test the automated checks thoroughly, but in the end, I suspect we will have to draw the line somewhere and trust the test automation engineer's test of their check. In the meantime, we'll see where this gets us.
Good article. This is a question I always want to ask about automated (and manual) checks.
I try to validate the test framework (usually written in Ruby with a library such as Mechanize or Watir-Webdriver) in IRB before I create test scripts with it. Because of that, I get a solid chance to see the inputs and outputs repeatedly and to continuously ask "What if I do this?". I believe that by the time I am ready to create the test scripts, the framework has been vetted.
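For what it's worth, here is the kind of IRB poking I mean, using Mechanize against a placeholder URL; the form and field names are hypothetical, and the point is simply to watch real inputs and outputs before committing anything to a test script.

```ruby
require "mechanize"

agent = Mechanize.new
page  = agent.get("https://example.com/login")   # placeholder URL

page.title              # what does the page actually return?
page.forms.map(&:name)  # which forms are really there?

form = page.forms.first
form.field_with(name: "username").value = "tester"          # "What if I do this?"
form.field_with(name: "password").value = "wrong-password"
result = form.submit
result.code             # inspect the response before asserting anything
```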
Another option for verifying the test is to ask "What should it be checking to make sure it's passing?". Sometimes I will add several more criteria checks than I had in the manual test I am replacing (or would have written), because the automated check can verify them much more quickly than a manual tester could. I may use a database library to check what's in the database. I may make a webservice call to another system that integrates with ours by receiving data based on the action just performed. I may check the status of the server via log parsing.
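As a rough sketch (the table, endpoint, log path, and driver step below are all hypothetical), a single automated check can assert on the database row, the integrating system, and the server log in one pass:

```ruby
require "minitest/autorun"
require "sqlite3"      # assumed database library
require "net/http"
require "json"

class OrderSubmissionCheck < Minitest::Test
  def test_order_is_recorded_everywhere
    order_id = submit_order_via_ui   # hypothetical driver step (stubbed below)

    # Database: did the row land where it should?
    db  = SQLite3::Database.new("orders.db")
    row = db.get_first_row("SELECT status FROM orders WHERE id = ?", order_id)
    assert_equal "SUBMITTED", row&.first

    # Webservice: does the integrating system see the order too?
    body = Net::HTTP.get(URI("https://integration.example.com/orders/#{order_id}"))
    assert_equal order_id, JSON.parse(body)["id"]

    # Logs: did the server record the event cleanly?
    log = File.read("/var/log/app/server.log")
    assert_includes log, "order #{order_id} accepted"
    refute_match(/ERROR.*#{order_id}/, log)
  end

  private

  # Stand-in for the UI automation that actually submits the order.
  def submit_order_via_ui
    "ORD-1001"
  end
end
```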
Of course, there is always the option of having unit tests for the test framework itself, incorporating mocks to create positive and negative situations.
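A minimal sketch of that, using Minitest::Mock: `LoginHelper` is a hypothetical framework class that wraps a browser driver, and the mock plays the driver so both the positive and the negative situation can be created without a real browser.

```ruby
require "minitest/autorun"

# Hypothetical piece of the test framework under test.
class LoginHelper
  def initialize(browser)
    @browser = browser
  end

  def logged_in?
    @browser.text.include?("Welcome")
  end
end

class LoginHelperTest < Minitest::Test
  def test_reports_logged_in_when_welcome_banner_present
    browser = Minitest::Mock.new
    browser.expect(:text, "Welcome, tester!")   # positive situation
    assert LoginHelper.new(browser).logged_in?
    browser.verify
  end

  def test_reports_logged_out_when_banner_missing
    browser = Minitest::Mock.new
    browser.expect(:text, "Please sign in")     # negative situation
    refute LoginHelper.new(browser).logged_in?
    browser.verify
  end
end
```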
Thanks for raising the question. Here is the same idea: http://blog.zenspider.com/2012/01/assert-nothing-tested.html
Nice...