I understand that one of the main benefits of unit tests is that code changes that break things become visible as soon as the breaking change is made. This applies to nearly any kind of code, and it would even seem to apply to the tests themselves. Should I then write tests to verify that my tests successfully test? Since this is inherently recursive, how do I know when to stop?
"Should I then write tests to test that my tests successfully test?"
No. There are things you can do that will help you tend toward valid tests.
If you literally write the test first - so that it fails - and then write the target code and the test passes, you know it was the code under test that made the test pass.
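A minimal sketch of that workflow, using Python's built-in unittest and a hypothetical add function:

```python
import unittest

def add(a, b):
    raise NotImplementedError  # step 1: a stub, so the test below fails

class TestAdd(unittest.TestCase):
    def test_adds_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

# Step 2: replace the stub body with `return a + b` and rerun.
# The test now passes, and you know that change is what made it pass.

if __name__ == "__main__":
    unittest.main()
```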
If you test and write incrementally, the tests evolve in simple, short steps that, in the aggregate, tend to be correct.
The tests and the code under test tend to be mutually validating if there is reasonable coverage: the tests fail when expected and pass when expected, and they have reasonable breadth - covering the edge cases - and work as expected.
Ditto for depth. Well-tested "low-level/core" code means, counter-intuitively, that high-level code can have fewer and simpler tests than you might expect.
Asserting the initial conditions helps ensure a valid test. For example, with a sort routine I will first assert that the list is not already sorted; if it is sorted afterwards, I know the routine did the work.
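A sketch of that idea, with a hypothetical my_sort routine standing in for the code under test:

```python
import unittest

def my_sort(items):
    # Hypothetical sort routine standing in for your own implementation.
    return sorted(items)

class TestMySort(unittest.TestCase):
    def test_sorts_an_unsorted_list(self):
        data = [3, 1, 2]
        expected = [1, 2, 3]
        # Precondition: the input must not already be in the expected order,
        # otherwise a do-nothing "sort" would pass this test vacuously.
        self.assertNotEqual(data, expected)
        self.assertEqual(my_sort(data), expected)

if __name__ == "__main__":
    unittest.main()
```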
If you output a descriptive message on test failure - "Wrong answer. IsTestingUseful was 'false', expected 'false'" - oops, something doesn't look right here: the test itself must be wrong.
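For instance, a sketch of attaching such a message (the is_testing_useful function is made up for illustration):

```python
import unittest

def is_testing_useful():
    # Hypothetical predicate under test.
    return True

class TestIsTestingUseful(unittest.TestCase):
    def test_reports_useful(self):
        actual = is_testing_useful()
        # If this ever reports "was False, expected False", the assertion
        # itself is broken, not the code under test.
        self.assertTrue(
            actual,
            msg=f"Wrong answer. is_testing_useful was {actual}, expected True",
        )

if __name__ == "__main__":
    unittest.main()
```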
There is one possible strategy for testing the tests: deliberately introduce bugs in the code and check whether the test suite detects them. This technique is generally called mutation testing.
For example, a framework could modify the target code (the code being exercised by the unit tests), changing a + into a -, or a logical AND into a logical OR, and so on.
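A hand-rolled illustration of the idea, assuming a hypothetical total_price function and a mutant where the - has been flipped to + (a real framework would generate such mutants automatically):

```python
import unittest

def total_price(unit_price, quantity, discount):
    # Hypothetical code under test.
    return unit_price * quantity - discount

def total_price_mutant(unit_price, quantity, discount):
    # Hand-made mutant: the '-' flipped to '+', the kind of edit a
    # mutation-testing framework would perform automatically.
    return unit_price * quantity + discount

class TestTotalPrice(unittest.TestCase):
    def test_discount_is_subtracted(self):
        self.assertEqual(total_price(10, 3, 5), 25)

# Rerunning the test against total_price_mutant yields 35 instead of 25,
# so the test fails and the mutant is "killed". A mutant that survives
# points to a semantic gap in the suite.

if __name__ == "__main__":
    unittest.main()
```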
This strategy determines whether the tests have sufficient coverage, not in terms of tested functions, lines of code, blocks of code, or MC/DC, but semantically.
An example of such a framework for Smalltalk is MuTalk (https://code.google.com/p/mutalk/), but I'm pretty sure equivalent frameworks exist for other languages - see the Wikipedia page https://en.wikipedia.org/wiki/Mutation_testing.
But in that case, you don't really write tests to test the tests; you use a framework to analyse the completeness of your tests.