Here are two metrics you can track for your unit tests:
- Percentage of test failures that turn out to be a problem with the unit test itself, not a problem with the code running in a real environment. This often happens when you mock interfaces.
- Percentage of times that a bug fix or other non-breaking change forces an update to a unit test.
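To make the arithmetic concrete, here is a minimal Python sketch of how you might compute these two percentages from a log of test events. The record types and field names are hypothetical, invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class TestFailure:
    # True when the failure pointed at a real defect that would also
    # show up in a real environment; False when only the test itself
    # (e.g. an over-specified mock) was the problem.
    real_code_issue: bool

@dataclass
class TestUpdate:
    # True when a feature change drove the update; False when a bug
    # fix or other non-breaking change forced it.
    feature_change: bool

def false_positive_rate(failures: list[TestFailure]) -> float:
    """Metric 1: share of failures that were only a test problem."""
    if not failures:
        return 0.0
    return sum(not f.real_code_issue for f in failures) / len(failures)

def churn_rate(updates: list[TestUpdate]) -> float:
    """Metric 2: share of test updates forced by non-feature changes."""
    if not updates:
        return 0.0
    return sum(not u.feature_change for u in updates) / len(updates)
```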
The goal would be to have both of these percentages at 0%. That would indicate that every test failure represents a real issue in your code and that only feature changes require updates to the unit tests.
As these percentages approach 100%, your unit tests become less and less helpful. But what percentage marks the cutoff between your unit tests being a net positive and being a net negative? We can make two assumptions:
- Every test failure that reflects a real code issue exactly offsets one false positive.
- Every unit test update forced by a non-feature change is exactly offset by the benefit of a unit test update driven by a feature change.
With these two assumptions in place, 50% for each metric is the cutoff at which your unit tests become a net negative: at that point, every helpful failure or update is cancelled out by an unhelpful one. These are broad assumptions that will not always hold, and the right weightings would need to be determined on a project-by-project basis, based on the technology in place. Still, they provide a framework for thinking about the utility of unit testing.
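One way to see why 50% is the break-even point, under those assumptions, is to score each event as +1 or -1. This is just a hypothetical sketch of that arithmetic, not a real measurement tool:

```python
def failure_net_value(real_failures: int, false_positives: int) -> int:
    # Assumption 1: each real failure (+1) exactly offsets
    # one false positive (-1).
    return real_failures - false_positives

def update_net_value(feature_updates: int, non_feature_updates: int) -> int:
    # Assumption 2: each feature-driven update (+1) exactly offsets
    # one update forced by a non-breaking change (-1).
    return feature_updates - non_feature_updates

# At exactly 50% false positives the suite is neither helping nor hurting.
assert failure_net_value(real_failures=5, false_positives=5) == 0
# Past 50%, the suite is a net negative.
assert failure_net_value(real_failures=4, false_positives=6) < 0
```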
Check out my blog for more of my musings upon technology and various other topics.