DEV Community


The Problem With High Test Coverage

Robert Ecker on November 27, 2017

Last week I attended Dan North’s workshop “Testing Faster”. Dan North is the originator of the term Behavior Driven Development (BDD). The whole wo...
Alexandru Bucur

The main problem with tests is that they're usually written by programmers.

Most of the weird bugs I've encountered are edge cases that customers trigger. Of course it's good to cover the bug then and create a test case for that issue, but as you said, 'coverage' is misleading in that case.

Lasse Schultebraucks

That is why in TDD you write your test before implementing the production code. That way you make sure you don't just look at what the method does and write a test for that outcome. The result is a more independent approach.
But I see your point: separate testers may detect errors better than the developer who implemented the production code. On the other hand, they may also just assert the outcome of the production code method.

Aaron Santos

I realized that the best tester is the client: they always find a way to break the code...
A week ago I wrote some code to encrypt files using OpenSSL. In order to create them, I need two files and a password. My function creates two new files and uses them to create a final one. I checked everything: all the weird validations, the "what if..." cases, and I asked the whole support team what the common user does. More than 4 hours of testing. A colleague with more experience with the users also tested my code. Apparently, everything was fine.
Well, the code only stayed in production for 24 hours... One client found a way to make it crash. After 2 hours trying to figure out why, aaannnddd, finally, we found it: he had manually added the extension because (in his own words) "It doesn't have one" (Windows doesn't show it). The only case we didn't consider, because usually the user NEVER touches those files (once every 5 years), and it's even less likely that they modify them.
So I conclude that it doesn't matter if you write tons of test cases, the user always finds the case you never considered... Of course, write the cases that catch the most common errors. The weirdest ones, let the user find them.

Alexandru Bucur

As Aaron said below (above? :P), customers are 'clever'. You need to take into consideration all of the weird things they might do, including renaming files to match extension requirements, and that means either:

a) it's way too time-consuming to write tests for all of the cases, and from a business perspective it might not be feasible cost-wise, or
b) you will most likely miss something anyway.

Imho the best thing is to treat all user input as junk all of the time, and constantly sanitize it and compare it with what you actually need.

Also remember that the web is 'typeless', so user input is always tricky to validate.
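A minimal Python sketch of that "treat all input as junk" idea — the field name, rules, and the `.enc` extension here are invented for illustration, not taken from Aaron's actual OpenSSL code:

```python
import re

def sanitize_filename(raw: str) -> str:
    """Normalize untrusted input to the shape we actually need:
    strip whitespace, drop any extension the user may have added
    by hand, and reject anything that is not a plain name."""
    name = raw.strip()
    # Drop a user-added extension such as ".enc" (case-insensitive).
    name = re.sub(r"\.[A-Za-z0-9]+$", "", name)
    if not re.fullmatch(r"[A-Za-z0-9_-]{1,64}", name):
        raise ValueError(f"invalid file name: {raw!r}")
    return name

print(sanitize_filename("  report.ENC "))  # -> report
```

The point is not the particular regex but the direction of trust: instead of enumerating every weird thing a user might do, the code defines the one shape it accepts and rejects or normalizes everything else.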

James Richmond

What about legacy code?
Don't you think that in this case, 100% coverage is pretty great?

Robert Ecker

Yes, having a test coverage of 100% is always great. However, everything we do costs time and money. If I had unlimited resources I would probably also try to write tests for every possible case in a legacy system ;-)

James Richmond

And what if there was a tool that created coverage for legacy code automatically?
Do you know of such a tool?

Robert Ecker

Hehe, if you find such a tool and that tool creates only useful tests then let me know ;-)

James Richmond • Edited

Will do :)
So what's your recommendation? Testing only the main components?

Robert Ecker

It always depends ;-) If tests help you to build your software then do TDD. If you want to decide where to start writing tests for a legacy system then you might ask your stakeholders what’s most important for them and start with the components which are most likely to break and which would cause the biggest damage if they broke.

James Richmond

Thanks a lot!
You're making good and interesting points :)

Robert Ecker

:)

Raúl Gerardo Vázquez González

There are tools like IntelliTest msdn.microsoft.com/en-us/library/d...

Jessica Kerr (in a very interesting talk) mentions a tool called QuickCheck, which lets you run property-based testing to find cases where your program might fail: youtu.be/X25xOhntr6s?t=20m28s
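To make the property-based idea concrete, here is a hand-rolled sketch in Python (real tools like QuickCheck or Hypothesis generate the inputs and shrink failures for you; the run-length-encoding function is just a toy subject):

```python
import random

def run_length_encode(s: str) -> list[tuple[str, int]]:
    """Toy function under test: run-length encode a string."""
    out: list[tuple[str, int]] = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1] = (ch, out[-1][1] + 1)
        else:
            out.append((ch, 1))
    return out

def decode(pairs: list[tuple[str, int]]) -> str:
    return "".join(ch * n for ch, n in pairs)

# Property: decoding an encoding returns the original string,
# checked against many randomly generated inputs instead of a
# handful of hand-picked examples.
random.seed(0)
for _ in range(1000):
    s = "".join(random.choice("ab") for _ in range(random.randrange(10)))
    assert decode(run_length_encode(s)) == s
print("property held for 1000 random inputs")
```

Instead of asserting specific outputs, you state an invariant that must hold for *any* input and let the generator hunt for a counterexample — exactly the kind of customer-found edge case the thread above is about.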

Robert Ecker

Thanks for sharing, Raúl! Do you use such tools? Do they work well for you?

andrew

Not that I have seen 100% coverage, but if your testing suite is comprehensive, then uncovered code is dead code: if it really served some purpose, it would be covered by a specification from one of the stakeholders, wouldn't it?

Martin Häusler • Edited

Risk-based testing is essentially the same idea. Consider where a bug would hit you most often or where it would do the most damage; that's where you have to test. Testing is always selective, never a full proof. A good test suite will, in the best case, detect the presence of a bug, but it will never be able to show the absence of all bugs. Code coverage is nice (and comparatively easy to measure), but it should not be the primary metric to strive for. In my experience, getting code coverage higher than 70% (provided that the existing test cases really are meaningful) is hardly ever worth it. Better to spend your time on documentation.

Antero Karki

My main issue with this reasoning is that if it's so unimportant that a component works correctly, then why build it in the first place, or waste time maintaining it? Even more time, since there would be no tests to indicate what's working, what may be broken, how it may be broken, and so on.

Robert Ecker

I think in many cases building a component with low test coverage is still more useful for many users than nothing at all.

Silviu Marian

Code coverage doesn't seem to help with side effects.
You get the side effect executed and covered, but that doesn't imply the side effect is asserted; sometimes you can't even look for it.

But it does help you remember why code was written in the first place, and it forces you to review your own code.

Just aim for 100% of responsibilities tested, or 100% of features tested, not 100% of code covered.
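A contrived Python sketch of the covered-but-unasserted problem — the audit-log behaviour here is invented for illustration. The test executes every line (100% coverage), yet a bug in the side effect goes unnoticed because nothing asserts it:

```python
audit_log: list[str] = []

def withdraw(balance: int, amount: int) -> int:
    # Side effect: every withdrawal should be audited.
    audit_log.append(f"withdrew {balance}")  # bug: logs balance, not amount
    return balance - amount

# This assertion runs every line of withdraw, so coverage is 100%...
assert withdraw(100, 30) == 70
# ...but only an explicit assertion on the side effect would catch the bug:
# assert audit_log == ["withdrew 30"]   # this would fail
```

Coverage says the log line ran; only an assertion on `audit_log` itself would say it ran *correctly* — which is Silviu's point about testing responsibilities rather than lines.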

RobK • Edited

I assume this is NOT about unit testing. If that's the case, it looks good. Otherwise, this whole thing should be reconsidered or forgotten. It is already suspicious that it talks about testing without distinguishing between different types of tests.