DEV Community

Code Coverage is Useless

John Reese on February 21, 2019

Not too long ago there were talks around the office regarding a new testing initiative. Now, by itself, this is fantastic news. Who wouldn't want...
Igsem

I stopped reading somewhere in the middle. Test coverage for every project should be set to 100%. Test coverage doesn't mean you will have no bugs, but projects with higher test coverage have been shown to have significantly fewer bugs. 100% is a must, and if you feel some file isn't worth testing (although I've never seen such a file), then exclude it from the coverage. The cost of bugs is several times higher than the cost of writing tests, so write tests; don't be that lazy know-it-all who thinks his code is the best and will never need to change. Excuses are nice; in the meantime I am going forward with full coverage and almost no bugs :) ..TDD was invented for a reason, you know :)

John Reese

As highlighted in the article, if you're using TDD, I'd almost argue that setting a coverage target is redundant. It's nice as an enforcement mechanism, but TDD is going to have you at or near 100% coverage anyway.

The moral here is that while you may be leveraging TDD like a rockstar, you may have other peers on your project who are not and will do silly things just to hit the 100% goal.

I personally feel coverage works best as a metric. It should stay consistent or be on the upswing. A little dip here and there is perfectly acceptable, but a consistent drop would warrant some investigation.
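
A concrete (and entirely made-up) Python sketch of the kind of silly thing a hard 100% gate invites: a test that executes every line, satisfies a line-coverage check, and verifies nothing.

```python
def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount to a price."""
    if percent < 0 or percent > 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

def test_apply_discount_for_coverage_only():
    # Executes both branches, so a line-coverage tool reports 100% --
    # but there isn't a single assertion, so any behavior passes.
    apply_discount(100.0, 10.0)
    try:
        apply_discount(100.0, 150.0)
    except ValueError:
        pass

def test_apply_discount_properly():
    # The assertions are what actually pin the behavior down.
    assert apply_discount(100.0, 50.0) == 50.0
    try:
        apply_discount(100.0, 150.0)
        assert False, "expected ValueError"
    except ValueError:
        pass
```

Both tests produce identical coverage numbers; only the second one can ever fail.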

Oskar

Agree. Decide what you should test, then ensure 100% coverage of that.
Ask why automated tests exist: checking all the edge cases of complex logic in your head is impossible; humans just don't have the cognitive capacity for it. Automated tests are a must there.

Testing whether a React component renders properly, on the other hand, usually doesn't bring any value and costs time to write and maintain, so you probably don't want to do it.
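
A tiny illustration (all names invented) of why the "complex logic" bucket deserves exhaustive tests: even something as small as interval overlap has edge cases that are easy to get wrong when simulated in your head.

```python
def intervals_overlap(a_start, a_end, b_start, b_end):
    """Return True if [a_start, a_end] and [b_start, b_end] share any point."""
    return a_start <= b_end and b_start <= a_end

# Edge cases that mental simulation tends to miss:
assert intervals_overlap(1, 5, 5, 9)      # touching endpoints count as overlap
assert not intervals_overlap(1, 4, 5, 9)  # adjacent but disjoint
assert intervals_overlap(3, 3, 1, 9)      # zero-length interval inside another
assert intervals_overlap(1, 9, 3, 4)      # full containment
```

Each assertion pins down a boundary decision that the one-line implementation silently encodes.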

Mikko Rantalainen

I agree. Having ten files with full coverage is better than all files with 80% coverage, because full coverage of those ten files lets you apply mutation testing to them. Once the tests are good enough that mutation testing cannot find missing tests, you can be pretty sure the tests really test the full implementation instead of simply running every line of code once and covering a random part of the actual behavior.

I mean, 100% coverage is not a bad thing in itself, but it does not mean you have good tests in reality.
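
A hand-rolled sketch of the mutation-testing idea described above; real tools (PIT for Java, mutmut for Python) generate and run mutants automatically, but the core mechanism fits in a few lines. All names here are invented.

```python
def is_adult(age: int) -> bool:
    return age >= 18

def mutant_is_adult(age: int) -> bool:
    return age > 18  # the mutation: >= became >

def weak_test(fn) -> bool:
    # 100% line coverage of fn, but only one easy input is checked.
    return fn(30) is True

def strong_test(fn) -> bool:
    # The boundary check at 18 is what kills the mutant.
    return fn(30) is True and fn(18) is True and fn(17) is False

# weak_test passes for both the original and the mutant, so the mutant
# "survives" -- evidence that coverage alone proved nothing about the
# boundary. strong_test kills it.
```

A surviving mutant is exactly the "coverage without a real test" gap Mikko describes.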

James Hickey

Awesome - totally agree.

I typically use a use case based architecture (similar to clean architecture or a DDD type request/response model) and generally only test the use cases. Since these are the only entry points into the business logic, testing the actual code that will be initiated by users helps.

And of course, not every use case needs a test. But even if they all have one, that still doesn't mean 100% coverage. That's fine, since I know that what users will actually use is (mostly) covered.
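
A minimal sketch of that shape, with all names invented: one test at the use-case boundary exercises validation, storage, and the calculation together, and the helpers behind it get covered incidentally.

```python
class InMemoryAccounts:
    """A toy repository standing in for whatever persistence you use."""
    def __init__(self):
        self._balances = {}

    def get(self, user):
        return self._balances.get(user, 0)

    def set(self, user, amount):
        self._balances[user] = amount

def deposit_use_case(accounts, user, amount):
    """The single entry point for the 'deposit' business operation."""
    if amount <= 0:
        raise ValueError("deposit must be positive")
    accounts.set(user, accounts.get(user) + amount)
    return accounts.get(user)

# Testing only the entry point still drives every collaborator behind it:
accounts = InMemoryAccounts()
assert deposit_use_case(accounts, "ada", 50) == 50
assert deposit_use_case(accounts, "ada", 25) == 75
```

The repository methods never get their own tests, yet the behavior users actually depend on is pinned down.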

John Reese

Sounds very similar to how I've started to reason about whether something needs a test or not. I used to be in the camp of "test everything," and if class X didn't have any tests it just felt... wrong.

Though I've now shifted to really only testing things that have actual logic to them. If it's just a bunch of procedural lines, I tend to let it slide.

Jesse Phillips

I think the risk plane is key to many of these choices, and many factors play into it. Automation is necessary, and machine learning could play a role in that, but both have costs, and those considerations are important.

If you rewrite your app in a new framework every year, you lose the value of that code coverage quickly.

John Reese

Risk plans and opportunity costs were pretty eye-opening for me. Not all applications are created equal.

Stephen Mizell

Nice take on the topic, especially about considering risk and context.

I hope it's OK to share an article on here I wrote about test coverage. I came to the same conclusion along a different path.

Summary: test coverage isn't really about how much of the logic is covered; it tells you what percentage of the code was executed during a test run. The "uncalled code" number is the only genuinely helpful part of the metric.
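
A tiny illustration of that distinction, with invented names: a single call executes every line of the function, so line coverage reports 100%, yet the "condition is false" path is never exercised. Branch coverage (e.g. coverage.py's `--branch` mode) would flag the gap; plain line coverage won't.

```python
def classify(n: int) -> str:
    result = "small"
    if n > 10:
        result = "big"
    return result

# This one call touches all four lines of the body: 100% line coverage...
assert classify(20) == "big"

# ...but nothing above ever checked the untaken branch. That path only
# gets exercised (and its behavior only gets pinned down) here:
assert classify(5) == "small"
```

"Executed" and "tested" diverge exactly at branches like this one.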

CHADDA Chakib

Thanks for your post.

For me, coverage is mostly a function of how often I will update the code in the future.

That is the key metric for deciding whether a part of the code needs solid test coverage, or whether I can test it manually once and live with weak coverage.

Also, if some code in your project is used by only 1% of your customers, you should still pay it as much attention as the rest.

If the code exists, it's important; otherwise, remove it.

Martín Pérez

I agree with your points. I usually treat code coverage as an indicator, i.e. a metric, as you wrote. It hints at the team's intentions. 0% coverage? 10%, 20%? Maybe a POC, maybe the developers don't care; definitely a warning sign. 30%, 40%? Good intentions. 50%, 60%, 70%? Standard engineering practice: write tests, try to prove the main scenarios. More than 80% is probably unnecessary, as per your points.

Still, code coverage does not prove our code has no mistakes. Techniques like mutation testing (e.g. pitest.org for Java) try to address this issue and make coverage more trustworthy, and they are interesting as a concept.

John Reese

Yep! It's all about the context of the application. Some applications can get away with 20% coverage, some might require 80%. After that, we just want to monitor whether coverage is increasing or decreasing.

It's a little silly to decree that a small project that's currently running fine, but sitting at 20% coverage, needs an all hands on deck initiative to up the coverage to some arbitrary number.

Roger Campos

Thanks for the post; I agree in general with your vision. I just wanted to point out an additional benefit of code coverage: it helps you find dead and old code. I'm currently working on improving coverage on a big, 8-year-old codebase that started at 80-something percent coverage, and in doing so I'm also deleting and refactoring a lot of code.

The files with the least coverage are usually also the oldest ones, or the ones people considered not important enough to add tests to. So if they're not important, there's a good chance that time has rendered them useless by now. We also have full integration tests on top of unit tests, so if parts of the code still haven't been executed after running the whole suite, there's a real chance they're meaningless.

Benny Powers 🇮🇱🇨🇦

Would you recommend a different strategy for libraries, say, than apps?

Wouldn't you want a given library version to be totally covered?

John Reese

I don't think more coverage is a bad thing, especially for a library, which is going to be used by multiple applications. But one of the key takeaways from all of this is that if we say "totally covered" means 100%, and we enforce 100%, you're going to get some unwanted results.

I don't think it's bad to strive for 100%; it's a common outcome with TDD, which I believe in.

I think it's bad when you say all apps must have 100% and you'll even fail the build below 100%. There are going to be cases in which 100% doesn't make sense, so developers will have to get cute to make the build pass.

Put as much testing effort as you believe makes sense. If it's a library, go nuts. Maybe you'll get 100%, maybe you'll get 90%. What matters is that you made deliberate decisions to test what needs to be tested, and the coverage % is a reflection of that.

Then, going forward, you can monitor coverage as a metric. You can see if coverage is going down and, if it is, ask yourself or your team why.