
Leonardo Montini for This is Learning

Originally published at leonardomontini.dev

100% Code Coverage is a Lie 🎯

On a project I finally hit 100% Code Coverage 🎯 so what could go wrong now? I tested ALL the lines of my code, so there are no bugs! Well... not really.

If your only goal is high coverage, you're probably doing more harm than good, wasting time on near-useless tests just to see a green coverage report. And the bugs might still be there.

Imagine adding tests for simple getters and setters or an empty constructor with no logic. Do they increase the coverage? Yes. Do they add value? Nope.
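To make it concrete, here's a sketch of such a test (Jest/Vitest-style syntax, with a made-up `User` class). It turns lines green while verifying nothing interesting:

```ts
// A hypothetical class with trivial accessors and no logic.
class User {
  constructor(private _name: string = "") {}
  get name(): string {
    return this._name;
  }
  set name(value: string) {
    this._name = value;
  }
}

test("getter and setter work", () => {
  const user = new User();
  user.name = "Ada";
  expect(user.name).toBe("Ada"); // coverage goes up, confidence doesn't
});
```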

The goal of tests is to ensure that the code works as expected, not to increase coverage. If you're not testing the business logic, you're not really testing the code. Many projects have to meet certain coverage thresholds; that can make sense as an enforcement tool to ensure tests get written, but it shouldn't be the goal or the only thing that matters.
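As an example of the enforcement-tool view, here's a minimal sketch of a coverage gate, assuming Jest as the test runner (the numbers are arbitrary placeholders, not recommendations):

```ts
// jest.config.ts: the build fails if coverage drops below these thresholds.
import type { Config } from "jest";

const config: Config = {
  collectCoverage: true,
  coverageThreshold: {
    // Arbitrary example values; pick what makes sense for your project.
    global: { branches: 70, functions: 80, lines: 80, statements: 80 },
  },
};

export default config;
```

A gate like this ensures tests exist; it says nothing about whether they assert the right things.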

Sure, validating that certain lines are tested is better than not testing them at all, but as long as edge cases aren't covered, that coverage metric isn't worth much.

Coverage reports should instead be used as a signal indicating which parts of the code might need more testing. Once you've identified them, forget about the percentage and write robust tests for the business logic, including the possible edge cases.
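As a sketch of the difference (hypothetical `applyDiscount` function, Jest/Vitest-style syntax): the happy-path test is what bumps the coverage number, while the edge-case tests are the ones that catch real bugs:

```ts
// Hypothetical business rule: a discount is a percentage between 0 and 100.
function applyDiscount(price: number, percent: number): number {
  if (percent < 0 || percent > 100) {
    throw new RangeError("percent must be between 0 and 100");
  }
  return price - (price * percent) / 100;
}

// The happy path: this is the test that makes the report green.
test("applies a discount", () => {
  expect(applyDiscount(100, 20)).toBe(80);
});

// The edges: these are the tests that actually protect you.
test("rejects out-of-range discounts", () => {
  expect(() => applyDiscount(100, -5)).toThrow(RangeError);
  expect(() => applyDiscount(100, 150)).toThrow(RangeError);
});

test("handles boundary values", () => {
  expect(applyDiscount(100, 0)).toBe(100);
  expect(applyDiscount(100, 100)).toBe(0);
});
```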

Beware: Code Coverage is a tool, not a goal.


If you're curious to hear more about my opinion on Code Coverage, I recorded a video and you can find it on YouTube!


As bonus content, to further explain the concept, there's a joke in this tweet that makes it really easy to understand:

"Code coverage is not enough, and even edge cases are not enough. You need to test the business logic, not the code. Coverage will come naturally."


What do you think about code coverage? Do you like having coverage thresholds in projects? How high? Let's discuss in the comments!


Thanks for reading this article, I hope you found it interesting!

I recently launched my Discord server to talk about Open Source and Web Development, feel free to join: https://discord.gg/bqwyEa6We6

Do you like my content? You might consider subscribing to my YouTube channel! It means a lot to me ❤️
You can find it here: YouTube

Feel free to follow me to get notified when new articles are out ;)

Latest comments (13)

DamianReloaded

Tests are useful to detect when changes to the code break dependencies that don't necessarily break the build. The clearest example is database interaction, where changes to the code and to the database are made separately.

KinsonDigital

I think it depends. If the goal is just to get that extra green, then no, don't do it. But that's not why I do it. I do it to test the data, as well as to make sure nothing has changed with the getters and setters.

Richard Guay

I'm a programmer with an ASIC design background as well. I once designed an ASIC chip that had 100% test vector coverage (meaning every electrical route in the chip was tested). But chips would still fail due to other problems (mostly chip-mounting issues).

The same is true in software. Even with every line of code covered, there are ways to break almost any piece of software, because it's impossible to test every way the code could be used. As the author said, test coverage is a tool, but it should never be the goal.

I've never seen an article or book about creating test coverage for user-found errors in the code. Those would be the best types of tests to write: tests for actual areas of past failure, to ensure they don't come back in future versions. Spending more time on these types of tests would be of more value.

Richard Forshaw

I agree with this. The key sentence is:

"The goal of tests is to ensure that the code works as expected, not to increase the coverage."

Everyone should live by this. In fact, philosophies such as TDD and BDD enforce this by defining the functionality first and then writing a test that verifies the code meets that expectation. Simply adding a test to increase physical code coverage does not usually map to a code function (although sometimes it will).

However I don't 100% agree with this:

"If you're not testing the business logic, you're not testing the code."

You can test code without testing business logic. Programming languages have their own limitations, just as the functionality you're trying to implement comes with constraints. This is especially true if you're implementing a stand-alone API that doesn't control what it receives as input.

E.g. if the database has a maximum commit size, then the stakeholders should be informed how this impacts the user, and there may need to be a limit on what the user can do in one go.

Eljay-Adobe

"Code Coverage is a tool, not a goal."

That is a quotable quote!

I worked on a big project that was at 73% code coverage with unit tests. We devs were very happy with that. There were some folks (not devs) who were using the code coverage as a metric and wanted it to be higher.

That made no sense to us devs.

The value of doing test-driven development is that the unit tests are a forcing function that makes the code follow most of the SOLID principles. It makes the code avoid hidden dependencies and instead use dependency injection or parameter passing. It makes the code more robust and more malleable, reduces accidental complication (one kind of technical debt), and leads to higher cohesion & lower coupling. Highly coupled code is not unit-testable code.
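A minimal sketch of that dependency-injection point, with a hypothetical `Clock` and `Greeter` (Jest/Vitest-style syntax): because the collaborator is passed in, the test can substitute a fake instead of relying on a hidden global:

```ts
// The dependency is explicit and replaceable, not baked in (no hidden `new Date()`).
interface Clock {
  hour(): number;
}

class Greeter {
  constructor(private clock: Clock) {}
  greet(): string {
    return this.clock.hour() < 12 ? "Good morning" : "Hello";
  }
}

test("greets in the morning", () => {
  const fakeClock: Clock = { hour: () => 9 }; // fake injected in place of a real clock
  expect(new Greeter(fakeClock).greet()).toBe("Good morning");
});
```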

In my opinion, the value of SOLID is that OOP has some deficiencies. Over time, those deficiencies were noted, and SOLID was devised as a set of countermeasures to shore up where OOP was lacking.

The primary value of TDD is that it forces SOLID.

The secondary value of TDD is that it allows aggressive refactoring, with confidence.

The tertiary value of TDD is that, as an artifact, there is a test suite that should pass 100% of the time reliably and run very quickly (a few seconds), which ensures basic correctness of the code. And if it doesn't pass, there is either a regression, a non-regression bug in the code, or some tests in the suite that no longer jibe with the code (a bug in the tests).

liam-jones-lucout

Agreed that 100% coverage is not 100% confidence; however, it's a good place to start, and better than 90% coverage. I consult, and every single place I've been that has a quality gate below 100% coverage magically manages to leave the most complicated code untested.

For stuff not worth testing, I usually mandate comment labels that turn off coverage checks for those lines. That way the untested lines are explicit, and the coverage gate can stay at 100%, forcing developers to either write tests, which can be reviewed for efficacy, or declare a line not worth testing, which can be questioned.
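For instance, with an Istanbul-based coverage tool (e.g. nyc), the exclusion is an explicit comment that reviewers can see and question (a sketch; the function is hypothetical):

```ts
// The exclusion is declared in code, so it shows up in review and can be challenged.
/* istanbul ignore next */
export function debugDump(value: unknown): void {
  console.log(JSON.stringify(value, null, 2));
}
```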

Spotting something that isn't there on a PR, without running it, is usually very difficult, especially if a branch isn't tested or something like that.

Vincent A. Cicirello

I certainly agree with the essence of your post. High coverage isn't sufficient. Tests must include edge cases. High coverage without the right assertions tells us only that lines were executed, not whether they are correct. Etc. Etc.

There's one thing I always see in posts like this about test coverage, and that is your example of getters, setters, and empty constructors:

Imagine adding tests for simple getters and setters or an empty constructor with no logic. Do they increase the coverage? Yes. Do they add value? Nope.

If one assumes that tests for other things will cover these, but the report shows they were untouched by tests, then do you need these getters, setters, etc. at all? That is a question that should be asked whenever you have methods that you feel shouldn't be tested.

If you need them, then you need to test them. Does that empty constructor initialize the object correctly with the specified default behavior? Does that object behave as specified if initialized with no parameters? Does that setter actually set (e.g., maybe someone forgot to implement it and it has an empty body)? Does that setter set correctly (e.g., maybe it must compute something first)? Will it continue to set correctly if the class changes in future releases (e.g., now it is simply this.x = x, but later someone has reason to change the class fields to eliminate x and define a y instead, thus requiring that setter to become y = f(x))? If you are testing that setter to begin with, you can detect a regression if one occurs during such a refactoring. The same potential issues apply to untested getters.
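A sketch of that regression argument (hypothetical `Point` class, Jest/Vitest-style syntax): the test looks trivial today, but it pins the behavior down so a later refactoring of the fields can't silently break the setter:

```ts
class Point {
  private x = 0;

  setX(x: number): void {
    this.x = x; // if the fields change in a refactor, this must still behave the same
  }

  getX(): number {
    return this.x;
  }
}

test("setX actually sets", () => {
  const p = new Point();
  p.setX(42);
  expect(p.getX()).toBe(42); // would catch an accidentally emptied setter body
});
```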

Alex (The Engineering Bolt) ⚡

Test coverage on its own is not a good tool or metric. Used as part of TDD, though, it can help an engineer write better code and forces them to think about edge cases. It's about the process, not the goal of covering everything with tests.

When adding tests, you should balance unit, functional, and integration tests to make sure that end-to-end app behaviours are captured.

Sergio Rodrigo

Agree with this. However, it usually turns into an excuse to write untested code, either by bypassing TDD or just out of laziness. Testing 100% of behaviours (as opposed to just lines of code) should be the goal. You don't need to test getters (they'll be tested indirectly when testing other code anyway), but pushing untested code branches is not cool. I'd say 95% of the time this argument turns into an excuse not to be professional, even if the underlying principle is true.

Alex Lohr

Code coverage only tells you that code has run during tests. It's mostly useful to find out which branches of your code are untouched by tests so that you may consider if it is worth testing them.

It could very well be that a part of the code runs but there are no assertions to cover the result, which means you get 100% coverage and 0% confidence. What you actually want is to get the most confidence out of the fewest tests possible, so don't test what is a) already known to work (e.g. that an event triggers a handler; you already know that) or b) irrelevant to the outcome of your use case.
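A sketch of that "100% coverage and 0% confidence" trap, with a hypothetical `add` function: the first test executes every line, so the report is green, but with no assertion it can never fail:

```ts
function add(a: number, b: number): number {
  return a - b; // bug: subtracts instead of adds
}

test("add runs", () => {
  add(2, 3); // every line executes, coverage is 100%, nothing is checked
});

test("add returns the sum", () => {
  expect(add(2, 3)).toBe(5); // the assertion is what exposes the bug
});
```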

Anthony Fung

Tests are good, but cover only what is needed. I've seen some people bend the code completely out of shape for the sake of saying that it was built with TDD. It made the code more difficult to follow, and it didn't actually test the scenario properly.

Another downside of too many tests is that the code becomes very difficult to modify if requirements change.

Ravavyr

Fully agreed, but then again, writing tests is so nice...
And then you see a form, hit the submit button without entering anything, and watch it either:

  • submit empty data
  • error out without a nice, human-friendly message
  • do nothing, no response, nada

When devs can't be bothered to even write basic error responses and basic form validation because "HTML5 has required!"... psh, tests are pointless.

Thomas Hansen

We have 100% on (some of) our projects. If you're writing library code, it's arguably a must ...

But I see your point ...