Daniel Irvine 🏳️‍🌈

Do you aim for 80% code coverage? Let me guess which 80% you choose...

Cover image by Joost Crop on Unsplash.

Are you one of the many developers who believe there’s a sweet spot for code coverage?

Usually it’s something like 80%. You may have set up your CI environment to fail your build if it drops below that percentage.

Your team may be subject to this CI threshold even though you personally have never given any thought to why it exists.

People believe in this threshold because 100% is difficult to achieve. The belief is that reaching for anything beyond 80% takes a great deal of energy for little benefit. Put another way, code coverage is governed by the law of diminishing returns.

I think 100% code coverage is worth aiming for, and here’s one reason why.

The 20% you leave uncovered is probably the 20% that needs it the most.


The notion of Just Enough Coverage and why it doesn’t work

Which 80% do you choose? What 80% is being tested? And what’s in the 20% that isn’t?

There are a couple of obvious choices:

  • test the 80% that’s most straightforward / easiest to test
  • test only the happy path

Covering the easiest 80%

If you leave the most difficult 20% untested, you end up with a knot of spaghetti code festering in the core of your product.

That in itself is a risk. Spaghetti code has a tendency to grow and grow in a way that well-tested code does not.

The reason for this is simple: when we add on top of covered code, we are very confident about refactoring (pulling out abstractions, keeping the code concise, simplifying, etc). Conversely, we shy away from refactoring code which isn’t covered.

So eventually that 20% becomes a liability, and it becomes harder and harder to get to that 80% as the 20% pushes outwards. Life becomes difficult.

Covering only the happy path

If you’re only writing test coverage for the happy path, this suggests that your error-handling logic isn’t as well-factored as the rest of your code (assuming, again, that your tested code is in better shape than your untested code).

But unhappy path logic is often more critical to get right than the happy path. When errors occur, you need to ensure that they are reported correctly and that the effect on the user is limited. Perhaps the code you write will be alerting monitoring systems that something went wrong, or perhaps it’s placating the customer and letting them know how to deal with the issue. Either way, it’d be a disaster if that code didn’t work.

Not to mention that the happy path is the path you’re most likely to manually test.

So writing automated tests for the unhappy path is arguably more important than writing automated tests for the happy path.
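
To make that concrete, here’s a minimal sketch of an unhappy-path test in pytest. The `charge_card` function and `PaymentError` exception are hypothetical, purely for illustration:

```python
import pytest

class PaymentError(Exception):
    """Raised when a payment cannot be completed."""

def charge_card(amount, card_declined=False):
    # Hypothetical payment function: on the unhappy path it raises a
    # descriptive error instead of failing silently.
    if card_declined:
        raise PaymentError("card declined: ask the customer to retry")
    return "charged"

def test_declined_card_reports_a_helpful_error():
    # The unhappy path: check that the failure is surfaced with a
    # useful message rather than being swallowed.
    with pytest.raises(PaymentError, match="card declined"):
        charge_card(100, card_declined=True)
```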


Aiming for 100% coverage is not everyone’s cup of tea. Regardless of whatever lower threshold your CI environment requires for a passing build, aiming for full coverage is a great way to level up as a developer, and a very noble goal.

If you’re interested in this topic and you’re a front-end developer, you may be interested in this talk:

Happy testing! 🙏

Top comments (29)

Tomek Buszewski

Aiming for 100% code coverage is something I'd really love to do. But I don't have the time to write all the tests I want. Do you really have the time to do it?

It's different when working with a TDD approach. Then high coverage is actually something that comes naturally, as the tests also serve as requirements. But this is not everyone's cup of tea, and "forcing" people to use this methodology for the sake of coverage is just bad.

In my opinion, writing tests should start with edge cases. Writing simple cases is nice and will up your coverage, but that code will be tested manually, has probably already been tested by you and – given that everything around it works – should work if there are no logical errors. But edge cases – missing properties, malformed values, etc. – are the place to write tests for.

Patryk

Everyone should learn TDD, IMO.

Doesn't mean you should always use it.

I like TDD if I'm writing pure functions with very well defined rules (recent example, pluralization rules for different languages, since many Slavic languages have similar but not identical rules).
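
As a sketch (the names and forms here are just illustrative, not code from my actual library), a Polish plural rule looks roughly like this:

```python
def polish_plural_form(n, forms):
    """Pick the plural form for n; forms = (one, few, many)."""
    if n == 1:
        return forms[0]
    # 2-4 take the "few" form, except for 12-14.
    if 2 <= n % 10 <= 4 and not 12 <= n % 100 <= 14:
        return forms[1]
    return forms[2]

def test_polish_plural_form():
    forms = ("plik", "pliki", "plików")  # "file" in Polish
    assert polish_plural_form(1, forms) == "plik"
    assert polish_plural_form(3, forms) == "pliki"
    assert polish_plural_form(5, forms) == "plików"
    assert polish_plural_form(22, forms) == "pliki"
```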

I'm not as big a fan with web apps, as I find it restrictive, and I'd rather add tests after the code (but maybe that's just because I'm bad at design :D).

Tomek Buszewski

There are miles and miles between knowing something and using it. For example, TDD doesn't work when you need visual regression testing (or maybe it does, but I don't know how to implement it).

I really love TDD and often find myself spending too much time writing tests for my apps and then being too tired to write the actual code :D

What do you mean by web apps?

Patryk

"What do you mean by web apps?"

I mean I personally wouldn't use TDD to develop apps that are run on the web (essentially a buzzword for a website that has more functionality than an informative company or personal page, such as booking.com, a taxi booking app, etc. - anything with a lot of dynamic content).

I can use it for some functionality, but not the whole thing (in contrast to Obey the Testing Goat, which advocates always using TDD).

Tomek Buszewski

Such apps, in my opinion, should consist of modules. And a lot of modules, especially the ones that just expose an API, can be done with TDD. It's a whole different story with the front-end, but even there you can try TDD with E2E tests, because you only define "handlers", and they can be defined before the coding starts.

miniscruff

In my own experience, having and keeping 100% code coverage has improved my speed. My microservices/bots consistently run with the fewest problems and require very little maintenance. Combined with full CI, CD and linting, it is very fun to work on. I mostly work solo, just due to how the team is, but when they do work on it or see it, they are impressed.

The confidence I have is really high: I am on a 12-day vacation and know that there is a very high chance not a single one of my services will have any issues. Note that this does combine with k8s / some orchestration tool, as we do have server patching and the like.

This doesn't even take into account the speed and ease of adding new features on top of something with high test coverage. It's great, would recommend.

Tomek Buszewski

Hey miniscruff!

I never said that having high coverage is bad. All I am saying is, developing it takes a lot of time. In a perfect world, everyone would have time to write all the tests they want. But reality is different, and sometimes you just have to ship the feature.

miniscruff

Sure, in these cases I make sure to communicate before and after the feature is developed that it has no automated test suite. If my PO/boss is pushing for a faster release they hold responsibility for the outcome.

Technical debt is a big concern of mine and I'm not going to add to it without permission. Over time most people will give time for testing if you can prevent bad releases. Especially bosses that are ex developers.

If not, might be time for a change lol

Tomek Buszewski

Speaking as both a developer and a manager here: it's never an easy decision to take on technical debt, but sometimes it is the best one, the alternatives being either grinding overtime to your physical limits or missing a deadline that was announced publicly.

Sometimes shipping slightly broken software is better than not shipping at all.

Giving time to write tests is obvious for most, and for business people we just say "development will take X days" while we know that X = development + tests. But like I've said earlier, you don't always have enough time to cover every single corner case.

To mitigate technical debt, I try to introduce technical sprints (I did it before, and now I am trying this in a new organization): for example, every third sprint is purely technical and maintenance-based. This is the time for reaching 100% coverage and doing refactors.

miniscruff

I have heard of the technical sprint before but have never been a part of one. I am always skeptical of putting engineering time aside until later as, and this may just be my experience, that "free" time is the first to go when deadlines, requests or issues arise.

I mean, I'm 100% for it if it's consistent, but whenever I try to get that stuff going, like 75% of the time it's cut short or cut entirely.

Still, I agree that there are always conditions that fall outside an "ideal situation" and I have many times forgone tests to ship something.

I generally advocate for the best-case scenario; that way, when there is pushback from management or other devs, I have some wiggle room. But if I push for general "let's test a lot but not 100%" ideas, they get pushed back (at least in my experience) until it just becomes "testing is good but whatever you want to do is fine by us".

That is, if I say 100% is the minimum, with linting, edge cases, and e2e tests, that will get negotiated down to around 80% + linting. But if I were to push for 80% + linting, we might end up agreeing on only 50% without linting.

tl;dr I agree with everything you say, I just add a little extra so that what we both agree on actually sticks with the whole team. Because on a team of 10, it doesn't matter if I do 100% with TDD if the rest of the team doesn't write any tests.

Tomek Buszewski

"I am always skeptical of putting engineering time aside until later as, and this may just be my experience, that 'free' time is the first to go when deadlines, requests or issues arise."

That's when you plan for multiple sprints ahead and, when estimating or agreeing on a deadline, you say "4 sprints" and you already know that one will be purely tech.

Well, bottom line is, tests should be written from the hardest to the easiest, not the other (obvious) way around.

Patryk

Test coverage doesn't really mean much, and every situation is different.

In a library I wrote, I use a Python dict to store a bunch of lambda functions, and have another function that gets one and calls it.

Just because the code is "covered" it doesn't mean that there are no bugs in it. (Of course, I have tests that give me confidence that it works, as I've used TDD to write those rules and test each function extensively).

As far as pytest is concerned, if it hits the dict definition, that code is covered. It doesn't care about calling the lambda functions.
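
Something like this (illustrative names, not my real library):

```python
# Coverage marks the dict literal as executed on import, even though
# the bodies of the lambdas inside it may never actually run.
PLURAL_RULES = {
    "en": lambda n: 0 if n == 1 else 1,
    "fr": lambda n: 0 if n in (0, 1) else 1,
}

def plural_index(lang, n):
    return PLURAL_RULES[lang](n)

def test_english_rule():
    # Only the "en" lambda ever runs here, yet a line-coverage report
    # still shows the whole PLURAL_RULES block as covered.
    assert plural_index("en", 2) == 1
```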

As it's a reusable library, I really like testing it extensively - and it's very easy to get to 100%.

If I work on a website / web app, I will test some back-end things in pytest, some front-end in jest, and some in cypress, and worry more about being confident in my tests than getting to some magic threshold.

miniscruff

Agree – code coverage to me is a sanity check for things I may have missed. But TDD or similar helps give me the confidence I need.

Having a test cover a line of code is not as good as being confident it works. But, I know if it's not covered I am not confident at all it works.

Patryk

I may be reasonably confident something works even without automated testing. My threshold for acceptable confidence varies on whether something is a side project, or an app that handles customer payments, or if I'm prototyping as opposed to writing production code.

Sometimes, manual testing is good enough for me... Context is important.

I do love automated testing, for the record, and think it's important. Just be pragmatic about it.

miniscruff

Sure, I tend to skip testing entirely for the first release. Gotta get something out there first.

I also accept manual testing IF you only have like 2 endpoints, but probably only for a single update or temp fix. Long term, nothing beats automated tests.

Tyler Scott Williams

Love it. I almost wonder if there's value in saying "aim to cover the most difficult 20% of your codebase, and the rest is icing on the cake".

If you assume that 20% of what you've got is difficult to test but critical to function, cover that first. The rest should fall into place nicely with easy-to-reason-about test cases.

10/10, thanks for this perspective.

Yan Borowski

Testing: like you said, the 20% is the part that really needs it and that you don't (and never will) cover.

I love the Titanic example:

[Image: Titanic with 80% code coverage]

José Coelho

Hmm very interesting position on code coverage.

In my short experience with unit testing, focusing only on code coverage frustrates and slows down developers.
I would rather aim for meaningful and purposeful tests, with coverage as a secondary goal.

Have you been able to keep 100% coverage on a project with other developers? Genuinely interested 😄

Daniel Irvine 🏳️‍🌈

Thinking in terms of goals is something I’d agree with, and I also agree that coverage should be a secondary goal IF achieving it would be detrimental to team output. But I do think it’s an important aspirational goal for developers who want to get better at testing. (I’ve got another blog post about this topic, coming soon...)

I’ve worked for a TDD / XP consultancy before that always aimed for 100%. I don’t think they’d ever slip below 95%. I imagine that any company that is serious about TDD will be looking for close to 100% coverage.

Worth noting that you can still achieve 100% coverage in your OWN code (using TDD) even if the codebase overall doesn’t reach 100%.

Bernhard Götz

I agree with you. Usually there is a good reason why those 20% are not tested: they haven't been written to be testable. Writing testable code is easy for experienced TDDevelopers. They separate construction, business and untestable segments (e.g. UI) even if there is no unit test. There is a study that suggests that experienced TDDevelopers produce the same quality even when not practicing the TDD cycle.

For the people who say 100% code coverage is too expensive: you just exposed your lack of knowledge about TDD. TDD guarantees that you develop just as much code as you need. If you do not do TDD you will have more code, even redundant and useless code. TDD is about saving money and time, not wasting it to create some kind of beautiful code zen garden.

Ryan Latta

Code coverage always sparks a good debate.

I like using it with teams to reflect on coding/testing practices. So we're at 80%, what would it mean to get to 85%?

It isn't the % that really matters because, as you've stated, the type of tests and the reason they exist matter more. I use the coverage to coach through those assumptions.

Bugsy Sailor

I apologize, but can you explain what "code coverage" is? #newb

Patryk

You can see an example coverage report here, from one of my projects.

When I push my project to GitLab, GitLab CI runs my test suite and reports which lines were hit by the test run and which were missed in each file (total coverage is 90%).

For instance, it can tell you that your tests cover only one branch (e.g. the if), but not another (the matching else), or if one of your functions isn't covered at all.
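
For example (a made-up function, just to show the idea):

```python
def classify(n):
    if n >= 0:
        return "non-negative"
    else:
        return "negative"  # never executed by the test below

def test_classify_positive():
    # This test only exercises the `if` branch; a branch-aware run
    # (e.g. pytest --cov --cov-branch) would flag the missed `else`.
    assert classify(5) == "non-negative"
```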

Bugsy Sailor

Thank you.

Patryk

No problem. If you're not familiar with testing, I'd encourage you to look at unit testing tutorials for the language you work with.

For JavaScript, I love jest. For Python, pytest is great. They both support coverage reports, though I'm not sure if jest generates one as HTML or only in the CLI.

There's plenty of tutorials on YouTube, probably some on here too.

Steven Torrence

It's a term for what percentage of your app is covered by tests.

Rolf Streefkerk

Probably the thing to note here is: what kind of environment is this software running in? If it's medical software where correctness is paramount, I understand 100% code coverage is a must.

In regular web applications, I believe it's mostly wasted effort. You should aim to cover key areas of your application and not necessarily everything, because coverage has a cost whenever changes have to be applied.

Edward Tam

I believe in 100% coverage. That doesn't mean you should write a test for every single line of code you write.

Most code coverage measurement tools come with features like filters. My 100% may be covering the same thing your 80% does, but the difference is that my 100% includes the effort to verify that the 20% not covered is truly code I do not care about.
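
In coverage.py, for example, you can exclude lines you've deliberately decided not to test (the function here is made up for illustration):

```python
def debug_dump(state):  # pragma: no cover
    # Deliberately excluded from coverage: a debug-only helper that
    # I've examined and decided not to test.
    print(state)
```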

Lane Wagner

I disagree... It depends on the kind of code. For libraries, sure, 80% or more is probably great. In backend CRUD apps? 80% is likely overkill. Code coverage is a fairly silly statistic, imo.