
Achieving 100% code coverage will make you a better developer. Seriously.

d_ir profile image Daniel Irvine 🏳️‍🌈 ・3 min read

Cover image by Brett Jordan on Unsplash.

Yesterday I wrote about one reason why 100% code coverage is worth aiming for. You can read that post here:

Today I want to discuss another reason why. And this one is even more important than yesterday’s. Here it is:

Possessing the ability to achieve 100% code coverage is an important milestone on your journey to being an expert developer.

Think of 100% coverage as a skill

Coverage is a skill, just like being able to code in JavaScript, TypeScript or Python, and just like being able to use a framework like React, Vue or Django.

If you think achieving 100% coverage is hard, perhaps it’s because you’ve never done it!

Just in the same way React would be hard if you’d never written a React app, 100% coverage is hard to achieve if you’ve never done it.

Now answer this question for yourself:

How many times in your career have you achieved 100% coverage?

If the answer is zero, then what excuse have you been using?

Here are two:

  • code coverage is a useless metric anyway
  • code coverage is too expensive / time-intensive for web applications, and only suited when software failure would be catastrophic

“But code coverage is a useless metric!”

I understand why you’re saying that. You think it’s useless because it’s possible to write terrible tests and still achieve 100% coverage. I agree with this.

It is a useless metric. If that’s what you’re using it for. Here’s a post that does a good job of explaining why code coverage is a relatively useless metric.

But ironically enough, this is exactly why it’s a useful skill to practice.

One, because full coverage is easy enough to achieve on its own, but it’s hard to do well.

Two, because we have relatively few developer testing goals that can help us get better at testing.

(We use the term developer testing to distinguish testing practices that are useful for developers from QA testing practices.)

So the milestone is actually in three parts:

  • Can you achieve 100% coverage?
  • Can you achieve 100% coverage by being honest? Without tests that are designed only to increase coverage, like explicit testing of getters/setters?
  • Can you achieve 100% coverage without overtesting? (You want just enough tests that you get full coverage without having overlapping execution and without creating brittle tests.)
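The "honesty" point in the list above can be made concrete with a small sketch. The `Account` class and both tests below are hypothetical examples, not taken from any real codebase: the first test exists only to execute the getter line, while the second covers the same lines as a side effect of testing real behaviour.

```python
class Account:
    def __init__(self, balance=0):
        self._balance = balance

    @property
    def balance(self):
        return self._balance

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        self._balance += amount


# Dishonest: written only to make the coverage report count the getter.
def test_balance_getter():
    assert Account(balance=5).balance == 5


# Honest: covers the same lines while testing behaviour users care about.
def test_deposit_increases_balance():
    account = Account()
    account.deposit(10)
    assert account.balance == 10
```

Both tests raise the coverage number; only the second would catch a real regression in `deposit`.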

“100% code coverage isn’t worth bothering about for non-critical software, like web applications”

Again, I can understand why you’re saying this. Web applications, for the most part, aren’t of critical importance. Unlike, say, medical appliances or rocket ships.

When I hear the above what I think is “we don’t know how to achieve full coverage without drastically reducing productivity.”

Which again, is totally understandable. Testing is hard.

But there are many, many experienced developers who are capable of achieving full coverage at speed. They can do that because they were motivated enough to get good at testing, and they took the time to learn how to do it well.


I’m sold. How can I learn how to do this?

  • Start using TDD. You can learn from books like my React TDD book.
  • Ask experienced testers to review your tests. Feel free to send PRs my way, I’ll happily look at them!
  • Use side projects to learn, so you’re not putting your paid employment at risk when you’re figuring out how to make things work. Carve out some time in your day to learn.

Once you know how to achieve coverage and achieve it well, code coverage becomes far less important...

Personally, I very rarely measure code coverage. My TDD workflow means I’m at 100%. I don’t say that to sound conceited: at some point in my career, getting to 100% coverage was an important goal. But now that I know how to do it, I’m working towards other goals.

As I said above, developer testing suffers from having no clear ways of improving, and we have no objective ways of measuring our testing performance.

There are many milestones on the road to becoming an expert developer: being able to refactor mercilessly, using TDD, and being able to apply the four rules of simple design.

100% coverage is a great first milestone.


Daniel Irvine 🏳️‍🌈

I’m the author of Mastering React Test-Driven Development, available now from Packt. I run the Queer Code London meetup.

Discussion


You've probably seen this image before, but this is what 100% code coverage can look like:

It even passes all tests.

This is what will happen when you make high code coverage a strict requirement for developers.

 

Error: Expected an instance of W, got M instead.

 

According to the posted dashboard the assertion held up, so it must have been a W which was received.

 

I hadn’t seen this, thank you for sharing :)

 
 

This picture looks like something out of the opening scenes of Idiocracy.

 

But there are many, many experienced developers who are capable of achieving full coverage at speed.

On your last post, I commented that I don't care about getting 100% coverage in web apps, and I stand by what I said :)

It's not a question of being capable of getting to 100%. It's a question of testing the code you write, and tests bringing you confidence.

Sometimes, you test things that are already covered in another package (e.g. your models' __str__ method in Django - Django should (and does) test that, not you.)

Personally, I wouldn't bother. If your preference is to go ahead and cover such cases to get 100%, that's cool. No need to act smug about it, though, IMO.

 

I can't remember where I read it, but the author made a point that too many new developers are too obsessed with 100% test coverage and that it can end up eating too much development time. The author's point was that you should test the most important parts and be happy with maybe 80% test coverage in order to save some time.

Also, another issue with 100% coverage is that when one thing changes about your code, you have to re-evaluate your tests, make changes, and maybe add more. It can be a time suck to try to get to 100%. Some languages, such as Scala or even Java, have better testing tools that help cut down the time required to get as close to 100% test coverage as possible.

 

[The] author made a point that too many new developers are too obsessed with 100% test coverage and that it can end up eating too much development time. The author's point was that you should test the most important parts and be happy with maybe 80% test coverage in order to save some time.

I have no problem with 100% test coverage in some cases. I have 100% coverage in a library I wrote, as it's easy to test, I wrote all of the code, and it adds value to test everything.

That's the key thing to me - how much value does adding more tests add?

If I am using a framework like Django, testing that Django isn't broken doesn't add value to me, as it has its own test suite.

from foo.models import Foo

def test_foo_name_matches_bar():
    my_foo = Foo(name="Bar")
    assert my_foo.name == "Bar"

Congrats - Django isn't broken. Completely useless test, IMO.
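By contrast, a test earns its keep when it exercises logic you wrote yourself. Here is a hedged sketch along the same lines, using a hypothetical `slugify` helper rather than a framework-supplied field:

```python
import re

def slugify(name):
    """Collapse a display name into a URL-safe slug (our own logic)."""
    return re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")

def test_slugify_collapses_punctuation_and_case():
    # This covers code *we* wrote, so a failure here is *our* bug.
    assert slugify("Hello, World!") == "hello-world"
```

The difference is who owns the behaviour under test: the framework's test suite guards the framework; ours should guard our code.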

Also another issue with 100% coverage is when one thing changes about your code you have to reevaluate your tests and make changes and maybe add more etc.

You should refactor tests just like you refactor code. Don't get sucked into the sunk cost fallacy that just because you wrote some tests at some point and later realize they don't bring value, they are untouchable because you spent x hours writing them.

If they are harder to maintain than the value they bring, re-write or even delete them. If you're confident with your code, even with the coverage dropping a bit, that's fine with me.


Basically, be pragmatic, not dogmatic, when it comes to testing (and most other disciplines).

 

martinfowler.com/bliki/TestCoverag...

"If you make a certain level of coverage a target, people will try to attain it. The trouble is that high coverage numbers are too easy to reach with low quality testing. At the most absurd level you have AssertionFreeTesting. But even without that you get lots of tests looking for things that rarely go wrong distracting you from testing the things that really matter.

Like most aspects of programming, testing requires thoughtfulness. TDD is a very useful, but certainly not sufficient, tool to help you get good tests. If you are testing thoughtfully and well, I would expect a coverage percentage in the upper 80s or 90s. I would be suspicious of anything like 100% - it would smell of someone writing tests to make the coverage numbers happy, but not thinking about what they are doing."
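The AssertionFreeTesting failure mode Fowler mentions is easy to demonstrate. In this hypothetical sketch, the "test" executes every line of `discount`, so the coverage report shows 100%, yet it asserts nothing at all:

```python
def discount(price, code):
    if code == "SAVE10":
        return price * 0.9
    return price

# Assertion-free "test": runs both branches, so every line is covered,
# but no behaviour is actually checked. The bug could be anything.
def test_discount_runs():
    discount(100, "SAVE10")
    discount(100, None)
```

Coverage only records which lines were executed; it cannot know whether anything meaningful was verified along the way.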

 

I agree with Martin Fowler and respectfully disagree with your take - I think it's important to avoid being dogmatic about these things.

Saying you need "X" to be an "expert" developer is an unhelpful hot-take. Software development projects, as with many things in life, are rarely black-and-white.

 

Kyle, I agree with Martin Fowler on this too. I’m not dogmatic about it either (despite what you might think from my writing 🤣).

My point with this post is that the skill of being able to achieve 100% coverage is a great skill to possess as a developer.

Not that I’d always need to achieve it on every project.

I too am suspicious of 100% coverage. That’s my point about honesty above. Being able to achieve 100% coverage without cheating is difficult.

One thing I’ve learned from writing about code coverage is that it’s hard to get across the message that I’m trying to. It’s unusual for writers to frame code coverage as a learning/growth tool. I’ll keep trying!

 

I reached 100% coverage on a ~1500 LOC project (a language, actually) a while back, though I haven't bothered too much since I tried GitHub Actions.
It helped me get better at testing, for sure, but also at coding.

Nice Article, thanks.

 

EDIT: it is ~15000 LOC, not 1500. my bad

 

I would say that 100% coverage is useless because you're chasing the wrong metric. Covering all branches doesn't mean you're covering all use cases. We should be chasing case coverage instead. The problem is, of course, there's no way to measure that reliably and automatically.

Additionally, there are parts of many applications (especially web apps, but not exclusively) which are inherently integration points and should not be unit tested at all, so 100% coverage is actually bad there because you're unit testing integration, which is not only against the idea of unit testing but requires a ton more work. Testing these pieces of code requires mocks and other stand-ins which are almost never required in our unit tests.

Having said all of that, we do in fact aim for 100% coverage of the non-integration files. But again, that's only because it's measurable and case coverage really isn't.
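The "100% of the non-integration files" approach described above can be expressed directly in the coverage tool's configuration. A minimal sketch for Python's coverage.py, assuming illustrative directory names (`integrations/`, `migrations/` are placeholders, not a recommendation):

```ini
# .coveragerc -- measure only the code we unit test; paths are illustrative
[run]
omit =
    */integrations/*
    */migrations/*

[report]
fail_under = 100
```

With the integration points excluded from measurement, `fail_under = 100` enforces full coverage only where it is meaningful.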

 

I see what you did here. Controversial thoughts always catch people's attention. But seriously: tests are also code, additional code to maintain. More tests give you more safety in moving forward, but they also slow moving forward. Yes, really.

Some say tests allow you to move forward faster: you invest at the beginning and it pays off later. Yeah, yeah. I've seen that "pay off" when removing tested code modules, or when requirements changed and the tests were thrown in the trash. I see static code analysis and static type systems as the way to go. Tests are useful in some amount, but never too much, and never code coverage as a metric. Never.

 

Take this post for example:

He has a cool example showing that 100% code coverage does not mean good or correct code is being tested:


Here, we call GetAnswerString with 2 and 2. This method should give us back “The answer is 4”.
Unfortunately, the developer didn’t really do addition and the method always returns “The answer is 42”.

Unfortunately, the unit tests are just built to ensure that the string starts with the expected prefix, so the actual value isn’t tested.

As a result, we have 100% passing tests and a blatantly incorrect method.

Just because a line is executed by a test, doesn’t mean that the line is correct or accurately tested.
@integerman
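The quoted example can be re-sketched in Python (the original post's language and names may differ; this is an illustration of the same failure, not the original code):

```python
def get_answer_string(a, b):
    # Buggy: ignores its inputs and always returns 42.
    return "The answer is 42"

# A passing test that never inspects the actual sum, so it covers
# every line of get_answer_string while missing the bug entirely.
def test_answer_starts_with_expected_prefix():
    assert get_answer_string(2, 2).startswith("The answer is")
```

The test passes, the line is covered, and the method is blatantly wrong: coverage measures execution, not correctness.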

 

Yes, but also no. Or maybe a "Yes, but...".

To quote Marilyn Strathern's phrasing of Goodhart's law: "When a measure becomes a target, it ceases to be a good measure." (source)

The goal is to have well-tested software, and code coverage (in one of many shapes) can be used as a metric for that. Or better: the lack of code coverage is a smell indicating that software isn't well tested. However, just because you haven't detected a smell, doesn't mean there isn't a flaw.

So sure, aim for 100% test coverage, but be careful to only use good tests (there are plenty of tests that cover a line but don't test it), and don't think that 100% means "job done". It's just that one of the check boxes has been ticked.

Side note 1: code coverage comes in many different shapes and sizes, with varying demands and usefulness.
Side note 2: I'm not entirely convinced that 100% should be the threshold value, always and ever. It depends on the kind of code and environment you're in, and should depend on the needs of you, the company, or the customer. I would suggest starting with a 100% goal, and see where that is or isn't feasible and where it is or isn't helpful. Maybe you should have different targets for different kinds of code, as testing everything upfront might cost more than fixing some things afterward.
In the same vein, doing a project in the waterfall style, UMLing all the things, once in uni made me a better developer, because I now have these tools in my toolkit. However, I wouldn't recommend anyone doing this in a production environment unless there is a strict need to design and document everything upfront (although I'm not sure if even aviation would require this nowadays).

 

One of the reasons I like Scala is that it has excellent testing tools. The tools really shorten the amount of code it takes to fully test your code. One of those tools, ScalaCheck, works by you basically telling it what extremes your functions should handle; it then tests those boundaries and beyond, running hundreds of tests for you with only a few lines of code.
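The property-based idea behind ScalaCheck can be sketched in a few lines of plain Python (this is a hand-rolled illustration of the technique, not ScalaCheck's actual API): generate many inputs, deliberately including the boundary values, and assert an invariant over all of them.

```python
import random

def clamp(value, low, high):
    """Keep value within [low, high]."""
    return max(low, min(value, high))

# Hand-rolled property test: boundary values plus many random inputs,
# each checked against the invariants rather than a single hand-picked case.
random.seed(0)
cases = [-10**9, -1, 0, 1, 100, 101, 10**9]
cases += [random.randint(-10**6, 10**6) for _ in range(500)]
for v in cases:
    result = clamp(v, 0, 100)
    assert 0 <= result <= 100          # result always lands in range
    if 0 <= v <= 100:
        assert result == v             # in-range values pass through
```

A library does this far better (shrinking failing cases, smarter generators), but even this sketch exercises hundreds of inputs from a handful of lines.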

 

It seems like you're very passionate about testing and programming, Daniel. I don't want to seem like an asshole, but my twelve years' experience in the industry as a developer has time and time again shown that 100% code coverage is not worth striving for.

In the early days of my career, I used to think you had to aim for 100% code coverage. That's what developers used to be told to aim for. But once you reach a certain number (70-80% approximately), anything beyond that becomes increasingly difficult to achieve (the last 10% takes 90% of the time).

I think you and everyone else would agree that any tests are better than no tests. That aiming for an achievable and realistic target of say 70% is better than not focusing on tests at all. The issue with aiming for 100% coverage is developers inevitably will start to take shortcuts as they become fatigued with striving for 100% (which is quite difficult to do).

Instead of writing quality tests, developers will start writing tests for the sake of bumping up numbers in the coverage report. As an industry, we do not do enough to educate developers on how to write good tests, we just tell them to do it and leave them to their own devices (at least, in the front-end space we do).

In the web space, the ecosystem is composed of packages from npm. It is not uncommon for a modern application to have tens of these Node packages, each with its own dependencies. If you are aiming for 100%, it is unavoidable that you will be writing tests around these dependencies, having to mock them to get your tests to work and, inevitably, testing packages that might already have tests.

Not to push my own agenda or advertise my own articles, but I wrote a comprehensive guide to unit testing on the front-end not too long ago. In my article, I have a section on writing good unit tests, advocating for developers to not just write tests, but spot bad code and refactor it. Testing bad code doesn't make it any better or safer, but bad code without tests makes it harder to refactor it.

 

Not sure if you’re just trolling or not, but with my FIFTEEN years of experience in the industry, I’m curious why you’d assume I’m not experienced in the topic I’m writing about 😉

It’s not that I disagree with what you’re saying, but I think you missed my point: as someone who has spent years teaching TDD and software design, I know how important it is to have clear goals that people can aspire to. With good TDD practice, 100% code coverage is straightforward and can even lead to quicker software development.

I enjoyed your article about unit testing. I’d be interested to hear what you think of my book, Mastering React Test-Driven Development. Let me know your feedback if you do read it.

 

Great points here! Getting to 100% code coverage is difficult to achieve, but worth attempting on the way to becoming a much better developer!

 

Thanks for posting, interesting read 👍. Here's one more perspective from @localheinz that I enjoyed.

 

You are correct. Your time is better spent testing the components that actually do something.