In some of the places and on some of the teams I've worked with, I've been asked what percentage of test coverage we should shoot for, and I've never had a good answer. Personally, I try to write tests that help define or validate the piece of code I'm working on.
When I run a code coverage scan in Android Studio, the result is usually quite a low percentage. However, it seems odd to me that you'd write a test for an immutable model that simply checks every field.
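For example, a test like this (a hypothetical Kotlin model and JUnit test, just to show the kind of test I mean) would raise the coverage number without validating much that the compiler doesn't already guarantee:

```kotlin
import org.junit.Assert.assertEquals
import org.junit.Test

// A typical immutable model: the compiler already guarantees most of its behavior.
data class User(val id: Long, val name: String, val email: String)

class UserTest {
    // This mostly restates the constructor; it adds little beyond
    // what the type system already enforces.
    @Test
    fun `constructor assigns all fields`() {
        val user = User(id = 1L, name = "Ada", email = "ada@example.com")
        assertEquals(1L, user.id)
        assertEquals("Ada", user.name)
        assertEquals("ada@example.com", user.email)
    }
}
```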
What do you think? How would you measure coverage?
Top comments (8)
We ignore files that don't necessarily need to be tested at the moment, even if they don't fit the above criteria. Code coverage is inherently pretty arbitrary, so chasing a number without purpose probably isn't ideal.
If 100% indicates success, set yourself up for 100% by only measuring the important parts. Once you get near 100%, you can expand to the other nice-to-haves.
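On a JVM/Gradle project, for instance, you could keep only the important parts in the report by excluding the rest; a rough sketch with the JaCoCo plugin (the exclude patterns are placeholders, and an Android setup would need different wiring):

```kotlin
// build.gradle.kts for a plain JVM module
plugins {
    java
    jacoco
}

tasks.jacocoTestReport {
    dependsOn(tasks.test)
    // Keep the report focused on code we actually care about covering:
    // drop generated sources and plain data models from the measurement.
    classDirectories.setFrom(
        files(classDirectories.files.map {
            fileTree(it) {
                exclude("**/generated/**", "**/model/**")
            }
        })
    )
}
```

With something like that in place, "100% of the important parts" becomes a number you can realistically track.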
I'll add that code coverage is a pretty "bad" metric, but no metric is perfect, and I do think it can serve a purpose when used correctly.
Yes! Aim for 100% on the important things, but just as important, don't sacrifice simplicity for the 100%. If your coverage is already pretty high and the core rules are covered, 80-90%+ should be fine.
The two rules I try to follow when programming are:
- If I have to combine 10 components to complete a behavior, it's too much work (it might hit 100% coverage, though).
- If it takes me more than 2 minutes to understand the design, I'm being set up for failure (I'll probably introduce a bug).
If I have to do either of those while maintaining the code, then something might be off.
The domain and the amount of time you have for manual testing will determine what your percentage ends up being. I work on an open source project that I didn't design with tests in mind, and I no longer have the time to test it manually (testing games takes a lot of time), so I release almost zero features per year...
As Woody Zuill would say, "Maximize the good." Figure out what works and then maximize it for the best result. No hard and fast rules here: people caring for the code base is what matters most.
I agree, I think having some idea of coverage is better than none. I tend to use acceptance criteria to write tests and then break out unit tests where it makes sense. This gives me a pretty good spread of tests as I work.
Yeah, I'm with Ben on the imperfection of code coverage as a metric. I set up Istanbul/NYC to ignore files that are just wrappers around API calls, files with constants, and any file where the type system is doing all of the heavy lifting. But honestly, I don't use code coverage as anything more than a fun thing to watch on the build. Instead, I use the number of bugs that pop up in a given sprint as my measure of quality. If a bug occurs, then I know that my code coverage was low and/or that our tests were flaky, so I immediately go beef up the tests. :)
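For reference, that ignore list is just a few globs in the NYC config; a `.nycrc` along these lines (the paths here are made up):

```json
{
  "all": true,
  "exclude": [
    "src/api/**",
    "src/constants/**",
    "**/*.d.ts"
  ]
}
```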
There is no good answer; it's a vanity metric.
Test coverage tools are a good way of understanding what parts of your code are untested, that's all.
Don't get me wrong, I love tests! But coverage doesn't tell you the value of your tests with respect to giving you the ability to change and refactor your system with confidence. Unfortunately, there isn't really a number that can be assigned to that.
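A test like this (a hypothetical Kotlin/JUnit sketch) is the classic example: it gives 100% line coverage of the function and zero confidence, because it asserts nothing:

```kotlin
import org.junit.Test

// Hypothetical function under test.
fun applyDiscount(price: Double, percent: Double): Double =
    price - price * (percent / 100)

class DiscountTest {
    // Executes every line of applyDiscount, so the coverage report shows 100%,
    // but there are no assertions: a refactoring bug would still pass.
    @Test
    fun `runs without crashing`() {
        applyDiscount(100.0, 10.0)
    }
}
```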
Each of my tests enforces its own independent coverage rules for the code it needs to cover, and it must cover those things 100% or the test fails. This allows for strict coverage where you want it, but leaves all of that up to the judgement of the test writer. So some tests are not required to cover anything, while others might be required to independently achieve 100% coverage of multiple classes under test.
That's a tough question.
If you don't have a starting point or a first target, I would generally aim for something around 70-80%.
But that's just a starting point. The answer totally depends on the circumstances. For example: at work, we still have a few old but quite big services (written in C# / .NET). They have pretty high test coverage (around 90% or more) because the boilerplate and standard setup code make up just a few percent of the entire service. All the new services are microservices and contain a lot less business logic; relatively speaking, such a project contains much more boilerplate code. And of course, we don't test the framework. We just want to test our business logic.
So in the end, there is no real answer to this. Find out what works for you. If you find most of the bugs with code coverage of 30% (or maybe even less), that's awesome. Don't shoot for 70-80% just because some guy told you to in a comment. (Of course, I'm referring to myself and the first part of my comment.)
If you want to enforce a metric in something like SonarQube, my suggestion is to set it to "the code coverage should not drop". So most of the time the coverage should go up or stay the same. But again: this should not be a hard rule. There will be times when you can happily delete an old, well-tested service because it is no longer needed. Commits like that will make your coverage go down, but you need those commits. Deleting code is great.
I use mutation testing as my metric. That's far better than code coverage. For .NET I'm using stryker-mutator.io/
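The idea in a nutshell (an illustrative Kotlin/JUnit sketch, not Stryker output): the tool makes small changes to your code, such as flipping an operator, reruns the tests, and reports any "mutant" that no test catches:

```kotlin
import org.junit.Assert.assertFalse
import org.junit.Assert.assertTrue
import org.junit.Test

// Code under test.
fun isAdult(age: Int): Boolean = age >= 18

// A mutation testing tool would silently swap ">=" for ">" (an off-by-one mutant)
// and rerun the suite. If no test fails, the mutant "survives" and is reported.

class IsAdultTest {
    @Test
    fun `boundary case kills the off-by-one mutant`() {
        // Without this boundary assertion the mutant would survive,
        // even though line coverage of isAdult is already 100%.
        assertTrue(isAdult(18))
        assertFalse(isAdult(17))
    }
}
```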