I have always thought that code coverage can be looked at in a similar way to availability. According to the SRE handbook, a system should be as ava...
I would look at it from a different angle.
Are you getting lots of defects?
Then your testing strategy probably isn't good enough.
Do you lack the confidence to refactor and change your system?
Then your testing strategy probably isn't good enough.
Trying to aim for a number is not going to help you. Tests are a means to an end, measure those ends instead.
I like this idea. Aiming for high coverage is good, but it's equally if not more important that your tests are flexible and assert the right conditions. Just because a test has executed a line of code doesn't mean it has meaningfully tested that line of code. That being said, there is no chance that a line of code is tested if it isn't covered, so any changes to it could result in defects or problems when refactoring.
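To make the point concrete, here's a minimal sketch (the function and test names are made up for illustration): both tests below produce the same line coverage, but only one can actually catch a bug.

```python
# Hypothetical function under test.
def apply_discount(price, percent):
    if percent < 0 or percent > 100:
        raise ValueError("percent out of range")
    return price * (1 - percent / 100)

def test_covered_but_meaningless():
    # Executes the happy path, so these lines count as "covered",
    # but asserts nothing -- a wrong result would still pass.
    apply_discount(100, 10)

def test_covered_and_meaningful():
    # Same lines executed, but the behaviour is actually checked.
    assert apply_discount(100, 10) == 90.0
```

A coverage report would score both tests identically, which is exactly why the percentage alone can't tell you whether the conditions being asserted are the right ones.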
I like this idea. Many people get caught up in the numbers game and may even lose sight of testing the right things. Measuring the end result is what's important.
Coverage is not a good metric if followed strictly. It ignores the significance of the code being covered, giving equal value to places that are complex and those that are trivial. It creates a bogus prioritization of what to test.
In the general sense of the word "coverage" we do want "full" coverage. You should test all your code to some degree.
Here are some practical tips:
You're going to get diminishing returns after a point, so 85-95% strikes me as a good-ish range, but I think quality can't really be measured by a number alone. It helps you get there but can't be the be-all and end-all.
Don't concentrate on coverage numbers. If your most complex, fragile, and/or critical code paths are covered with multiple good-quality tests, that's way better than if you've spent days eking out some extra coverage for all your getters, setters, and generated code.
What you're looking for with test coverage isn't a number or completeness; it's the confidence to change stuff and know that if you break something, your tests will flag it up. So test the stuff that matters and that is likely to be impacted by future changes.
I've got projects with 50% coverage that are tested awesomely and give huge confidence and change agility. I've also seen projects with > 90% coverage that break every release because most of the tests are worthless.
It's got to be a little al dente but still cooked through.
I think that high coverage is always a good thing, but since time is always lacking, the most important thing is that the main methods are covered.
For example, if your code is only 60% covered but it's the "small" methods that are uncovered, that's OK, since the chances of bugs in those methods are lower, and even if a bug occurs, it's easier to find. Do you agree?
You also want to be careful of exactly what 100% means.
You can get 100% coverage by automatically generating tests. That doesn't tell you anything about the correctness of those tests, just that the current code passes.
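A quick sketch of that failure mode (the buggy `add` function is invented for illustration): an auto-generated "characterization" test just records whatever the code currently returns, so it reaches full coverage while faithfully preserving the bug.

```python
# Hypothetical: the implementation is buggy (subtracts instead of adds).
def add(a, b):
    return a - b  # bug

# An auto-generated test locks in the current output -- full coverage,
# green build, bug preserved:
def test_add_generated():
    assert add(5, 3) == 2  # asserts the bug, not the intent
```

The suite passes and the coverage dashboard is at 100%, yet nothing about the code's correctness has been established.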
I've actually realized a benefit to 100% coverage that you cannot get with anything less. If you maintain 100%, that means you will never introduce untested code. Otherwise you might selectively add code, maintain the overall coverage percentage, and no one will notice if unexercised code is introduced.
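Some back-of-the-envelope arithmetic (illustrative numbers only) shows why any threshold below 100% leaves this gap:

```python
# A codebase at 95% coverage absorbs a completely untested new module
# while barely moving the overall number.
covered_lines, total_lines = 9_500, 10_000   # 95.0% overall
untested_new_lines = 200                     # new module, zero tests

overall = covered_lines / (total_lines + untested_new_lines) * 100
print(f"{overall:.1f}%")  # 93.1% -- still passes a typical 90% gate
```

Only a 100% requirement makes "new untested code" impossible to sneak past the metric; any lower gate merely bounds how much of it can accumulate.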
Of course, the quality and fragility of tests are also important factors in successful testing. Meaningful assertions that utilize this coverage are also necessary, or else you aren't fully reaping the benefits of it. 100% coverage is generally coaxed out one way or another, be it in your automatic unit/integration/e2e tests, or through manual QA.
It definitely takes a lot of work to get here, and it isn't recommended for time-constrained projects. I do agree with the other comments here that it isn't always necessary, but I did feel compelled to highlight the other side of the argument.
The way I look at it, with our huge amount of legacy code: Something > Nothing.
Also be careful because you can have 100% coverage with bad tests. This is not helpful either.
TDD - code coverage is greater than 100%! I have tests for production code I haven't written yet. Yeah, right!