We need to embrace the bitter fact that most of us, as developers, have hated writing unit tests at some point in our careers. While some of us still hate writing tests (and I don’t blame them), I developed a weird interest in writing unit tests after working on a number of JavaScript (mainly Node.js) projects over time. And many times, I have seen people argue about what counts as acceptable unit test coverage, both in meetings and on online developer forums.
After observing all those dramas, and after having terrible experiences myself throughout the years, I thought I should very briefly write down my two cents on writing unit tests with proper coverage. While these notes are based on my experience writing unit tests for Node.js applications, I strongly believe they apply to any type of application written in any programming language. And I’m pretty sure many of you have more experience than I do on this topic, so feel free to share your opinions, which would certainly help me as well as other readers.
This article was originally posted on Deepal’s Blog (blog.insiderattack.net) as “100% Unit Test Coverage — Is that a Myth?” by Deepal Jayasekara.
Why do you need unit tests? Aren’t integration tests sufficient?
One problem with unit tests is that even if they are all passing, it still doesn’t mean your application will operate correctly. The reason is, as we all know, that unit tests only stub/mock the dependencies and test the individual building blocks of your application in isolation. In contrast, integration tests assert whether your application behaves properly once all those building blocks are put together. Then why do we write unit tests at all? Why can’t we be satisfied with integration tests alone?
We need to understand the purpose of unit tests to answer this question.
Unit tests are there to increase the developer’s confidence that changes made in the future won’t break the existing functionality.
Can’t we get the same confidence level from integration tests? Not really.
Running integration tests is usually an expensive operation, as it involves communicating with real, or at least near-real, dependencies. This is not something you can do on every code change, as it hurts productivity.
Another reason is that it is extremely hard to reproduce and test all the execution paths, including the edge cases, in integration tests, whereas in unit tests it is relatively easy to steer the execution path through fine-grained dependency stubbing and test those scenarios.
80% Coverage or 100% Coverage?
I have seen, many times and in many projects, people agreeing on 80% as a good test coverage number. I strongly oppose this decision because I still do not have answers to the following two questions:
- How do you quantify the acceptable test coverage? Who comes up with the exact number, and how?
- If 80% coverage is acceptable, which 80% of the code would you cover?
If you are a project manager and you expect 80% unit test coverage from your developers, you should know that they are going to leave the most critical 20% of the code untested and cover the most comfortable 80%, just to make you happy!
In my opinion, tests should cover as much of your code as possible, preferably 100%. Any piece of code left untested can be changed by another developer at any time, and the resulting break in functionality can go unnoticed.
However, as we all know, test coverage is measured in multiple ways, such as line coverage, branch coverage, and function coverage. Obtaining 100% line coverage is not that hard. But does 100% line coverage mean that the entire code is properly unit tested? This leads us to our next topic.
Line Coverage vs Branch Coverage
A line is considered covered if any of the statements on that line was executed during the tests. But if the code execution splits into multiple branches on a particular line, line coverage will not account for all the possible execution paths. Execution paths, also known as branches, are the different paths your application logic might take at runtime. For example, the following line contains a statement with two branches:
```javascript
const result = isEveryoneHappy ? happyFunc() : sadFunc();
```
Test coverage tools consider the above line covered if the code execution hits it, regardless of the value of isEveryoneHappy. But depending on that value, the execution takes either the happyFunc() or the sadFunc() path, which could produce two completely different outcomes.
Therefore, branch coverage is a much more powerful and accurate representation of test coverage.
Achieving 100% branch coverage is not that hard at all, given that you write your code in a testable way and use the right tools at your disposal to stub the dependencies and drive your code down the different branches.
Last but not least, always make sure that your tests cover the most important assertions related to your application’s functionality. 100% unit test coverage will not help you if you haven’t identified the most important functionalities that need to be tested. Once the source code is 100% covered and all the tests properly assert the required functionality, the test suite becomes a huge investment that eases future development.
I hope I have left you with something important to think about when writing unit tests. Anyway, this topic is open to suggestions, so feel free to let me know your thoughts in the comments.
Top comments (1)
My biggest issue with the 80% target, or with the notion of testing only the most common paths, is that if and when something does go wrong, you have much less data on it than normal. With few interactions with those paths in your code, you get fewer debugging opportunities, and when something goes wrong it will be harder to reproduce on your end.
I have been doing 100% statement and branch coverage for everything after our base PoC. That is, the first mini-release is fine at no coverage, as it is used to verify the idea even holds water, and it can be used to fine-tune any CI/CD or DB infrastructure.
However, I think people confuse 100% with perfect systems. 100% is the baseline; there will still be bugs, and you still have to adjust over time.