During my 10-year career as a software developer, I have had the pleasure of working with many different companies and on even more projects. Those were in different fields: from the news sector, gaming, and education to the finance sector, where I currently work. And every single one of them had one thing in common. In the interview and during onboarding, they all stressed how important it is for them to have good tests and for all code to be tested. And almost every single one of them failed at this to a certain degree. In this post, I will cover the most common pattern I notice every day across all the projects I have seen: tests that, in my opinion, give false confidence in the quality of the code.
Everyone starts a new project the same way: optimistic and with good intentions. So where does it go wrong? Mostly, it is in implementing ideas, theories, and restrictions learned in college or from some online article without really understanding them. I am not saying those practices are wrong, but you should first understand how and why they work. Two of them are test-driven development (TDD for the rest of this text) and coverage percentage.
OK, so let's go back to the project. Often during the planning of the project, or after some initial stage, someone says: this needs to be tested properly to ensure the quality of our code. And that is a great statement with which I completely agree. But the keyword is properly. The most common reaction is to unit test everything and require 100% coverage. Some even start saying TDD. It still sounds like a good idea. But then it doesn't work. Why?
Let us start with TDD. According to TDD, you first write the test, and only then the code. The tests fail at first, and then you write code that makes them pass. This helps you write cleaner code and reduce duplication. Then again, out of all the companies and projects I worked with, only one attempted to keep up with this, and even they didn't stick with it completely. Here I am aiming mostly at UI developers; I find TDD a bit easier to follow on the backend with languages like Java. But imagine writing a test for code that outputs some HTML, where you are checking whether the output has a certain class on a specific tag, certain text between the tags, or who knows what. You just end up throwing the test out, writing the code, and then writing appropriate tests for it once you have a clearer picture of how the logic will work. And here lies the problem: the moment you throw out the theory you are supposedly following, your tests become vulnerable, because you are pretending to follow some rules while in reality doing something else.
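For those who have never seen the flow in practice, here is a minimal sketch of what TDD is supposed to look like, assuming Jest and a hypothetical formatPrice helper (the names are made up for illustration). The test is written first and fails, and only then is the implementation added.

```js
// Step 1: write the test before formatPrice exists. Running it now fails,
// which is exactly what TDD expects at this point.
const { formatPrice } = require('./formatPrice');

test('formats a number as a price with two decimals', () => {
  expect(formatPrice(10)).toBe('$10.00');
  expect(formatPrice(9.5)).toBe('$9.50');
});

// Step 2: only after watching the test fail, add the implementation
// in formatPrice.js:
//
//   exports.formatPrice = (value) => `$${value.toFixed(2)}`;
```

In a toy case like this, the loop is easy. With HTML output from a component, writing that first failing test is exactly the part that tends to get skipped.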
OK, enough of TDD. While it is still mentioned, I hear about it less and less. So let us go to unit testing. There are many different types of tests for your code. Unit, integration, and e2e are some of them, and all play their role in ensuring the quality and stability of your code. The most commonly talked about are unit tests. And I see them done wrong so often. No sugar coating. Plain and simply wrong. By definition, a unit test tests a unit: the smallest piece of code that is logically isolated from the rest of the system. And here, the keyword is isolated.
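To make "isolated" concrete, here is a minimal sketch of what a real unit test looks like, again assuming Jest; calculateDiscount is a hypothetical pure function I made up for illustration. It imports nothing, touches no shared state, and the test exercises only that one unit.

```js
// A hypothetical unit: a pure function with no imports, no I/O, no shared state.
function calculateDiscount(total, percentage) {
  return total - total * (percentage / 100);
}

// The test exercises only this unit, in isolation from the rest of the system.
test('applies a percentage discount to the total', () => {
  expect(calculateDiscount(100, 20)).toBe(80);
  expect(calculateDiscount(50, 0)).toBe(50);
});
```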
I do admit, again, that this is more common in the JavaScript world. Historically, it was much more difficult to test. At first, code was written in one huge file and nested inside functions, so it was unreachable, and later, when we got modules, mocking imports was initially a tricky problem. Today, that is all pretty much solved. But code still suffers from issues that make it difficult to test. Functions are often quite large and do many things inside, so developers end up writing tests for that function but also for the other modules it uses. They don't mock imported modules, and functions are still nested inside components if we are talking about something like React. Those same functions use variables from the outside context, making them even more difficult to test.
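As a sketch of what isolating a unit from its imports can look like, here is a hypothetical example assuming Jest; getUserName and its dependency fetchUser in './api' are made-up names. Mocking the import keeps the test focused on getUserName alone instead of also exercising the API module.

```js
// Replace the real './api' module with a mock so the test never touches it.
jest.mock('./api', () => ({
  fetchUser: jest.fn().mockResolvedValue({ firstName: 'Ada', lastName: 'Lovelace' }),
}));

const { fetchUser } = require('./api');
const { getUserName } = require('./getUserName');

// Only getUserName is under test; its dependency is controlled by the mock.
test('builds the full name from the fetched user', async () => {
  await expect(getUserName(1)).resolves.toBe('Ada Lovelace');
  expect(fetchUser).toHaveBeenCalledWith(1);
});
```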
This leads to the last common thing, and that is coverage. Many set a high coverage requirement, often even 100%. I will not say that is necessarily wrong, but it often gives you more confidence in your tests than it should, because coverage only says that a specific part of the code was executed, not that it was tested. Think of a function that contains nothing but a for loop running 50 times and doing nothing. Running that function in a test will increase coverage, but did you test that it looped 50 times for nothing? An empty for loop might be a simple and stupid example, but go back to the earlier problem with incorrect unit tests that don't, or can't, mock other parts of the code. Just by running a test against that piece of code, the report will show a higher coverage percentage because the other parts of the code it uses were run as well. And those may or may not be tested. Usually, you don't find that out in a good way.
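Here is a minimal, made-up sketch of that gap between "executed" and "tested", again assuming Jest. The test below marks every line of syncItems as covered while verifying almost nothing about its behaviour.

```js
// A hypothetical function: the loop body is where real work would happen.
function syncItems(items) {
  for (let i = 0; i < 50; i++) {
    // imagine a side effect here that no test ever asserts on
  }
  return items;
}

// This test pushes syncItems to 100% line coverage, yet the only thing
// it proves is that the function returned something.
test('syncItems runs', () => {
  expect(syncItems([])).toBeDefined();
});
```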
Now, these were some situations where things are just not implemented correctly. But what does that mean? While things work, it means little more than false confidence. But when things go bad, it means a loss of at least time, and with it money. You might not think much of that, but imagine the situation where you are working on some part of existing code, you change it, and you adapt the tests for it. And suddenly things don't work. Maybe something else is breaking: some unrelated test, or coverage for a part of the code you didn't touch. You can't submit broken code, yet fixing it is not part of your task. Ideally, it is a small and quick change. But what if it requires more tests for that other part of the code, or some refactoring that takes time? Do you go in front of the team or the manager in the daily stand-up and tell them it won't take two days but four because someone didn't write tests properly? Do you throw your colleague under the bus and risk a conflict? A maybe worse situation is finding a problem in production and the manager coming to the team asking why this happened if we have tests. The possible situations range from uncomfortable to very bad: impact on raises, project results, and team relationships.
And now for the conclusion. I am not saying you should not test your code, or that you should not have a coverage report and requirement. My whole point in this article is: don't get too comfortable with those metrics, and be aware of the quality of both your code and your tests. Don't drop them, but don't let them give you false confidence, and don't have them just to tick a box. Good testing can prevent bugs and improve the quality of your code. Bad testing can cost you time, money, and reputation in the long term.
For more, you can follow me on Twitter, LinkedIn, GitHub, or Instagram.