Introduction
Every developer has been at that stage where they were pushed to just release the "new cool feature" that is going to maximize revenue for the company. Most of the time, arguments like "the feature is working great, we did some manual tests", "you can always iterate", "it does not need to be perfect" and other good ones come at the engineers and make them wonder if shipping the code without unit, functional or e2e tests is OK. Well, if you had started doing TDD instead, you would not be asking yourself that question.
TDD is hard... I spent too much time on it
Do you want to recognize how professional a software engineer is? Take a look at the PRs they create: can you see any tests? How much coverage is being added? Do they complain about the value of writing tests?
If you are building a new feature, writing tests is not optional, it is a must. That's a sign of maturity. Even when you need to ship that feature as soon as possible, or it is just a PoC (I can't count how many times those PoCs ended up in production).
But there are times when you are new to a codebase and you are asked to start building things on top of it. We have all been there, I know that feeling. You open that page: thousands of lines that make no sense, full of legacy code, no docs and, of course, no tests at all.
In that case, adding unit tests before doing anything might be a challenge (of course you don't want to test implementation details, right?).
You may say: "ah! gotcha! where is your TDD now?"
Well, adding unit tests is not mandatory under those circumstances; writing functional tests could save your life! Detecting what needs to be covered becomes simpler, and of course you can ask the people who have been there for a while for help: product owners, senior engineers, QAs, even the CEO! This is your opportunity to add a lot of value: tests and docs. After finishing your functional tests, you will be able to dive into the code and start refactoring the whole thing, and along the way you could find some opportunities to add unit tests.
Believe me, once you start with TDD you won't ever find any more excuses not to write your tests. You may wonder why they are so important:
- First of all, they allow you (or your future peers) to refactor the code you wrote.
- They will guide you to write more maintainable code. Remember: mocking is a code smell.
- They will force you to understand the feature first.
- They help you to catch bugs easily.
- They will cover your a**
Do I need 100% coverage?
You need 500%... and yet, it is not enough.
Remember, it is not about hitting a metric. If it were, you could just do snapshot testing for all your projects and, et voilà, ask for a raise or promotion.
Testing is all about making sure the value you ship to your users is the right one. Even when your code breaks in production, you don't have to panic: we are all human beings. Just go to the code and reproduce the bug with a test (now you can ask for a promotion ;-)).
Code coverage tools will help you find those "spots" that lack tests, and not covering them is a sign of irresponsibility. If you don't need "that part of the code" / feature, then... just remove it. Why would you want to keep it in the code? KISS.
Do I only need integration tests?
Integration tests are more focused on the user's features. Let's say you want to write a calculator that solves linear equations. You will have a spec like:
- Solve the linear equation `a * x + b = 0`
You may say: "well, I can have some tests for that":
```javascript
describe('calculator', () => {
  it('should solve linear equation a * x + b = 0 with a <> 0');
  it('should solve linear equation a * x + b = 0 with a = 0 and b = 0');
  it('should solve linear equation a * x + b = 0 with a = 0 and b <> 0');
});
```
The code would be something like:
```javascript
const add = (a, b) => a + b;
const subtract = (a, b) => a - b;
const multiply = (a, b) => a * b;
const divide = (a, b) => a / b;

// Solve a * x + b = 0
function solve(a, b) {
  if (a !== 0) return divide(multiply(-1, b), a);
  if (b === 0) return 'identity equation';
  return 'no solution';
}
```
Now you manage to pass your 3 tests, and temptation will come to say: "let's ship it!". Well... no.
How do you actually know that `add`, `subtract` and `multiply` work correctly? How do you ensure that future developers will understand how those functions need to behave? For example, say the requirements change and you need to implement `multiply` with the recursive Karatsuba algorithm for large numbers: are you 100% sure you are not going to break anything else? (I won't ever be.)
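As a sketch of what that change might look like, here is a non-optimized Karatsuba multiplication on non-negative `BigInt`s, purely illustrative:

```javascript
// Recursive Karatsuba multiplication, illustrative sketch.
// Assumes non-negative BigInt operands.
function karatsuba(x, y) {
  if (x < 10n && y < 10n) return x * y; // base case: single-digit operands
  const digits = Math.max(x.toString().length, y.toString().length);
  const base = 10n ** BigInt(Math.floor(digits / 2));
  const [xHigh, xLow] = [x / base, x % base];
  const [yHigh, yLow] = [y / base, y % base];
  const high = karatsuba(xHigh, yHigh);
  const low = karatsuba(xLow, yLow);
  // Karatsuba's trick: the middle term costs one recursive call, not two.
  const mid = karatsuba(xHigh + xLow, yHigh + yLow) - high - low;
  return high * base * base + mid * base + low;
}
```

Without unit tests on `multiply`, swapping in something like this is a leap of faith; with them, it is a routine refactor.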
If you take a look, we only have 3 tests from those specs. Are you sure you covered all the edge cases? Just... think about the domain of the numbers... Integration tests won't save you from domain edge cases. Why would you add the following test?
```javascript
describe('calculator', () => {
  it('should solve linear equation a * x + b = 0 with a <> 0 and a is a real number greater than 0');
});
```
That test does not belong to the calculator integration suite, so why don't you just add your checks to the `multiply` function? And if somebody tells you that you don't have to cover `multiply` because it is a one-line function... just write the test, for God's sake, and show your professionalism.
But my manager told me there is no time for tests
Show your manager the following video
Probably you are in the wrong place. Or maybe this is your chance to drive a change and "make an impact".
Believe me, those who complain that you spend so much time writing tests are the first ones who will come to you and ask: "why in the name of god did you ship a bug?"... but that is a different story.
As you may know by now, writing tests is not about satisfying QA people, proving to others that your code works, showing how many new lines you covered, or how many commits you can show on GitHub to get a promotion. Tests are here to help you and guide you through refactoring, to tell you and your peers: "don't be afraid of touching this code". Tests mean empathy.
Remember: "it is not about how many features you ship, but rather about shipping the right ones".