Gabriel Abud

Why do we Test?


As I write tests way past the work day and fight with Jest, mocks, and React components, I find myself ruminating over this question.

What is the point? Is it to meet some arbitrary metric (90% or 100% test coverage) placed on you by managers or senior engineers? Is it for the users, so they have a better experience and fewer bugs? Is it for yourself, to feed some narcissistic tendencies and grandiose ideas that you're the best programmer in the world?

Am I the only one that gets a dopamine rush from these little green dots?

Ultimately the most important metric for a business is how much money is saved by writing tests. But how many of us, as engineers, really have a good understanding of this?

Most developers don't care about this metric. Not only that, they are so far removed from the money-making side of a business that it's nearly impossible for them to estimate it. A project manager may have a better sense of the cost, but they're not the ones writing the tests, nor do they understand the codebase well enough to tell a developer what to test. The people in a business who understand the true cost of things don't understand the software and how it's written. This is one of the conundrums of labor specialization: we become experts in our own domain, but in doing so we miss the bigger picture.

So as narrow-minded engineers, we need a better reason than "to save money": something we can understand and relate to, while not being too restrictive.

We should test to save developer time.

Hear me out. Developer time is something we have a good understanding of (some of you may scoff at this, I know). Developers understand which features are likely to break and how long things will take to implement. Your team's time is not free, so saving it is, in most cases, strongly correlated with saving your company money anyway. Testing, in essence, is an investment in your team's future development experience. Saving developer time is ultimately the principle behind DRY programming, extreme programming (XP), and SLURP programming. Okay, I made that last one up; there are too many stupid programming acronyms.

Is it worth your time?

Our own time also holds up better as a metric across different situations and company types. Facebook and Google have drastically different use cases for tests than a small startup getting off the ground. A breaking feature in production for www.facebook.com is likely to set off a wave of alarm bells that results in a lot of work for developers (aka $$$). End-to-end tests for a product used by millions of people are therefore much more crucial than for one used internally by a handful of employees.

But how does prioritizing developer time help us to actually write better tests?

Let's go over the different kinds of tests and how this way of thinking can help you:

1. Unit Tests

These should be the quickest to write and should give us assurance that the individual pieces of our system work as we intended. Ultimately they should run quickly, test your own code (not 3rd-party libraries), and serve as documentation for future developers. They save developer time by facilitating refactoring and helping onboard new team members. When an integration test inevitably fails, a unit test can often tell you exactly where and why it failed. Writing code against a testable interface also promotes good practices, like pure functions and dependency injection.

Unit tests should also be quick enough that you can use them to drive development (see TDD).

While you can and should have unit tests on both the frontend and backend, I believe they deliver the most value in your backend business logic.
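As a sketch of that kind of test (the function and its rules are hypothetical, and the syntax assumes Jest with TypeScript): a pure function takes everything it needs as arguments, so its tests need no mocks or setup, stay fast, and double as documentation.

```ts
// discount.test.ts — a minimal Jest-style sketch.
// calculateDiscount is a hypothetical pure business-logic function.
function calculateDiscount(total: number, isMember: boolean): number {
  if (total < 0) throw new Error("total must be non-negative");
  return isMember ? total * 0.9 : total;
}

test("members get 10% off", () => {
  expect(calculateDiscount(100, true)).toBe(90);
});

test("non-members pay full price", () => {
  expect(calculateDiscount(100, false)).toBe(100);
});

test("rejects negative totals", () => {
  expect(() => calculateDiscount(-1, true)).toThrow();
});
```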

2. Integration Tests

These test how things interact within your system. Integration tests save us time by preventing common use cases from breaking as we refactor. I tend to think of these as more frontend-leaning tests, although they can also live on the backend. They are also much quicker than manually clicking through multi-step forms, assuming they are well written. Integration tests may still use mocks, and per unit of time spent writing them, they give us more assurance than unit tests that our system works as the user expects.
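A sketch of what that might look like with React Testing Library and Jest (SignupForm, its labels, and the confirmation message are all hypothetical; only the network boundary is mocked, while the components render and interact for real):

```tsx
// SignupForm.test.tsx — an integration-style test with React Testing Library.
// SignupForm is a hypothetical component that POSTs an email and then shows
// a confirmation message.
import { render, screen, fireEvent } from "@testing-library/react";
import "@testing-library/jest-dom";
import { SignupForm } from "./SignupForm";

test("submitting the form shows a confirmation", async () => {
  // Mock the HTTP boundary, not our own components.
  global.fetch = jest.fn().mockResolvedValue({ ok: true }) as any;

  render(<SignupForm />);
  fireEvent.change(screen.getByLabelText(/email/i), {
    target: { value: "ada@example.com" },
  });
  fireEvent.click(screen.getByRole("button", { name: /sign up/i }));

  // findBy* queries wait for the async submit to settle, like a user would.
  expect(await screen.findByText(/thanks for signing up/i)).toBeInTheDocument();
});
```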

3. End-to-End Tests

These test how your system works as a whole. A true end-to-end test doesn't mock anything and runs through your software the way real users would. These have the most value but are also the most complicated to set up and take the most time to run. End-to-end tests save developer time by preventing after-hours calls about how billing is down for the entire company. Maybe your TLS certificate expired or your Single Sign-On provider is misconfigured. Dammit John, I told you not to touch those settings.
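A sketch using Playwright, with a hypothetical staging URL, labels, and login flow; the point is that nothing is mocked, so expired certificates and misconfigured redirects actually get exercised:

```ts
// billing.spec.ts — an end-to-end sketch with Playwright: real browser,
// real backend, no mocks. URL, selectors, and credentials are hypothetical.
import { test, expect } from "@playwright/test";

test("a user can reach the billing page after signing in", async ({ page }) => {
  await page.goto("https://staging.example.com/login");
  await page.getByLabel("Email").fill("test-user@example.com");
  await page.getByLabel("Password").fill(process.env.E2E_PASSWORD ?? "");
  await page.getByRole("button", { name: "Sign in" }).click();

  // Exercising the whole stack (TLS, SSO redirects, the real backend) is
  // exactly what catches the "billing is down" class of failures.
  await page.goto("https://staging.example.com/billing");
  await expect(page.getByRole("heading", { name: "Billing" })).toBeVisible();
});
```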

Are there any bad tests?

This is not to say that all tests are good. You have to keep an eye out for bad tests too: the ones that eat up developer time instead of saving it.

Examples of this are tightly coupled tests, or tests that care too much about implementation details. You should constantly be asking yourself: what am I trying to achieve with this test? Am I testing new business logic, which is prone to human error and refactors, or am I testing how an existing library works? You don't need to test React, Flask, or Django; thousands of developers have already done that job for you.
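To make the contrast concrete, a sketch with a hypothetical Counter class: the first test is coupled to a private field name and breaks on harmless refactors, while the second asserts only observable behavior.

```ts
// counter.test.ts — a brittle test versus a behavioral one.
// Counter is hypothetical; the contrast is the point.
class Counter {
  private ticks = 0; // implementation detail, free to change
  increment() { this.ticks += 1; }
  value() { return this.ticks; }
}

test("BRITTLE: coupled to a private field name", () => {
  const c = new Counter();
  c.increment();
  // Renaming `ticks` during a refactor breaks this without catching a real bug.
  expect((c as any).ticks).toBe(1);
});

test("ROBUST: asserts observable behavior only", () => {
  const c = new Counter();
  c.increment();
  expect(c.value()).toBe(1);
});
```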


"Because Dan Abramov said so" is not a good testing philosophy

If a test is going to take you a couple of days to write, is already mostly covered by simpler tests, and does not cover realistic use cases, it's a good sign that it may not be necessary.

Likewise, a test that takes several seconds to run because you didn't mock some expensive 3rd-party function is going to cost time for every developer. It may make sense for you as a sole developer to write this test, but now multiply the seconds that test takes by the number of times each developer runs the test suite in a day, by the number of developers at your company. It adds up quickly.
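A sketch of the fix in Jest (the module and function names are hypothetical): replace the expensive third-party call with a mock so every run stays fast.

```ts
// invoice.test.ts — keeping the suite fast by mocking an expensive call.
// "./stripe-client", fetchCharges, and generateInvoice are hypothetical; the
// real fetchCharges might take seconds and hit the network on every run.
import { generateInvoice } from "./invoice";

// jest.mock calls are hoisted, so ./invoice gets the fake module.
jest.mock("./stripe-client", () => ({
  fetchCharges: jest.fn().mockResolvedValue([{ amount: 4200 }]),
}));

test("invoice sums charges without hitting the network", async () => {
  const invoice = await generateInvoice("customer-123");
  expect(invoice.total).toBe(4200);
});
```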

If your tests are written in such a way that every little change to the codebase needlessly requires refactoring one or more tests, they are definitely not time savers. This is my problem with snapshot testing. These kinds of tests make us feel "safe", but they don't actually save time or make our code any less error-prone.
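For contrast, a sketch with a hypothetical PricingTable component: the snapshot fails on any markup change, meaningful or not, while the targeted assertion only fails when something a user actually cares about changes.

```tsx
// pricing.test.tsx — snapshot churn versus a targeted assertion.
// PricingTable and the "$99/mo" copy are hypothetical.
import { render, screen } from "@testing-library/react";
import "@testing-library/jest-dom";
import { PricingTable } from "./PricingTable";

test("snapshot: fails on ANY markup change, meaningful or not", () => {
  const { container } = render(<PricingTable />);
  // A renamed class or an extra wrapper div invalidates this, and most
  // developers just press `u` to regenerate it without looking.
  expect(container).toMatchSnapshot();
});

test("targeted: fails only if the price a user sees changes", () => {
  render(<PricingTable />);
  expect(screen.getByText("$99/mo")).toBeInTheDocument();
});
```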

I think what Guillermo is getting at is that tests can get messy, and a few well-thought-out ones will give you most of your results. Tests, like software and many other fields, tend to follow the 80/20 principle: 20% of your tests will end up giving you 80% of the results. Don't mindlessly write tests for the sake of writing them, to reach some arbitrary coverage number, or because you saw an image of a pyramid telling you how important unit tests are.

(Image: the classic testing pyramid)
Take these diagrams with a grain of salt

Instead of asking fellow engineers to always write tests, make sure they understand why they're writing them. 100% code coverage does not mean your code is 100% safe: a test can execute every line while asserting a scenario that will never occur in reality, and still count as full coverage. From personal experience, not enough time is spent talking about what good tests look like.

So besides awareness, how do we use this concept to better our development? Through consistent reviews and reflection on the tests we write. Is a piece of code creating a disproportionate amount of maintenance work for you? Maybe it's time to refactor and write better tests for it.

Rather than focusing on unhelpful metrics or rigid rules, treat testing as a continual process of learning and improvement. Tests deserve real attention and should not be treated as second-class citizens. Everyone should spend time refactoring and reviewing them, discarding ones that don't test essential pieces of your software, and improving slow ones. Scrap your next stand-up and have a test-up instead; it'll be a much more productive use of developer time.
