Maxence Poutord

Posted on • Originally published at maxpou.fr

10 Tips For Writing Better Tests

#1 - Think documentation

Your company wiki can be outdated. Tests can't.

Everybody wants good, well-written documentation. Unfortunately, it's hard to keep up to date. I believe tests can, in a way, replace part of the documentation: they provide a good description of how your application should behave.

Let me show you an example I could write:

// Shop.spec.js
it('should show a list of products', () => { /* ... */ })
it('should hide non-available products', () => { /* ... */ })
it('should display a discount label when available', () => { /* ... */ })
it('should paginate when > 10 products', () => { /* ... */ })

Just by reading the name of the file and the four descriptions, I already know what this feature does and some of its business rules.

Also, if the "should hide non-available products" test breaks, I know exactly where to start investigating.

#2 - Isolate your tests


To better organise your test file, you can group related assertions under the same test (= the same it()/test()). Try to keep your tests reasonably small: smaller tests are easier to understand and easier to maintain.

Sometimes we need to test a full process, like a payment flow with a few different steps (e.g. payment info > user address > confirmation).

If you hesitate between one big test and one test per step, where "step 2" depends on "step 1", go for the single big test. It's OK to have some longer tests.

Sharing state between tests is a bad idea. Such tests are hard to debug, and sooner or later you will face weird issues caused by side effects. Always keep your tests isolated: tests should not depend on each other.
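For instance, instead of sharing one object across tests, each test can build its own (in Jest, this is typically what a beforeEach hook is for). A minimal framework-agnostic sketch, with a hypothetical createCart factory:

```javascript
// Hypothetical cart factory: each test creates its own instance,
// so no test can leak state into another.
function createCart() {
  const items = [];
  return {
    add(item) { items.push(item); },
    get count() { return items.length; },
  };
}

// "should add a product" - starts from a fresh cart
const cartA = createCart();
cartA.add('book');
console.log(cartA.count); // 1

// "should start empty" - also fresh, unaffected by the test above
const cartB = createCart();
console.log(cartB.count); // 0
```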

#3 - Keep it flat

I tend not to write any conditions (if/else/...) in my tests. If your test has an if condition, you are probably testing two different things, which means your test may have too many responsibilities.

Remember the S of SOLID (S = Single-Responsibility Principle). If it makes the coffee, toasts your bread, and fetches the forecast... there might be something wrong.
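Here is what "keeping it flat" looks like in practice, using a hypothetical applyDiscount function: instead of one test whose if/else covers both branches, write one flat test per behaviour.

```javascript
// Hypothetical function under test.
function applyDiscount(price, discount) {
  return discount ? price * (1 - discount) : price;
}

// Flat test #1: "should apply the discount when one is set"
const discounted = applyDiscount(100, 0.5);
console.log(discounted); // 50

// Flat test #2: "should return the full price without a discount"
const fullPrice = applyDiscount(100, 0);
console.log(fullPrice); // 100
```

If one of these fails, the test name alone tells you which behaviour broke; a single test with branches can't do that.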

#4 - Only mock what you can't control

Every time we mock, we diverge from the real-world scenario.

For this reason, you should avoid mocks as much as possible. But sometimes we have no other option. Here are the only exceptions:

  • external API calls (HTTP GET/POST/...);
  • browser APIs (local/session storage, navigator...);
  • ... and time-based things (dates, random methods...).

Talking about time, if you're testing a timer, don't block the pipeline for the x seconds required by the timer. Mock the clock!
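In Jest, mocking the clock means jest.useFakeTimers(). The underlying idea is that the test, not the wall clock, controls time. A framework-agnostic sketch with a hypothetical createSession that accepts an injected clock:

```javascript
// Hypothetical session with a time-to-live. The clock is injected,
// so a test can fast-forward time instead of sleeping for real.
function createSession(ttlMs, now = Date.now) {
  const expiresAt = now() + ttlMs;
  return { isExpired: () => now() >= expiresAt };
}

// A fake clock the test can move forward instantly.
let fakeNow = 0;
const session = createSession(30_000, () => fakeNow);

console.log(session.isExpired()); // false

fakeNow = 30_000; // "30 seconds later" - no waiting involved
console.log(session.isExpired()); // true
```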

#5 - Avoid assertion in loops (forEach/...)

A common error is to test a list in a loop.

Example: I want to test a page that loads 30 items.

import fakeDataProducts from '../fixtures'

it('should show the list of products', async () => {
  const wrapper = await render(ProductList)
  fakeDataProducts.products.forEach(product => {
    expect(wrapper.find(product.name))
      .toBeInTheDocument()
  })
})

What's wrong here?

  • Too many assertions: multiple assertions can slow down the execution of the test suite.
  • Non-valuable tests: asserting the presence of one item can be a good idea, but testing the presence of every single item doesn't add much value.

Exception: when each element of the array represents a different use case.
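A possible rewrite of the test above: assert the number of items once, plus one representative item. In this sketch a plain array stands in for the rendered DOM, and the fixture names are made up:

```javascript
// Fixture data standing in for the imported fixtures (names are made up).
const products = [
  { name: 'Laptop' },
  { name: 'Keyboard' },
  { name: 'Mouse' },
];

// Stand-in for the rendered DOM: the list of displayed names.
const renderedNames = products.map(p => p.name);

// One assertion on the count, one on a representative item,
// instead of one assertion per product.
console.log(renderedNames.length); // 3
console.log(renderedNames.includes('Laptop')); // true
```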

#6 - Test your app in the same way as a user will use it

The more your tests resemble the way your software is used, the more confidence they can give you.

β€” Kent C. Dodds

Let's say you want to log in on your favourite social network. You fill in your username and your password. And after that, do you search the DOM for the button with id="login-form-btn", or do you click on the button called "Log In"? So why do your tests do it differently?

If the keyword "Log In" is already present on the page, you can query the element by its accessibility attributes (e.g. aria-label). This way, you also enforce your component's accessibility.
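With Testing Library, that query would be screen.getByRole('button', { name: /log in/i }). Stripped of any framework, the idea is to select by what the user reads rather than by an internal id. A minimal sketch, where plain objects stand in for DOM nodes (ids and labels are made up):

```javascript
// Stand-ins for rendered DOM nodes.
const buttons = [
  { id: 'login-form-btn', label: 'Log In' },
  { id: 'signup-form-btn', label: 'Sign Up' },
];

// Brittle: coupled to an implementation detail (the id).
const byId = buttons.find(b => b.id === 'login-form-btn');

// User-centric: coupled to what the user actually reads.
const byLabel = buttons.find(b => b.label === 'Log In');

console.log(byId === byLabel); // true - same element, sturdier query
```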

#7 - Favour integration tests

You've probably read Martin Fowler's blog post about the Testing Pyramid. In a nutshell, it says: write a lot of unit tests (fast and cheap), some integration tests, and a few end-to-end tests (slow and expensive).

But this post was written in 2012. 8 years ago! I believe we play a different game today, with different rules.

Unit testing a Redux/Vuex store gives me zero confidence. I've done it in the past, many times... but the number of bugs hasn't changed. Why? Because bugs and regressions are generally not in small pieces of code. We find them at a higher level.


(Unit vs. Integration tests)

Integration tests are more expensive than unit tests, but they're worth the trouble.

#8 - Avoid implementation detail

Let's say you want to test a "like button". You might be tempted to do something like this:

const wrapper = mount(<LikeButton />)
wrapper.find('Like').simulate('click')
expect(wrapper.state().liked).toBe(true)


What's wrong here?

  • False positives (aka false alarms): it looks like an error, but it isn't. Let's say you want to refactor the component. By refactoring, I mean changing the implementation, not the behaviour or the contract (props, events...); otherwise it's not a refactor. If you rename liked to isLiked, the test will fail, and it should not. We write tests to fail when something goes wrong, and nobody cares about a variable name! It's a different story when it comes to behaviour. Only test what matters!
  • False negatives (the test passes but it shouldn't): instead of verifying the output, we check the internals, so we can't be sure the output still works as expected. This happens because we didn't test the app the way our final user will use it. Our users, and the other devs who are going to use this component, don't really care about the magic inside it.

To avoid testing implementation details, treat your component as a black box: only test the inputs and the outputs.


The same test, without implementation details, could be:

const spy = jest.fn()
const wrapper = mount(<LikeButton onClick={spy}/>)
wrapper.find('Like').simulate('click')

expect(wrapper.prop('aria-label')).toEqual('liked')
expect(spy.mock.calls[0][0]).toBe(true)

#9 - Always green


I remember when I joined a company a few years ago. On day one, I couldn't start the application. After some investigation, it turned out someone had forgotten a ; in a SQL file and pushed it to master. The application was broken because of a goddamn semicolon. With basic tooling, this kind of problem should never happen.

If you ask a dev to manually run the tests before opening a pull request, it's not gonna work. Humans are fallible. Plus, it's a dumb job, so better to delegate it to a robot: robots are cheap and never lie. An automated job will ensure your tests are always green.

Because I don't even trust myself, all of my side projects have CI enabled by default. The only way for me to merge my own code is to open a pull request and get a green build.

#10 - Write tests for confidence, not for metrics

"When a measure becomes a target, it ceases to be a good measure"

- Goodhart's law

I see a lot of people practising Coverage-Driven Testingℒ️, because their manager asked them to, or because they want to show off a "100% code coverage" badge on their GitHub repository.

In my opinion, this is a really bad idea, for a few reasons:

  • Covered code does not necessarily mean tested code;
  • High code coverage gives the illusion of quality, but you can reach a high percentage with irrelevant tests that do nothing except bump the score (cf. implementation details);
  • Devs stop writing tests for confidence and start writing them to satisfy a metric.

This last point is very important. By approaching the problem from the wrong side, we forget why we write tests. We should all write tests for confidence: confidence that we ship code that works as described, and confidence that the behaviour will remain unchanged six months later when you or a colleague works on this part of the application.

Not all code needs tests. Expecting the same amount of testing in your app's hidden settings page and in the payment page is absolute nonsense. Writing tests is an investment: if tests don't provide any visible ROI (Return On Investment), skip them. Except if you're working on a library (open source or not).
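The first reason above is easy to demonstrate: a test can execute 100% of a function's lines without checking anything meaningful. A contrived sketch (divide is hypothetical):

```javascript
// Every line of this function is "covered" by the code below...
function divide(a, b) {
  return a * b; // bug: should be a / b
}

// ...yet the only check is on the type, so the bug slips through.
const result = divide(10, 2);
console.log(typeof result === 'number'); // true - coverage: 100%, confidence: 0
```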


And you, what's your golden rule for testing?


Thanks for reading this article 🀘. I hope you found it useful! If you liked it, please give it a ❀️ or a πŸ¦„! Feel free to comment or ask questions in the section below or on Twitter (@_maxpou).

Originally published on maxpou.fr.

Top comments (5)

Michael Smith

Great post. I used to be guilty of writing coverage-driven tests because that's what my manager wanted (really bad manager). While writing them and running Istanbul, I saw coverage increase but the test quality didn't make sense to me. He just wanted to see green numbers.

Maxence Poutord

Hey Michael, glad to see I'm not the only one thinking this :)
Out of curiosity, how did this project end up?

Patryk

Great post.

I don't necessarily agree with the integration vs unit tests - it all depends on context.

If you use functional programming techniques, or write a library, you'll have a lot more unit tests, and that's great. If you do web development, then it makes sense to focus more on integration testing, as the framework you're likely using will have its own unit test suite that checks the single units, and you want to test your own code, not theirs.

Maxence Poutord

Hey Patryk, thanks for the feedback.
You're right. I wrote this in the context of a web application. I should have mentioned it :)
