I've wanted to write about testing for a long time - front end testing has been a bugbear of mine for years, but my opinions kept changing until recently.
I used to think "everything must have a unit test!" to the extent that I wrote unit tests for code that didn't even do anything. I even wrote tests for dummy data! The point is that for a long time I didn't give integration or end-to-end testing a second thought.
Then I started paying more attention to the testing pyramid, and it became a pointless balancing act of quantifying my tests: "this piece of code has 3 integration tests, therefore it should have at least 15 unit tests and one e2e test". At the time this seemed perfectly reasonable - perhaps I couldn't see the wood for the trees.
Then I started thinking in terms of the testing trophy, where you think about how much effort each area deserves, or how much confidence each kind of test gives you. In my experience this leads most people to write almost exclusively integration tests.
But recently I took a step back, and things became a lot clearer without the trophies and pyramids and coverage reports. Rather than setting out to write a certain number of a certain type of test for a certain piece of code, you should examine the code and ask yourself: "what kind of test do I need?"
What kind of test do I need?
Branches
The main criterion I have for any piece of code is "does it have several branches?" If so, I should attempt to write unit tests. Integration and e2e tests usually won't cover every if statement or error path, so a unit test is the best way to cover every branch.
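As a concrete illustration (a minimal sketch using Vitest; projectStatusLabel is a made-up helper), one unit test per branch is cheap to write, whereas an e2e test would likely only ever hit the happy path:

```ts
import { describe, expect, it } from "vitest";

// Hypothetical helper with three branches worth covering.
type Status = "active" | "archived" | "draft";

function projectStatusLabel(status: Status): string {
  if (status === "active") return "Active";
  if (status === "archived") return "Archived";
  return "Draft";
}

describe("projectStatusLabel", () => {
  // One unit test per branch.
  it("labels active projects", () => {
    expect(projectStatusLabel("active")).toBe("Active");
  });

  it("labels archived projects", () => {
    expect(projectStatusLabel("archived")).toBe("Archived");
  });

  it("falls back to a draft label", () => {
    expect(projectStatusLabel("draft")).toBe("Draft");
  });
});
```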
Dependencies
The second criterion I have is "how many other pieces of code use this piece of code?" If the answer is 1, I consider whether it's worth testing in isolation, or whether I can cover it by testing the other code. If the answer is more than 1, we need to consider whether the code's coverage could get lost among its consumers' tests.
As an example, let's say we have a ProjectDropdown component. It's pretty simple: it has some integration with your form library, a hardcoded list of projects to pick from, it updates the form when you select a value, etc.
However, this component is only used by your ProjectForm component. So when we write tests for ProjectForm, will we naturally be covering all of ProjectDropdown's use cases? If so, separate ProjectDropdown tests would just repeat ourselves, and we can probably get away with testing only the form.
Now, if ProjectDropdown has some branches that ProjectForm's tests won't reliably hit, then yes, it probably needs its own tests.
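A rough sketch of what that looks like in practice (assuming React, Vitest, and Testing Library; ProjectForm, its props, and its markup are all invented for illustration):

```tsx
import { fireEvent, render, screen } from "@testing-library/react";
import { describe, expect, it, vi } from "vitest";
import { ProjectForm } from "./ProjectForm"; // hypothetical component under test

describe("ProjectForm", () => {
  it("submits the project chosen via ProjectDropdown", () => {
    const onSubmit = vi.fn();
    render(<ProjectForm onSubmit={onSubmit} />);

    // Picking a value exercises ProjectDropdown's happy path for free,
    // so a separate ProjectDropdown test would just repeat this.
    fireEvent.change(screen.getByLabelText("Project"), {
      target: { value: "apollo" },
    });
    fireEvent.click(screen.getByRole("button", { name: "Save" }));

    expect(onSubmit).toHaveBeenCalledWith({ project: "apollo" });
  });
});
```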
Confidence
My third criterion is "does a unit/integration test actually tell me anything about this piece of code?" In other words, how much confidence does the test buy me?
The example
When I'm writing a component I usually split it into several pieces: a dumb component, a smart component, and the api layer.
So the dumb component probably wants a high-level unit test. We want to confirm that the expected content is available, that interacting with elements emits the expected events, and that the edge cases and branches are all handled.
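Something along these lines (a sketch with the same assumed stack; ProjectDropdown's props and markup are assumptions):

```tsx
import { fireEvent, render, screen } from "@testing-library/react";
import { describe, expect, it, vi } from "vitest";
import { ProjectDropdown } from "./ProjectDropdown"; // hypothetical dumb component

describe("ProjectDropdown", () => {
  it("renders the hardcoded projects and emits selections", () => {
    const onChange = vi.fn();
    render(<ProjectDropdown value="" onChange={onChange} />);

    // The expected content is available...
    expect(screen.getByRole("option", { name: "Apollo" })).toBeDefined();

    // ...and interacting with the select emits the expected event.
    fireEvent.change(screen.getByRole("combobox"), {
      target: { value: "apollo" },
    });
    expect(onChange).toHaveBeenCalledWith("apollo");
  });
});
```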
The word "unit" is a bit muddy for components as you're usually writing your tests from a user's perspective, but you should also be trying to isolate your component as much possible
For the api layer, how much does it actually do? Does it have much branching? 90% of the time the api layer will be a single function that accepts some args, makes a fetch request, and returns the response. So all a unit test would tell us is that we're sending a request and returning the response. We could write an integration test, but would it tell us anything more than an integration test of the smart component?
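The kind of function I mean (a sketch; the endpoint and types are made up):

```ts
// Typical api-layer function: no real branching, so a unit test would
// only re-assert that fetch was called and the response was returned.
export async function saveProject(
  project: { name: string },
): Promise<{ id: string }> {
  const response = await fetch("/api/projects", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(project),
  });
  return response.json();
}
```

A unit test here would mock fetch and then assert that we called fetch - it mostly restates the implementation.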
You get the idea. The smart component would then get some integration tests verifying that it renders the dumb component, sends data to the api, and renders something else with the response data.
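One way that integration test might look (again a sketch; ProjectPage, the module paths, and the rendered output are all assumptions, with the api layer mocked via Vitest):

```tsx
import { fireEvent, render, screen } from "@testing-library/react";
import { describe, expect, it, vi } from "vitest";

// Mock the api layer so one integration test covers the smart component,
// the dumb component it renders, and the api wiring. Paths are hypothetical.
vi.mock("./api", () => ({
  saveProject: vi.fn().mockResolvedValue({ id: "123" }),
}));

import { saveProject } from "./api";
import { ProjectPage } from "./ProjectPage"; // hypothetical smart component

describe("ProjectPage", () => {
  it("sends the selection to the api and renders the response", async () => {
    render(<ProjectPage />);

    fireEvent.change(screen.getByLabelText("Project"), {
      target: { value: "apollo" },
    });
    fireEvent.click(screen.getByRole("button", { name: "Save" }));

    expect(saveProject).toHaveBeenCalledWith({ name: "apollo" });
    // findByText waits for the re-render driven by the mocked response.
    expect(await screen.findByText(/123/)).toBeDefined();
  });
});
```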
And then...
Another point I should make: it's difficult to follow this method while strictly adhering to TDD, as you need some understanding of your code before you can decide what kind of tests to write. I find the key is to write some simple specs first that outline the general behaviour of the feature, then refine and iterate on them as the implementation develops.