Let's face it: testing is challenging for many developers. It is one of those things we would rather avoid. I certainly know that, because I used to think the same way. But a series of challenges and setbacks changed my view. Below are some of the situations I have experienced along the way and what I learned from them. These lessons can help you on your path to becoming better at testing.
Before writing any test it's important to know why we write them. Kent C. Dodds has a great definition for that:
We write tests to be confident that our application will work when the user uses it. (source)
This relates to a situation at work. I took over a ticket and implemented the changes. Everything seemed to work, so I pushed the code. After a while, some tests in the CI pipeline failed. At first I thought the tests were wrong, but it turned out my changes had broken another part of the app. That was the first time tests had my back.
This experience taught me a good lesson: tests are worth writing. They take some effort up front, but later they can save you from making changes you never intended to make.
Testing from the user's perspective is the fundamental principle. Once I internalized it, it helped me immensely.
When you test a part of your app, look at it from the user's perspective. Think about what effect your code has on the user. The same perspective applies when choosing queries in tests.
Here are some questions to point you in the right direction:
- What does the user see?
- How can they interact with the app?
- What effects do their actions have?
Hopefully answering these questions gives you a better perspective on the user flow.
From this point you can start writing tests for the common user path. Start with simple ones and progress towards more complex ones. The next step is covering edge cases. Try to think about the different ways things could go wrong and test them.
Together, the happy path and the edge cases should cover most use cases. This will give you confidence that your app behaves the way you imagined.
At some point you will probably have trouble writing a test. There are usually two reasons for this: you are not good at testing yet, or you don't understand what the code does.
I remember when I tried to improve test coverage for an older part of the code. I quickly looked through it and dived right into testing. It didn't take much time until I got stuck. The bottleneck in this situation was my understanding of what the code does. I had to take time to thoroughly inspect the code. When I figured out what it does, I successfully finished writing tests for it.
So take time to understand the code before writing tests for a component or a part of an app.
This seems like straightforward advice, but some developers find it hard to follow. We sometimes fall into the trap of over-engineering, writing code in an overly clever way.
That can feel good in the moment but hurt you and your team members later. The same thing happens when writing tests, so be aware of this tendency and actively mitigate it.
Here are some helpful tips:
- Test only one scenario per test: don't try to force many scenarios into a single test
- Copy the code rather than abstract it: don't add abstractions or if statements when they aren't necessary. It's better to duplicate a bit of code and keep the tests clearer
- Follow a clear and well-defined structure:
- Arrange: arranging all the inputs, mocks, connections...
- Act: run the code
- Assert: set expectations, what should happen if the test runs correctly
The debugger can save you time when writing tests. Of course, you need to know how to use it to unleash its full power. It is especially useful in a large codebase containing old code.
Imagine you are testing an older component. You write a test that you think should work but it fails. You try some things and add console logs in different places, but you are still stuck.
In this situation, the debugger is your best friend. You add it to the executable part of the code and run the test in debug mode. Execution stops where you put the debugger statement. At that point, you can see all defined variables, plus controls to step over, step into, and step out. Step over the code until you reach the point where it unexpectedly fails.
Here lies the magic of the debugger. Right before the breaking point, you can inspect what is missing in the test. Did you forget to mock some function or method? Which variables are missing? Add those to the test and try again. Sooner or later you should get it running green.
A lot of articles advise: "Fail the test first, then make it green". This works well when you follow TDD (test-driven development), but for the way I work it needed some modification. My version is: "Make the test green, then make it fail, and lastly revert it back to green". This approach assures you that the test does what it is supposed to do.
Following this rule has helped me in some interesting situations. I remember once writing a test that ran green. Following the rule, I flipped the expectation to the opposite of the previous one. When I ran the test again, it was still green, which meant the test wasn't actually exercising the code at all.
You probably won't run into this situation often, but I still find the rule valuable enough to follow. It ensures that your tests are working correctly and improves your confidence in them.
Thank you for sticking with me until the end. I hope that your thinking about tests has been challenged and that you have learned something new.
Share your experiences in the comments below. I would love to read what you have learned when testing.