Gavin Macken

Developing a Client-Side Testing Strategy

Why Test?

Testing improves the quality of your product and makes it easier to scale your application: more developers can make updates reliably and push with confidence.

You know the drill: A user reports a bug, they get frustrated, perhaps they can’t use your application at all.

The bug is reported to the team, and because of its severity, it’s full steam ahead trying to resolve the problem. The sprint grinds to a standstill as your resources are sucked up and user trust erodes.

This makes things very stressful for your committed dev team, which is responsible both for resolving the bug and for not pushing fragile code in the first place.

There seems to be no easy way of handling this situation. That’s why the best thing to do is avoid it by wrapping your code in tests that must pass before deployment, capturing these bugs before they wiggle up to the surface and see the light of day.

Tests also help us find bugs more easily when they do occur and take the worry out of refactoring, eventually leading to better code.

Release with confidence, particularly when deploying native applications: these have a longer release process, so a bug that ships takes a lot of effort to fix.

The rationale for testing is solid. If you want to improve the quality of your product and need to scale, testing is required.

The Challenge

But testing is one of the more challenging tasks in front-end development.
First of all, testing is expensive: it requires time and effort, and the resulting increase in product quality can be difficult to measure at first.

Some well-thought-out manual tests could be performed and are quick to create. However, the effort needed compounds every time you release. While automated tests can take longer at first, they save time later. With that said, they still require maintenance.

We rely on developers to make a calculation, balancing the Effort vs. Impact equation: how much effort is required, and what impact that effort will have on the product.

Testing Best Practices

OK, so that's all some pretty high-level stuff. Although there is no definitive solution for your unique application, there are popular, well-thought-out testing philosophies and best practices that can help; the right approach will likely be a blend that suits your team's needs.

The best explanation I’ve heard for testing came from Kent C. Dodds’ podcast, where testing was compared to painting a wall.

If you have a wall with corners, obstacles, windows, and perhaps a door, just throwing a bucket of paint at the wall won’t suffice. You’ll waste a lot of paint on areas that didn’t need it and miss all around the corners and the door.

Instead, we need different brushes to do a good job on our wall. We need a roller to do the heavy work and achieve a lot of coverage, a large brush for the corners, and maybe a fine detail brush for the areas around your window, being careful not to splash paint on your mahogany window sills.

Cool, now let’s pick up our developer brushes.

[Image: the Testing Pyramid]

I’m sure you’ve come across this famous triangle: the Testing Pyramid. Its four layers are the front-end bible for shipping code with speed and confidence.

So what are we dealing with here?

  • Static types — A static type system and a linter can be used to capture basic errors like typos and syntax (think TypeScript).
  • Unit tests — These target the critical behavior and functionality of your application, testing the smallest unit of functionality (typically a single method or function).
  • Integration tests — They make sure everything works together correctly in harmony. Stack Overflow user Mark Simpson wrote, “Integration tests build on unit tests by combining the units of code and testing that the resulting combination functions correctly.”
  • UI tests — Also called functional tests or end-to-end testing. This is click testing of the critical paths in your application within a browser or on a device. It can be either manual or automated.
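To make the first two layers concrete, here is a minimal sketch in TypeScript. The `formatPrice` function and the `expectEqual` helper are hypothetical, standing in for your own code and a test framework's assertion API (such as Jest's `expect`): the type annotations give you the static layer, and the assertions give you a unit test of the smallest unit of functionality.

```typescript
// Static layer: the compiler rejects a call like formatPrice("abc")
// before any test ever runs.
function formatPrice(cents: number, currency: string = "EUR"): string {
  return `${(cents / 100).toFixed(2)} ${currency}`;
}

// Minimal stand-in for a test framework's assertion API.
function expectEqual<T>(actual: T, expected: T, label: string): void {
  if (actual !== expected) {
    throw new Error(`${label}: expected ${expected}, got ${actual}`);
  }
}

// Unit tests: exercise the smallest unit of functionality in isolation.
expectEqual(formatPrice(1999), "19.99 EUR", "formats default currency");
expectEqual(formatPrice(500, "USD"), "5.00 USD", "formats explicit currency");
console.log("unit tests passed");
```

In a real project the helper would come from your test runner, but the shape of the test is the same: one small unit, tested in isolation, with no browser or network involved.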

Getting the Balance

You inevitably put most of your attention toward the base of the pyramid: the unit tests. These are the easiest tests to write.
You can create a lot of these quickly, but you may find yourself asking, “What is the advantage of these tests I wrote?”

These unit tests we created are great, as they improve our test coverage. But how much do you feel they improve your confidence that the app will perform in a real-world setting?
Here’s another great reference from Kent C. Dodds, who is essentially the JavaScript testing guru. One particular line has stuck with me:

The more your tests resemble the way the software is used, the more confidence they can give you.
And that, right there, is the challenge: the Effort vs. Impact trade-off.
Unit tests are great in that they can be written with minimal effort. They’re isolated, so they don’t carry the upkeep that integration or end-to-end testing requires. But for that same reason, they can’t give you the confidence that integration or end-to-end tests provide.
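One way to move a test toward "the way the software is used" without the full cost of end-to-end tooling is to test through a module's public API rather than its internals. A small hypothetical sketch (the `Cart` class and its test are illustrative, not from the article):

```typescript
// Hypothetical Cart module: internal storage is a private detail;
// the total is the behavior a user actually sees at checkout.
class Cart {
  private items: { price: number; qty: number }[] = [];

  add(price: number, qty: number): void {
    this.items.push({ price, qty });
  }

  // Total in cents, the way a checkout screen would display it.
  total(): number {
    return this.items.reduce((sum, i) => sum + i.price * i.qty, 0);
  }
}

// The test drives only the public API, the way the app itself does.
// Refactoring `items` to a Map would not break this test.
const cart = new Cart();
cart.add(250, 2); // two items at 2.50 each
cart.add(100, 1);

if (cart.total() !== 600) {
  throw new Error(`expected total of 600, got ${cart.total()}`);
}
console.log("behavior-level test passed");
```

A test pinned to the private `items` array would pass today and shatter on the next refactor; this one buys confidence that survives internal change, which is exactly the trade Dodds' line is pointing at.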


Let's go back to that important question again: “What is the advantage of these tests I wrote?”

Kent Beck is the person credited with having rediscovered test-driven development.
“I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence … If I don’t typically make a kind of mistake, I don’t test for it.” — Kent Beck on Stack Overflow

I believe testing to perfection is a flawed approach. Rather, testing to confidence is a better indicator that you're doing enough testing.

Defining what confidence looks like will be entirely up to you and your team. For my team and our current size and situation, I promote that if something is very unlikely to fail, then I'm very unlikely to test for it.
