
Dwayne Charrington

A Hitchhiker's Guide To Unit Testing On The Front-end

Prior to authoring this post, I spent an entire month at work solely dedicated to writing tests. That is an unprecedented investment, one I have not seen at any other place I have worked.

In that dedicated month, combined with my prior testing experience, I had a few epiphanies and learning experiences that I had to put into written form. A lot of developers want to write tests, but rarely get the opportunity to learn the art of testing or the time to write comprehensive tests.

I have always been a huge proponent of testing. Sadly, I have also experienced what it is like arguing for tests and not really getting as far as I would have imagined in the process. The value of tests is undeniable. Anyone who tells you that writing tests is a waste of time doesn't know what they're talking about.

Having said all of that, this post is not going to tell you how to get your boss, stakeholders and team to understand the importance of tests, or how to convince them to buy in. I am making the assumption you are already writing tests (with or without permission) or you're about to start writing tests.

You might have one or more of the following questions once you start to dig deeper into the world of testing:

  • Where do I even begin when writing tests in a preexisting application, especially a large one?
  • Is Test-Driven Development (TDD) something I should aim for?
  • What is considered a good test?
  • Is it okay to have large test files?
  • Should I be writing end-to-end tests as well as unit and integration tests?
  • Do I need 100% code coverage? If not, what percentage of code coverage is considered enough?
  • How do I deal with external dependencies and API endpoints?
  • When should I use mocks and when should I use real code?

A Few Words About Test-Driven Development (TDD)

In an ideal world, we would write our tests before writing our code. Test-driven development is a tried and tested technique that promotes writing your tests first and then writing the code to make those tests pass.
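
To make that loop concrete, here is a minimal sketch of the red-green-refactor cycle using Jest; the `formatPrice` function is a made-up example, not code from this article:

```javascript
// Step 1 (red): the test is written first and fails, because the
// implementation does not exist yet.
test('formats a number of cents as a dollar amount', () => {
  expect(formatPrice(1999)).toBe('$19.99');
});

// Step 2 (green): write just enough code to make the test pass.
function formatPrice(cents) {
  return `$${(cents / 100).toFixed(2)}`;
}

// Step 3 (refactor): with a passing test as a safety net, the code can
// now be cleaned up without fear of breaking it.
```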

The idea behind this approach is that you end up with code that is simple, easy to read and requires little to no refactoring. The reality of TDD, however, is that it is rare you will get the chance to practise it in a consistent manner.

One of the biggest downsides to TDD is that it requires a time investment and, by proxy, a monetary one. It takes longer to implement a feature if you're writing the test first and then the code. It also might not align with methodologies like Agile (and its popular variant, Scrum), which assign points to tasks completed within a time-boxed period of around two to three weeks.

TDD requires work to be scoped and finalised

Even in workplaces that subscribe to a methodology promoting the scoping of work before it is started, we all know this is not always what happens. TDD requires the problem you're coding for to be completely scoped and then finalised.

If the specification or requirements are changing, you will have to rewrite your tests, and you can end up in a situation where you have tests but nothing to show (except some green lines in a terminal window).

The only language most stakeholders (management, customers, etc.) speak is deliverables. Have you delivered what was asked for? Is a bunch of test code with no code being tested a deliverable? In their eyes, it is not.

TDD requires buy-in

The benefits of TDD far outweigh the negatives, but getting buy-in from stakeholders, especially stakeholders who are not "tech-savvy", can be difficult. I have also worked with my fair share of developers who are of the opinion that TDD yields very little benefit over testing after development (TAD).

Even if you can get management and bosses to buy in to TDD (perhaps they were, or still are, developers), you have the task of getting your team on board as well, which isn't always easy if they have differing opinions on the matter.

If you're having to fight your own team or convince them, you've already lost.

TDD requires discipline

Even once you have managed to get people to buy in to TDD and have convinced them of the benefits, the reality is that a lot of developers have no experience in test-driven development. It's a luxury not many developers have been afforded, nor one many have asked for.

If your team is a mix of juniors, intermediates, seniors and principal-level developers, the learning curve is one thing, but the discipline TDD requires is another.

For developers, however experienced, who have not been exposed to TDD before, it will be intimidating. It's not like getting on a bike and learning to keep your balance.

I have seen experienced developers (10+ years) struggle with TDD because it's a complete and total shift from what they're used to. If you're set in your ways or used to doing things a certain way, old habits die hard, as they say.

Usually, developers at the top and bottom are the ones who struggle the most with TDD. Experience and inexperience can be a blessing and a curse.

TDD is great, but...

You're probably not going to get to do it. That is the plain and simple truth. Unless you're fortunate enough to work somewhere that practises TDD, or you have an enthusiastic team that has managed to sell it to management, you're not going to get to do it (at least not properly).

I definitely implore you to try it out in your own personal projects; it's something you should get a taste of, even if it's not on a large team-based project. But just know that you're probably not going to get to do it at work.

Going forward in this article, we are going to assume you're writing tests as you go along, or that you're working in an existing codebase with many parts already built which you are retroactively testing.

You Don't Need To Aim For 100% Code Coverage

A long, long time ago in the world of testing, code coverage was a metric put up on a pedestal, alongside other metrics we have since come to learn do not matter or are inaccurate.

When you write tests just for the sake of reaching 100% code coverage, you're ignoring one of the biggest benefits of testing: you're making your tests chase your code instead of thinking about the code itself.

Code coverage is a bit of a mirror trick. It provides the illusion that by having 100% (or close to 100%) code coverage, you're covering all of your bases and strengthening your application. Wrong.

Aiming to cover 100% of your code is not only a waste of time; you could also be testing bad code that needs to be refactored. You should never try to cover bad code with good tests. Sometimes you only know code is bad once you've written a test for it. Chicken and egg.

Sure, that authentication file which handles logging in users, creating JWTs and other facets of auth might be completely covered, but if the code in there is bad, all you're doing is making sure that bad code works the way it is written.

In most cases, I find 70-75% code coverage is the sweet spot. Sometimes code is so easy to test, you end up hitting 100% coverage without really having to try or think about it.
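
If you would rather enforce a floor than chase a perfect score, Jest can fail the run when coverage drops below a threshold. A minimal sketch of that configuration, using the 70-75% range mentioned above:

```javascript
// jest.config.js: fail the test run if global coverage falls below 70%
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      statements: 70,
      branches: 70,
      functions: 70,
      lines: 70,
    },
  },
};
```

A run that dips below the threshold exits non-zero, which makes the floor easy to enforce in CI without anyone obsessing over the last few percent.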

I Have An Existing Codebase, Where Do I Start?

In my situation, I had a codebase that was two years old with zero tests. Because of time constraints and an ever-evolving specification (user focus groups, stakeholder and customer feedback) test-driven development would never have been an option.

Even if we were to write tests, they would have become outdated or redundant quite quickly. For the first year, features were being added, removed or completely changed as testers and users provided feedback and we iterated.

I found myself scratching my head: where do I even begin, and what do I test first?

It is tempting to go straight for the low-hanging fruit, picking off some of the easiest parts first. But the reality is that testing those pieces of low-hanging fruit would have yielded very little benefit.

For example, we have an accordion component. It's simple: you give it a title and some content. Think of an FAQ screen where each question can be expanded to show an answer.

The user clicks the heading and the answer is shown by expanding the box beneath. The component has a few options, such as grouping items together so that when one is shown the rest are collapsed, or allowing all accordions to show and hide independently.

This accordion component is not crucial. It is used in a few places, but not in as many as other components. Writing tests for it would be easy and would bump up our code coverage numbers, but would it make me sleep soundly at night knowing this component is tested? No.

Worst-case scenario, if that accordion breaks, users won't be able to read FAQs. The application itself will still be functioning; users can log in and log out, and interact with other parts of the app mostly without issue.

Complexity !== Importance

Now, you're probably thinking that instead of going for the simple code you should audit your codebase and look for the biggest and most complicated pieces you can find and start there. Hold on, wait a moment.

The complexity of your code can be a red herring.

Sometimes complexity can be a sign of poorly written code, code that needs to be refactored and broken up into smaller pieces. Code that is hard to read and doing too much is a code smell.

It just so happens that bad code is a great candidate for a test. Using tests you can refactor that bad code into something better (which we will get into later on).

For your first few tests, I would not recommend going for complex code that needs to be refactored. While tests will help you do this, you want to aim for something more tangible that instantly pays off the moment you write a test.

Once you get your testing mojo, you will grow more confident and be able to tackle those slightly harder-to-test parts of your application. Refactoring requires strong tests, and that is one place where code coverage can help.

Prioritise Your Tests

An application can be broken up into three categories: non-essential, essential and critical. If your application is an online store, the non-essential parts might be tooltips showing on forms or animations on your modals. The essential parts might be image galleries for products, the ability to add items to a wishlist, or the ability to track an order using an order number.

The critical parts of your application would be a lot more serious. The ability to add an item to a cart, the ability to see your checkout, the ability to enter your payment details and place an order. For an online store, users being able to make purchases is absolutely crucial.

Your first few tests should target the critical parts of your application: the kind of parts where you know that if they fail, the business gets hurt. Examples of crucial areas to test include the following (a sketch follows the list):

  • Any code that handles payment information
  • The ability to log in or log out (in apps with authentication)
  • Code that keeps track of the items a user has put into their cart
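
As a sketch of what a first critical-path test might look like, here is a hypothetical cart module (defined in-file so the example is self-contained); even tests this plain guard behaviour the business cares about:

```javascript
// cart.test.js: first tests for a critical path.
// A minimal in-file Cart stands in for the real module (hypothetical).
class Cart {
  constructor() {
    this.items = [];
  }
  add(item) {
    this.items.push(item);
  }
  remove(sku) {
    this.items = this.items.filter((item) => item.sku !== sku);
  }
  total() {
    // Prices are in cents to avoid floating-point rounding issues
    return this.items.reduce((sum, item) => sum + item.price * item.quantity, 0);
  }
}

test('adding an item updates the cart total', () => {
  const cart = new Cart();
  cart.add({ sku: 'ABC-1', price: 2500, quantity: 2 });
  expect(cart.total()).toBe(5000);
});

test('removing the last item empties the cart', () => {
  const cart = new Cart();
  cart.add({ sku: 'ABC-1', price: 2500, quantity: 1 });
  cart.remove('ABC-1');
  expect(cart.items).toHaveLength(0);
});
```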

Endpoints and APIs

Inevitably, you will encounter a situation where you need to test some code that makes an API request to some kind of endpoint. It might be an authentication server, it might be a call to load some products for the products page. Whatever it is, you will have to write tests.

I have seen some people write quasi-integration type tests where they will actually make real API calls to a staging database comprised of non-production data. And hey, in some cases it works.

But I don't recommend allowing real API requests to be made in anything but an end-to-end test. If you're unit testing a function that loads products from an API, use mocks.
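
For example, if a hypothetical `loadProducts` function fetches data through an API client module, Jest can swap that client for a mock returning fixed data; the module names below are made up for illustration:

```javascript
// products.test.js: the HTTP layer is mocked, so no real request is made.
jest.mock('./api-client', () => ({ get: jest.fn() }));

const apiClient = require('./api-client');
const { loadProducts } = require('./products');

test('loadProducts returns the products the API responds with', async () => {
  // Static, predictable data: the test never depends on a live server
  apiClient.get.mockResolvedValue([{ id: 1, name: 'Towel' }]);

  const products = await loadProducts();

  expect(apiClient.get).toHaveBeenCalledWith('/products');
  expect(products).toEqual([{ id: 1, name: 'Towel' }]);
});
```

Because the response is hard-coded, none of the failure scenarios listed below (API down, data changed, schema broken) can affect the test.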

Tests need predictable data

The biggest disadvantage of relying on any kind of server or dependency that goes beyond the scope of the test is that it cannot be controlled.

  • What happens if the API goes down?
  • What happens if the data changes?
  • What happens if the backend team deploys a schema update and breaks the structure of the data?

For these reasons (and others probably not mentioned), dealing with real data in tests is a recipe for failure. You should always rely on mock data, the kind of data you know never changes. Tests are about predictability: inputs and outputs. If you're passing in data expecting a certain result and that data changes, the test will fail.

Mocks, Stubs, Libraries & The Curious Case of Third-party Dependencies

Much like code that makes API calls, you will encounter code relying on third-party dependencies. Some of my most recent encounters in tests have involved MomentJS and Lodash.

Here is the thing with external dependencies: if you're using something like Jest, some of them will break. Because Jest does not operate within the confines of a real browser, things can get messy really quickly.

The lack of proper support for dates in a virtualised browser environment when testing with something like Jest is also a problem. This is where mocks come into play, and if you're using Jest, its support for mocking and stubbing dependencies in your application is world-class.

Fortunately, if you use Jest, there are many community-authored mocks and libraries which add support for mocking browser APIs and libraries like Lodash.
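
Dates are a classic example. One approach (a sketch assuming Jest 27 or newer, where modern fake timers are the default) is to pin the system clock so anything built on `Date`, MomentJS included, sees the same instant on every run:

```javascript
// Pin "now" so date-based code is deterministic across machines and runs
const moment = require('moment');

beforeAll(() => {
  jest.useFakeTimers();
  jest.setSystemTime(new Date('2020-01-01T00:00:00Z'));
});

afterAll(() => {
  jest.useRealTimers();
});

test('moment() reflects the pinned system time', () => {
  expect(moment().toISOString()).toBe('2020-01-01T00:00:00.000Z');
});
```

The same pinning works for code that uses the native `Date` directly, so one technique covers both.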

What constitutes a "good test"?

This is the million-dollar question. A good test can be a lot of things, but what I personally believe constitutes a good test is, first and foremost, how easy it is to read.

One thing I like to do in my tests is use comments explaining what I am testing. Yes, in most cases, if you're testing good code it should be clear. But I find comments explaining what I am trying to do useful, especially when I have to revisit large tests later on, or when other developers have to read them.

Repetitive code should be abstracted. Sometimes you will have code that gets reused across different tests. You could duplicate it, but I find that repeated code should be moved into a function. Case in point: a function responsible for staging your component is a great candidate. It makes your life easier if you have to change it later.
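
Here is a sketch of that kind of abstraction; the accordion below is a tiny made-up stand-in so the example runs on its own:

```javascript
// A tiny stand-in component so the sketch is self-contained (hypothetical)
class Accordion {
  constructor({ title, expanded }) {
    this.title = title;
    this.expanded = expanded;
  }
  isExpanded() {
    return this.expanded;
  }
}

// The repeated staging code lives in one function instead of in every test;
// if the defaults change later, there is a single place to update.
function stageAccordion(overrides = {}) {
  return new Accordion({ title: 'FAQ', expanded: false, ...overrides });
}

test('starts collapsed by default', () => {
  expect(stageAccordion().isExpanded()).toBe(false);
});

test('can be staged expanded', () => {
  expect(stageAccordion({ expanded: true }).isExpanded()).toBe(true);
});
```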

Last but not least, the most important thing about a good test is not blindly following the code. Throughout that month I spent writing tests, I encountered several instances where the code was really hard to test (side effects, tight coupling). I could have hacked my way around those issues and got the tests working, but it would have been the wrong thing to do. A good test doesn't just test bad code; it improves it.

Instead, I refactored the code in question until it reached the point where it was no longer difficult to test. The end result was code that is much easier to read, and fewer lines of code in the app overall (win-win).

It's okay to have long tests, but...

This is another one for the question pile: is it okay to have tests which are long? I've seen test files upwards of thousands of lines of code, and the answer is yes, but...

A large test can be a good indication that the code you're testing needs to be broken up. Unless you're testing code that has many different edge cases and flows, there is a good chance your large test is alerting you to the fact that the code you're testing is too tightly coupled or not broken up enough.

Sometimes a test just ends up being large because you're thoroughly testing all branches and statements. You shouldn't obsess over irrelevant metrics, but you shouldn't ignore the warning signs of code that needs to be changed either.

Conclusion

Many would agree that having tests is better than no tests. There are a lot of opinions and a lot of confusion surrounding testing on the front-end. Educate yourself, but don't blindly follow the advice of one person on the subject.

Latest comments (1)

thedevkim

This is true. I don't really like companies looking for a developer with TDD expertise and Agile. I mean, it puts a lot of pressure doing TDD with Agile, especially when you are just given a limited amount of time.