
Are old unit tests useful?

Dustin King · 1 min read

A little while ago, I had some thoughts about old unit tests, which I posted on Twitter. Rather than restate them, here are the tweets:

Disclaimer: I'm not telling you to throw out all your old unit tests!

However, it has been my subjective experience that pre-existing test suites have caused me as much trouble as they've saved. On the other hand, maybe they prevented even worse problems I wasn't aware of. That's the problem with legacy code, though: you never know what it's okay to change.

Unit tests are often touted as a form of executable documentation. This is only true if they're written for the reader (or at least cleaned up/refactored for them after the fact).
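What "written for the reader" might look like in practice: a minimal sketch (the `Cart` class and test names are hypothetical, invented purely for illustration) where each test name states one fact about the system, so skimming the suite reads like a spec.

```python
import unittest

# Hypothetical shopping-cart module, used only for illustration.
class Cart:
    def __init__(self):
        self._items = []

    def add(self, name, price, quantity=1):
        self._items.append((name, price, quantity))

    def total(self):
        return sum(price * qty for _, price, qty in self._items)


class TestCartDocumentsBehaviour(unittest.TestCase):
    """Each test name states one fact; the suite doubles as a spec."""

    def test_total_of_an_empty_cart_is_zero(self):
        self.assertEqual(Cart().total(), 0)

    def test_total_multiplies_price_by_quantity(self):
        cart = Cart()
        cart.add("pencil", price=2, quantity=3)
        self.assertEqual(cart.total(), 6)
```

A reader who has never seen `Cart` learns its contract from the test names alone, which is the promise of "executable documentation."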

What has your experience with pre-existing test suites been like? Have you encountered a really well-written test suite that lived up to the promise of "executable documentation"? Is there a better way to approach testing and existing tests?

Posted on Aug 27 '18 by Dustin King (@cathodion)

Discussion


I think that - as you've alluded - properly written tests should document a feature, so that as you read the test, you understand the 'what' and the 'why' of the system's behaviour.

Then, as you write new features, those tests are stopping you from breaking old features and regressing old bugfixes.
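One way to make that regression-pinning role visible is to tie a test to the bug it guards against. A minimal sketch, assuming a hypothetical `normalize_username` function and an invented bug report:

```python
def normalize_username(name):
    # Hypothetical production function. Suppose trailing whitespace
    # once caused duplicate accounts ("alice" vs "alice ").
    return name.strip().lower()


def test_regression_trailing_whitespace_issue():
    # Pins the fix: if someone removes the .strip() call during a
    # refactor, this test fails instead of the bug quietly returning.
    assert normalize_username("Alice ") == "alice"
```

Deleting a test like this is exactly the risk: the old bug can come back without anything going red.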

If the tests aren't doing that, they're perhaps not great tests!

Of your premises 1 and 2, I disagree with 1! I trust my colleagues' tests; the effort I have to put into reading and understanding them is worth it.

Of your outcomes a, b, and c, I've got a feeling b and c are true ;)

This is a thought-provoking post! Thanks for keeping us thinking :)

 

I find regression tests pretty valuable no matter who wrote them. Ideally they're pretty readable, and I find that to be the case with RSpec in general. So for my part, I'm a fan.

One thing that helps is tools that audit the test suite for simple things like what is covered and how. If I can get a high-level glimpse of what the test suite is trying to do I have a better overall understanding of what is going on.
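For a Python project, one common way to get that high-level glimpse is coverage.py (the exact commands below assume pytest and coverage.py are installed; adapt to your own test runner):

```shell
# Run the suite while recording which lines execute
coverage run -m pytest

# Per-file summary, with the line numbers that were never hit
coverage report -m

# Browsable HTML report written to htmlcov/
coverage html
```

Coverage numbers only say what ran, not what was asserted, but they are a quick audit of where a legacy suite does and doesn't reach.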

 

I work with a lot of legacy code that rarely has any tests associated with it. I often wish the original developers had written unit tests I could use, even if these tests had less than ideal coverage or clarity. The way it works now, I have to spend a lot of time trying to figure out stuff that a simple battery of tests would explain, especially now that almost all of the original developers are no longer around.

For my own personal code, I like having a unit test project where I can easily verify the changes I'm making don't break anything and regain understanding of what I had written. For example, just this week I dusted off a component I had written about 5 years ago to get some ideas for a new project that's in our backlog. Running through the tests allowed me to quickly become reacquainted with the code and how it worked.

 

I think an essential part of the red-green-refactor cycle is that, at the refactor stage, you should look at the test code too.

Too often people will happily refactor the production code (actually a lot of people ignore this too) but then leave the tests a mess.

Tests have a cost, as this article alludes to. Therefore you should question them as you refactor and develop the system.

A simple question I ask myself when I am working in a particular test suite is:

If I delete this test, am I confident that the system can't be broken by myself or any of my colleagues in the future?

If I am confident, delete it! Otherwise it has value; keep it.

Question the value of your tests and keep refactoring them.

 

I think an essential part of the red-green-refactor cycle is that, at the refactor stage, you should look at the test code too.

That's a good point, and I agree. But when you're refactoring application code, you can rely on the tests failing if you break something (if you have good tests). But if you break a test, it might not fail. Do you temporarily break the app code again to make sure the tests fail in the same way?
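That "a broken test might not fail" failure mode is easy to demonstrate. A minimal sketch (the `apply_discount` function is hypothetical) showing a test that can never go red, alongside its fixed counterpart:

```python
def apply_discount(price, percent):
    # Hypothetical production code under test.
    return price * (1 - percent / 100)


def test_discount_silently_broken():
    # Broken test: the comparison is a bare expression, so its result
    # is computed and discarded. This test passes no matter what
    # apply_discount returns.
    apply_discount(100, 10) == 90  # missing `assert`


def test_discount_fixed():
    assert apply_discount(100, 10) == 90
```

Temporarily re-breaking the production code (say, making `apply_discount` return `price` unchanged) and confirming every related test goes red is exactly the manual check described above; mutation-testing tools (e.g. mutmut, in the Python world) automate that idea by applying many small breakages and reporting which tests never notice.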