Roland Weisleder

Originally published at linkedin.com

Refactoring Legacy Code: Can We Trust Existing Tests?

Recently, I saw an interesting question on LinkedIn: "When making a moderately large refactor, should we trust the tests we already have, or should we drive our refactor with new tests?"

As an independent consultant who brings legacy systems into the future, I naturally have a strong interest in this question. First of all: in legacy code, tests do not always exist, and where they do, coverage is often low. My first step is therefore always to verify that tests exist and to evaluate their coverage. It is important that all critical areas, especially those about to be refactored, are well covered by tests. If they are not, adding or improving tests is essential.
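
As a sketch of that last step: suppose the coverage report shows a critical pricing method is untested. A minimal JUnit 5 test pinning its current behaviour might look like this (LegacyPriceCalculator and its discount rule are hypothetical examples of mine, not something from the original post):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.math.BigDecimal;
import org.junit.jupiter.api.Test;

class LegacyPriceCalculatorTest {

    // Hypothetical: puts a safety net under an uncovered critical path
    // before we start refactoring it.
    @Test
    void bulkOrdersGetTenPercentDiscount() {
        LegacyPriceCalculator calculator = new LegacyPriceCalculator();

        // 100 units at 2.00 each, minus the assumed 10% bulk discount = 180.00
        BigDecimal total = calculator.totalFor(100, new BigDecimal("2.00"));

        assertEquals(new BigDecimal("180.00"), total);
    }
}
```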

Understanding the tests is just as important as having them. If they do not accurately reflect the business requirements, I focus on refining them or writing new ones. This approach helps me not only to understand the codebase but also to build confidence in its reliability. In essence, getting to know the code through its tests is important to me.
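
One technique for getting to know code through tests, not named in the post but fitting here, is what Michael Feathers calls a characterization test: assert whatever the code actually does today, even when that behaviour looks odd. A hypothetical sketch:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class LegacyDateFormatterTest {

    // A characterization test asserts the behaviour the code actually has,
    // not the behaviour we wish it had. LegacyDateFormatter and its odd
    // output are invented for illustration.
    @Test
    void currentlyRendersMissingDatesAsDashes() {
        // Surprising, but this is what production does today; the refactoring
        // must preserve it until the business decides otherwise.
        assertEquals("--.--.----", LegacyDateFormatter.format(null));
    }
}
```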

When it comes to the refactoring itself, the goal is to change the structure without changing the business logic. This is where the IDE and its automated refactoring tools are invaluable: they are far less error-prone than moving code around by hand.
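
As an illustration of what such a tool-driven refactoring does (the code is my own hypothetical example), here is an "extract method" refactoring as an IDE would perform it, with identical behaviour before and after:

```java
import java.math.BigDecimal;
import java.util.List;

// Hypothetical domain types, just so the example compiles.
record OrderLine(BigDecimal price, int quantity) {}
record Order(List<OrderLine> lines) {}

class PricingBefore {
    // Before: one method doing everything.
    BigDecimal total(Order order) {
        BigDecimal sum = BigDecimal.ZERO;
        for (OrderLine line : order.lines()) {
            sum = sum.add(line.price().multiply(BigDecimal.valueOf(line.quantity())));
        }
        return sum;
    }
}

class PricingAfter {
    // After the IDE's "extract method" refactoring: same behaviour,
    // but the per-line calculation now has a name.
    BigDecimal total(Order order) {
        BigDecimal sum = BigDecimal.ZERO;
        for (OrderLine line : order.lines()) {
            sum = sum.add(lineTotal(line));
        }
        return sum;
    }

    private BigDecimal lineTotal(OrderLine line) {
        return line.price().multiply(BigDecimal.valueOf(line.quantity()));
    }
}
```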

So, to answer the original question: first, we need enough tests we can trust. That trust comes from sufficient knowledge of the code unit being refactored, which we can gain, for example, by writing new tests. Automated tools then make the refactoring itself less error-prone.

This post was originally published on LinkedIn. Follow me there for more tips on working with legacy systems.

Top comments (1)

Phil Ashby

Given that almost all software engineering is refactoring legacy code (it's legacy as soon as it's shipped šŸ˜), I think this is an excellent question! It generalises from 'are legacy tests still valuable?' to 'are any of our tests still valuable?', particularly given the balance to be struck between the value of tests and the cost of maintaining them (in time and stress).

I'd be interested to know how you go about checking that tests reflect business requirements (notoriously slippery things!), as I think this is the hardest part of the problem.

<opinion>
Personally, I like to think of tests as falling into one of two categories: internal tests (typically unit and integration tests), which give the local team confidence that their code reflects an internal design; and external tests (typically user-acceptance and contract tests), which give consumers of the system confidence that it does what they expect. Given these two categories, I often suggest that internal tests are transient: once they are passing, they can be retired/removed (remember, they are still in the change history; be bold with the git rm commands!), leaving only external tests, which must be validated with customers/consumers on a regular basis. This offers an opportunity to remove features that are no longer used, along with their tests.
</opinion>