
The myth of "never going back to fix it later"

Ben Halpern ・ 1 min read

There's a piece of received wisdom among senior engineers: if you take the shortcut now, expecting to "fix it later," you'll never fix it and it will live on forever.

There's a lot of truth to this in general, but having consistently overcome it throughout my work, I take issue with it: it's only true if the organization does not value refactoring and slowing the pace when needed.

Having a culture that values going over past work and improving on the shortcuts opens up an organization to make a lot of practical choices in the moment while knowing there will be an opportunity to fix what was broken later.

Knowing nobody will return to improve the code is a self-fulfilling prophecy. Engineers need to fight for a culture that allows for regular refactoring, cooldown work, and general maintenance in times when more information is known about the model we are developing for.

Happy coding ❤️


Discussion

 

Totally agree on that. If you have a team that cares about what they're doing, they will refactor, unless the company creates an environment where it's difficult: constant death marches, short deadlines for the "good stress", and so on.

If the team itself doesn't want to refactor anything, somebody should look at the hiring process (which is, I think, less than perfect in our industry).

 

Right on! :) I regularly remind my team that refactoring and bugs are a natural (and necessary) part of the development process.

Although there are exceptions, "you always throw the first one away" tends to be largely valid. The first attempt at solving a problem is often an effective proof-of-concept, during the building of which the team comes to better understand the hidden problems and snarls that will need to be solved. There's always a canyon of difference between "working code" and "good code", but it's hard to write "good code" for a problem that isn't nailed down beyond its original specification.

But once the first attempt is reasonably functional, or even if it hits an impenetrable roadblock, the team is in a better headspace to then go back and refactor it. Workarounds are replaced with better matching design decisions. Bugs are fixed as a side effect of improved architecture. Because the problem is better defined, so is the solution.

Even once a "good" version is shipped, the code is never truly "done". I really try to emphasize making design decisions that leave room for later improvement: DRY techniques, SOLID classes, separation of concerns, pure functions, loose coupling to implementation.
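One way to read "loose coupling to implementation" is depending on a small abstraction instead of a concrete class. A minimal sketch, assuming Python and invented names (`UserStore`, `InMemoryStore`, `greeting` are all illustrative, not from the comment):

```python
# Sketch of loose coupling: greeting() depends on a tiny Protocol,
# not a concrete store, so the storage can be swapped out later
# without touching the calling code.
from typing import Protocol


class UserStore(Protocol):
    def get_name(self, user_id: int) -> str: ...


class InMemoryStore:
    """One possible implementation; a database-backed one could replace it."""

    def __init__(self, data: dict) -> None:
        self._data = data

    def get_name(self, user_id: int) -> str:
        return self._data[user_id]


def greeting(store: UserStore, user_id: int) -> str:
    # Pure function of its inputs: easy to test and safe to refactor.
    return f"Hello, {store.get_name(user_id)}!"
```

For example, `greeting(InMemoryStore({1: "Ada"}), 1)` returns `"Hello, Ada!"`, and a later implementation only needs to satisfy the same `get_name` shape.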

(And yes, it definitely helps to have a team that's willing to engage with this process. We've gotten better at finding those people in the last few years.)

 

Totally agree ;)

As you said, the knowledge you have of a problem will grow over time. I think that's a major argument for coming back to your past misconceptions and trying to improve on them. Delaying important decisions as long as you can is, I think, a good strategy too.

While it's true that "premature optimization is the root of all evil," there's certainly a lot to be said for leaving room for optimization.

In other words, while delaying important decisions, one should always be certain they aren't obstructing said future decision! It's a tricky balance, of course, but what isn't?

"premature optimization is the root of all evil" is more about performance.

I know, but the idea ports to other forms of optimization, such as scalability to a load many times greater than any realistic scenario, adding support for extensions to a single-purpose tool, and so forth. I see the Java habit of "always use double" even when the data will never need to store more than a single decimal place as an example of this, too.

 

If these problems could be fixed easily, posts like this would never exist.

 

Totally agree with this!

We just started a new practice on my team at work: one week each month is "Maintenance Week". This gives us time to go back and revisit old code, refactor, and remove stuff that's no longer used. Obviously, if a major bug comes in that week, it gets priority, but this has been a really great way to find time to "fix it later".

 

On a similar vein, I've set a new rule for tech debt:

  • When writing code, make it work, and commit.
  • Before raising an MR, clean it, and commit. If you see a TODO in the file you're about to check in, do it (or delete it), and commit.
  • When reviewing an MR, if there are glaring bugs, pass it back to the author. If you think the implementation could have been better, discuss it with the author. If the discussion results in something to do, the reviewer adds a TODO comment and checks that in before merging. If there aren't at least 2-3 commits, ask the author why.

As a result, when we estimate work, we now take into account the number of TODOs in the project, slightly increasing the estimate because we know there's more work involved than simply what the ticket says.
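Feeding that number into planning can be as simple as a quick count before the estimation meeting. A minimal sketch, assuming Python and an invented helper (`count_todos`, the paths, and the `.py` suffix are illustrative, not part of the original rule):

```python
# Sketch: count TODO markers under a source tree so sprint estimates
# can account for known debt. Convention assumed: debt is marked with
# a literal "TODO" in source comments.
from pathlib import Path


def count_todos(root: str, suffix: str = ".py") -> int:
    """Count lines containing 'TODO' across files under root."""
    return sum(
        "TODO" in line
        for path in Path(root).rglob(f"*{suffix}")
        for line in path.read_text(errors="ignore").splitlines()
    )
```

Running something like `count_todos("src")` before planning gives a rough, honest signal of how much "fix it later" is already queued up.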

I don't know how the experiment will pan out, but it's worth a shot in my book.

 

Also, refactoring is not always all-or-nothing; many things can be transformed through intermediate solutions. I think this is where a good CI pipeline with high test coverage is worth the effort, as it allows you to make many smaller fixes over time and detect any breaking changes.
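A common way to make those smaller fixes safe is a characterization test: before restructuring, pin down the current behavior, quirks included, so CI flags any accidental change. A minimal sketch, assuming Python; `legacy_price` and its pricing rules are invented stand-ins, not from the comment above:

```python
# Sketch of a characterization test. We don't assert what the code
# *should* do, only what it *currently* does, so incremental refactors
# that change behavior fail the build immediately.
def legacy_price(qty: int) -> float:
    # Quirky existing logic we must preserve while restructuring.
    return qty * 9.99 if qty < 10 else qty * 9.0


def test_legacy_price_is_unchanged():
    assert legacy_price(1) == 9.99    # single-unit price
    assert legacy_price(10) == 90.0   # bulk discount kicks in at 10
```

Once the refactor is done and the requirements are rediscovered, these pins can be replaced by real specification tests.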

 

The NASA project I worked on "went back to it later..." to turn a spaghetti codebase into an "MVC for the sake of MVC" undertaking that was, somehow, worse organized and harder to read than the spaghetti.

When things broke after Friday deploys (because of course we did them on Friday), my nagging about testing got "we'll come back to that later, we have too many features to ship" until something mission critical broke which could have killed astronauts and I stepped on toes to get testing in place.

 

The Cycle of Misery
(by Chris Raser)

The code is "too simple to test."

The code grows.
Each change is "too simple to test."

The changes start to interact, and produce strange edge cases.
Some of these edge cases are intentional, some are accidental.
Other code is built that depends on the current, undocumented behavior.
The actual requirements are lost to time.
The code is now "not really testable."

The code has become 2000 lines of if-else chains, copy-and-pasted code, and side-effected booleans.
It has half a billion possible states, and can't be understood by mere mortals, or refactored without risk to the business.

A new feature is needed.
A bright, shiny new bit of code is created, because adding anything more to the old code is madness.
The code for the new feature is "too simple to test."

 

I can agree to some degree. The company should make sure that there is time for refactoring, but it's also the duty of the developers to demand to take the time to do this. It's not something that can be applied externally, it has to be the team that fights for the time to do so.

Also, this needs at least a tiny bit of structure. If you do something and say "we'll refactor this later", it's probably not going to happen, as no one will remember all the points that should be addressed "some day". When we have to implement a dirty fix and think something should be addressed, we create a regular development ticket that gets discussed and prioritized in the next planning. So I think there is truth in the saying beyond a self-fulfilling prophecy, and it takes deliberate effort to address this.

 

I love (in the if I didn't laugh, I'd cry sense) reading code like...

// TODO: temporary hack.
// We absolutely MUST fix this after shipping v3.

And 3.0 shipped 26 years ago, in September 1994.

Not this particular comment, because it had been fixed a long time ago. But the comment wasn't removed.

Those TODO comments become even funnier when the hack has been expanded into a multi-headed hydra of behaviors that are cemented in place, and the temporary hack still has its undesirable properties (say, a memory leak, or inefficiency, or only handling the "happy path" correctly).

Weinberg's Law: If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilization.

 

Engineers need to fight for a culture that allows for regular refactoring, cooldown work, and general maintenance in times when more information is known about the model we are developing for.

"Just do it, all the time".

I have the feeling that I refactor code at least on a weekly basis, if not daily. Not necessarily within a dedicated refactoring story; most of the time it happens as part of the tasks assigned to me.

Corporate might sometimes (often) be scared by the word "refactoring" ("Oh my gosh, how much is this going to cost, again?").

Taking small steps within daily business, making refactoring part of the development process and approach rather than dedicated refactoring stories, is a win for everybody.

 

"There is nothing as permanent as a temporary solution."

The problem is not that nobody wants to fix it; the problem is allocating time to fix it. The trick is to never explicitly allocate time for fixes: make them part of something else, when there is more time to do things properly. And always aim to do things right the first time, obviously without giving "them" a choice between "good" and "quick".

The other thing is to never add "TODOs". Either do it now, or don't mention it.

 

I hate to admit I'm guilty of this. I need to advocate for establishing processes where we record these shortcut fixes and assign time to go back and fix them.

Thanks Ben! :)

 

I refactored four-year-old Perl today.