Josh Branchaud

Living Notes on Working Effectively with Legacy Code

This is a living document of my notes as I read through Working Effectively with Legacy Code.


Chapter 1: Changing Software

I love this phrase: "the mechanics of change"


Here are the four reasons the author lists for changing software:

  • Add a feature
  • Fix a bug
  • Improve the design
  • Optimize resource usage

One that I think is missing from this list is "remove a feature."


Feathers suggests that refactoring involves changing code that is backed by tests. Can you refactor without tests? What does that look like? Or maybe it just goes by a different name, such as introducing new bugs.


The big, unspoken part of changing code is that we intend to preserve the functionality of the vast majority of the code.

Most of the system's behavior will not be affected by a change. Can we identify the portions that are at risk when making a change? There is the behavior that we intend to change, and perhaps we know to observe some co-located behavior, but what else do we need to check? How wide do we need to cast the net when checking for regressions from our changes?

[Image: chart of changing behavior]

Chapter 2: Working with Feedback

Two ways to change a system:

  • Edit and Pray
  • Cover and Modify

"Working with care doesn't do much for you if you don't use the right tools and techniques."

There is always context to consider when saying something like "right tools and techniques." Still, the point matches my experience, which has shown over and over that "Edit and Pray" is a slow and fraught way to develop software.


With all the best practices, conventions, and dogma that surround the task of testing software, we rarely step back and simply ask, "What would some tests look like that would cover the code and bring us into a tight feedback loop about the effectiveness of our changes?"

[Image: tests as a feedback loop]


Page 10, I love the way he calls attention to these two approaches. It isn't just, "I wrote some tests." The approach matters.

"Testing to attempt to show correctness."

vs.

"Testing to detect change." (regression testing)


Tests (regression tests) act as a software vise to hold our code in place so that we are in control of what we are changing.

We all know the feeling of trying to make a small change and watching it cascade into other parts of the application. It is a feeling of not having control. This is one of the purposes tests can serve: to rein that control back in.
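
Here's what that vise might look like in practice -- a minimal RSpec sketch, assuming a hypothetical `Invoice` class. The test makes no claim that the current behavior is correct; it just pins the behavior down so any change to it gets detected.

```ruby
# A test to detect change: assert what the code does today,
# not what we think it should do.
RSpec.describe Invoice do
  it "pins down the current behavior of #total" do
    invoice = Invoice.new(line_items: [3.335, 2.0])

    # If a change alters this result, the vise grips.
    expect(invoice.total).to eq(5.34)
  end
end
```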


Unit-level testing provides a tight feedback loop.

Chapter 3: Sensing and Separation

Page 21 has a great description of a scenario it is easy to find yourself in when writing tests. You try to instantiate an object that you want to test, but in order to do that you have to set up some other object, which requires setting up another one, and so on. Suddenly you find that you have most of the system/application in the test harness.

This is why integration tests and feature specs can be so hard to set up and write. I can think of many Rails feature specs that became too cumbersome to write because you had to do 30 lines of carefully orchestrated FactoryBot and ActiveRecord creates.


Tests shouldn't be really hard to write, which is not to say that writing tests is easy. If a test feels hard to write, or if you're having to jump through several hoops, then something is probably wrong. Maybe one or both of these:

  • The scope of the test is too broad
  • The code under test is too complex, whether due to its logic or its number of dependencies

This section uses some outmoded Java class and method declarations. It gets the point across, but I'd love to come up with a code example from a Rails or JavaScript codebase that illustrates sensing and separation.


Sensing - can you sense the effects of your calls to the subject/object? You need to be able to do this in your test in order to make an assertion.

Separation - are you able to test the subject/object apart from large sections of or even the rest of the application?

The book has a bunch of techniques in the back for dealing with separation. The main tool for dealing with sensing is fake collaborators.
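
Taking a swing at the wish above, here's a hedged Ruby sketch of both ideas (all of these class names are made up). Injecting the collaborator buys separation; having the fake record what it is told buys sensing.

```ruby
# Fake collaborator: stands in for a class that might talk to real
# hardware or a network service.
class FakePrinter
  attr_reader :lines

  def initialize
    @lines = []
  end

  def print_line(text)
    @lines << text # record the effect so a test can sense it
  end
end

class ReceiptBuilder
  def initialize(printer)
    @printer = printer # separation: the collaborator is injected
  end

  def print_total(amounts)
    @printer.print_line("Total: #{amounts.sum}")
  end
end

printer = FakePrinter.new
ReceiptBuilder.new(printer).print_total([3, 4])
printer.lines.last # => "Total: 7" -- sensing the effect of the call
```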


Page 23, referring to dependencies as collaborators -- collaborators is a good word for all the different objects that make up our codebase because they all work together to provide some kind of software solution.


Unit tests aren't able to prove that your system works end-to-end. That's okay; we don't expect them to. They do provide assurances about a bunch of small parts of the system. They also help localize errors, which can save a ton of time debugging and fixing those errors.

Page 26, "When we write tests for individual units, we end up with small, well-understood pieces. This can make it easier to reason about our code."


Any recent tests that I've written that use mocking have involved observing what method was called with what arguments on some object. This Fake Collaborator and Fake Object example involves setting up a sensing method (getLastLine) and then writing the stubbed method in such a way that it feeds what you want to observe to the sensing method.

I suspect that the mocking modern testing libraries provide does essentially this behind the scenes, or perhaps eliminates the need for it.

I wonder if there is any benefit or tradeoff to the kind of manual fake-object stubbing described here versus the mocking utilities that modern testing libraries offer.
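
Here's my rough attempt at both styles side by side in Ruby/RSpec (the display classes and `show_line` are hypothetical). The first mirrors the book's getLastLine approach; the second leans on the library to do the recording.

```ruby
# Style 1: hand-rolled fake with a sensing method, like getLastLine.
class FakeDisplay
  attr_reader :last_line # the sensing method

  def show_line(line)
    @last_line = line
  end
end

display = FakeDisplay.new
display.show_line("hello")
display.last_line # => "hello"

# Style 2: let RSpec's mocking utilities record the calls instead.
RSpec.describe "sensing with a library double" do
  it "observes the method call and its arguments" do
    display = double("display")
    allow(display).to receive(:show_line)

    display.show_line("hello") # stand-in for the code under test

    expect(display).to have_received(:show_line).with("hello")
  end
end
```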

Chapter 4: The Seam Model

Page 29, "It seems that the only ways to end up with an easily testable program are to write tests as you develop it or spend a bit of time trying to 'design for testability.'"


Page 30, What are the different ways we think about our code as we are in different contexts?

E.g. In school, Feathers viewed his program as a listing, a thing to be understood piece by piece, top to bottom, without modularity in mind.

Another way to ask this is, how do we relate to the software we write in different contexts? Contexts like:

  • A personal project that only we maintain
  • The application code at a long-term, 9-5 job
  • The codebase for a short-term consulting contract

Page 30, some views on reuse; he could easily be talking about React code:

"Reuse is tough. Even when pieces of software look independent, they often depend upon each other in subtle ways."


Seams...

I've always thought of seams as the edges of a piece of code, the places where you can seal the airtight doors, cutting off the rest of the ship, so that you can float around this area in isolation.

Page 31, "A seam is a place where you can alter behavior in your program without editing in that place."

This definition gives me more to chew on. I think that's because it is more expansive. The version that matches my prior understanding is altering the behavior by making it a no-op, but there are other ways to alter behavior at a seam, depending on your needs.


The types of seams that are available to you (e.g. Object Seam) depend on the programming language you are using -- is there a preprocessing step? Compilation? Linking? Is it an interpreted language (like Ruby)? Does it have meta-programming capabilities?
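
Here's a minimal sketch of an Object Seam in Ruby, under made-up class names. The call to `charge` inside `place` is the seam: a test can alter which code runs there by substituting a different object, without editing `place` itself.

```ruby
Order = Struct.new(:total)

class OrderProcessor
  def place(order)
    raise ArgumentError, "empty order" if order.total <= 0
    charge(order) # object seam: which `charge` runs depends on the
                  # class of the receiving object
  end

  def charge(order)
    PaymentGateway.charge(order.total) # hypothetical real side effect
  end
end

# Subclass and override at the seam -- `place` is untouched, but its
# behavior in the test no longer reaches the payment gateway.
class TestingOrderProcessor < OrderProcessor
  def charge(_order)
    # no-op
  end
end

TestingOrderProcessor.new.place(Order.new(25)) # safe to run in a test
```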


Seams are a great way to do separation (see Ch. 3). They can also be used for sensing, though.

A test fixture tool like VCR is an example of doing both. It primarily separates you from some third-party API dependency. It also returns a recorded and saved response to the calling code, which can help with sensing.
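
A sketch of what that looks like with VCR (the `WeatherClient` is made up; the VCR calls are its actual API). The cassette separates the test from the live API, and the recorded response flowing back through our code gives us something to sense.

```ruby
require "vcr"

VCR.configure do |config|
  config.cassette_library_dir = "spec/cassettes"
  config.hook_into :webmock # intercept HTTP beneath our code
end

RSpec.describe WeatherClient do
  it "summarizes the forecast" do
    # Separation: HTTP is replayed from spec/cassettes/forecast.yml
    # instead of hitting the third-party API.
    VCR.use_cassette("forecast") do
      forecast = WeatherClient.new.fetch("60601")

      # Sensing: assert on what our code did with the recorded response.
      expect(forecast.summary).to be_a(String)
    end
  end
end
```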


Seams allow us to get some initial tests in place -- acting as a software vise to secure our code -- so that we can move on to more aggressive changes.


When you start to view code as having seams, you begin to see opportunities for writing more testable code. You design your code in a way that exposes easy-to-use seams. This is easiest to do if you are actively trying to use those seams in the tests you write while developing the code.

Chapter 5: Tools

  • IDE
  • Test Suite
  • Language-specific Refactoring Tools (e.g. codemods?)

Smalltalk has a refactoring tool called the Refactoring Browser which supports a ton of automated refactorings.


Martin Fowler's Refactoring Definition:

"A change made to the internal structure of software to make it easier to understand and cheaper to modify without changing its existing behavior."

Thoughts on well-scoped commits: A commit should either refactor some code (preserving its behavior) or it should alter the code's behavior. It shouldn't do both; problems arise when a commit tries to do both.


Are there IDEs with good automated refactoring support for JavaScript? E.g., ones that can handle method extraction well.

In terms of automated refactoring tooling for JavaScript, there are codemods (jscodeshift). What else is there?


Page 48, Feathers makes a case against UI-based testing tools: they are often expensive (💰), it's a lot of work to set up the tests, and it's often testing too far from the functionality, so it's hard to track down failures.


Page 48, Important features of early, free, popular unit testing frameworks (e.g. xUnit):

  • Write tests in the same programming language
  • Tests run in isolation
  • Tests can be grouped into suites and run on demand

I'm writing these notes mostly for myself, but if you want to react to any of my notes or questions, find me on Twitter.
