
Add unit tests on a project already in progress

Khalyomede ・2 min read

My experience

I am currently on a project where we did not use unit tests. Here we are, one year later, and as more and more issues appeared in code we thought we could trust, I felt it was the right time to propose unit tests.

Do you recognize yourself in this experience? If so, below is my advice on how to gently start using unit tests.

One fix = one unit test

Your code base is already big. You do not have the human bandwidth to go through every method and write unit tests for all of them.

However, you could start by creating one unit test for each new issue you fix.

// Fixes issue #31
it('should not allow planning a publishing date in the past', function() {
  // ...
});
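To make the idea concrete, here is a minimal, self-contained sketch of such a regression test. The `isValidPublishingDate` helper and the dates are hypothetical, invented for illustration; only the idea of pinning one test to one fix comes from the post.

```javascript
// Hypothetical validator behind the fix for issue #31:
// a publishing date must not be earlier than "now".
function isValidPublishingDate(date, now = new Date()) {
  return date.getTime() >= now.getTime();
}

// The regression test exercises exactly the case the issue reported.
const now = new Date('2020-06-01T12:00:00Z');
const pastDate = new Date('2020-05-31T12:00:00Z');
const futureDate = new Date('2020-06-02T12:00:00Z');

console.log(isValidPublishingDate(pastDate, now));   // false: rejected
console.log(isValidPublishingDate(futureDate, now)); // true: accepted
```

If the issue ever comes back, this test fails immediately and points straight at the case that regressed.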

Benefits of progressive unit tests

  • they make you more confident when you create your pull requests
  • they force you to think outside the box by stating the expected result of your fix
  • they save you time hunting for the edge case you did not think of if the issue comes back
  • PR reviewers quickly understand which case you are solving

Not convinced yet?

Pavol also wrote about the benefits of unit testing a project in progress in this article. Take a look at it if you need more points of view.


Coding can be frustrating, or even worse, stressful, in a production environment. Deadlines can make us take shortcuts and skip simple or obvious coding principles, leading to avoidable issues.

I think unit tests are the right tool to help us make our job more enjoyable, by making sure we are building our code on a solid basis.

I hope this post made you want to start trying unit testing your code if you did not already!



The problem is that the project is likely not well equipped to handle unit tests after the fact. The unit tests will likely have lots of dependencies and run slowly. The only remedy is to refactor the code base to be easier to test. However, refactoring code without unit tests is extremely risky. It is a vicious circle.


The problem is that the project is likely not well equipped to handle unit tests after the fact.

A unit test is for our own benefit, not the other way around. We don't program for the good of the tests (unless the project is TDD); we program as usual, and the unit tests should adapt to it.

Since I program in an OOP style (model classes and service classes), it is usually simple: we test the service class (only when possible).


That is why you use integration tests on the old code and unit tests on the new code you add or refactor, because, as you said, writing unit tests for legacy, unmaintainable code is madness.
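One way to picture that split, as a rough sketch (all names here are hypothetical, not from the thread): the tangled legacy function is pinned by a coarse test through its public entry point, while logic extracted during a fix gets a focused unit test.

```javascript
// Legacy: parsing and formatting tangled together; covered end to end.
function legacyFormatInvoice(raw) {
  const amount = Number(raw.split(';')[1]);
  return 'Total: ' + computeTotalWithTax(amount);
}

// New code, extracted during a fix: small and unit testable in isolation.
function computeTotalWithTax(amount, taxRate = 0.2) {
  return (amount * (1 + taxRate)).toFixed(2);
}

// An integration-style test pins the legacy behavior as a whole...
console.log(legacyFormatInvoice('INV-1;100')); // "Total: 120.00"
// ...while a unit test targets only the newly extracted logic.
console.log(computeTotalWithTax(50)); // "60.00"
```

The coarse test protects the refactoring; the unit test documents the new behavior.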


Right, but that involves writing the integration test (assuming one doesn't exist). This gives the developer three options:

1. Write the integration test, refactor the code, then write the unit test.
The "best" option, but the most time consuming.

2. Write the Integration Test instead of the unit test.
The code is at least tested, but the test is brittle and slow.

3. Fix the bug and skip the unit test (just this one time!)
Unfortunately, probably the most common. A developer will likely rationalize that this small bug is not important enough to justify all the work in option 1.


You are right: when the code base is not sane, unit tests will not add much value to the project.


What is the significance of unit testing, or of automating it, if someone has a fully automated functional test suite with 70%-80% test coverage (we keep adding new cases with the growing feature set, and automate defect fixes as well) and 60%-70% code coverage? Would unit testing make any difference in such a situation? The product is a software development kit (SDK), almost 20+ years old but still flourishing. And if yes, how do you convince the product owners to allow for it?


Code coverage will tell you how much of your methods/functions you covered, but not how "well" you tested them. To answer your first question: on its own, it does not mean very much to me. Still, 70% is a very good sign.

When I have this same conversation with my colleagues, I always answer: unit testing methods or functions alone is not enough. This reminds me of another Dev.to post in which the author raises an interesting point: unit tests can make it hard to reproduce a "real life" context, because you are monkey patching all around instead of exercising real scenarios that could impact your methods or functions. So the answer to your second question would be yes, but not only.

The best thing to do for a 20+ year old product is to monitor. Monitor everything. But only if time allows you to; if it has not been planned in your priorities, do not take the risk of losing time on it.
Monitoring will make it easier, when the moment comes, to convince your lead dev/PO that unit testing is healthy.
To do so, install the proper library, and go piece by piece. Fixing issues has proven to be an easy and gradual way for me to introduce unit tests. Check the code coverage before and after a PR: you will find some great improvements with unit tests alone.
Then, within a month or so, your lead dev will notice you are including those tests and ask why. Then pop up those statistics, the time you took, and the time you will not spend if the fixed case comes up again.
Finally, you can say that you should start being confident about what you deliver, and unit tests are perfect for this task.

Hope it helps :)


Code coverage will tell you how much of your methods/functions you covered, but not how "well" you tested them. To answer your first question: on its own, it does not mean very much to me. Still, 70% is a very good sign.

I agree. I have seen so many tests that simply cheat; they really test nothing but trivialities.


Thanks for replying with such great details and clarity. I do agree to work on unit tests piece by piece, improve, and demonstrate the results to other stakeholders to get them convinced. Your ideas are indeed constructive and helpful.

I see many make the mistake of asking managers and product owners for permission to do things they deem necessary for the health and stability of the project. I never ask if I can write tests or do TDD; it's just part of my estimate for the entire task. Just like a doctor won't ask you, "Do you mind if I wash my hands before the surgery? It's going to take an extra 15 minutes."

One thing to keep in mind when we talk about "unit" tests: there is a wide range of ideas about what "unit" means. Really, when we talk about testing, we want a suite of tests that I can run on my local machine in at most a minute. I use TDD, and this matters because of the red, green, refactor loop. If the tests for the part of the code I work on take longer than a few seconds, that slows me down. I need quick feedback loops so I can make progress. The longer the tests take to run, the longer my red, green, refactor loops get, and the less work I get done. Or worse, people stop running the tests.

Your automated suite of functional tests may fit these needs if any developer can run them on any machine at any time, and all the tests take at most a couple of minutes to run. If that is not the case, you may want to look into "unit" tests.

This is a really good point, Michael! Testing and documentation are both crucial parts of software development, not just something you add if there's time. Unfortunately, many managers only measure things in the short term and either don't care about, or don't know how to measure, the long tail of work that bad code and documentation cause.

Sometimes a lack of tests makes it super slow to add new code because you're worried it will break things. Sometimes a lack of documentation wastes days of developer time that could have been saved by minutes or hours of work up front.

Speaking the truth here. I cannot count how many examples I have of untested code that produces side effects over time...


I agree that, if possible, it is a good idea to add tests to legacy code. As part of one of my yearly goals, I am trying to write tests for any new non-trivial code, but I want to write tests for as much old code as possible too. A lot of code at my company simply isn't testable and needs refactoring before it can be fully tested. But the day will come when it is all at least good enough to test :)


I highly recommend reading Michael Feathers's Working Effectively with Legacy Code. It defines "legacy code" as code without tests, and teaches how to go about safely modifying a 10,000-line JSP file that you inherited!

Anybody who has inherited a 10-year-old project would definitely find that book indispensable!


Thank you for sharing! If folks are not into JSP, is this book still instructive?


Yes, the book is about the general concepts, not a specific language.

It discusses several languages, in passing (C#, C++, Java, HTML & JavaScript, and others).


Yes! It gives so many practical suggestions and a range of options depending on how much time you have. This is a must-read for anyone who needs to maintain a legacy system.


Adding unit tests to an existing legacy code base is difficult.

Adding unit tests to new code being added to an existing legacy code base as you go, that's achievable. And laudable!

CAUTION: I strongly caution against throwing out the old legacy code base and starting over greenfield. Despite how appealing that may seem, it's highly risky. Our industry has few examples of success, and many examples of failure doing that.

In my previous project, when we ran into a bug we'd write a bug regression test that exercised the code so as to reproduce the bug. Usually that test was written after the problem was discovered, but often (but not always) before the fix was made.

The bug regression tests, user acceptance tests, integration tests, and system tests were run as a big batch process farmed out over many machines. If run serially, they'd take about 660 hours to run, but since they were farmed out they ran in about 6 hours.

We also had about 70% code coverage for unit tests. (Excluding all the UI code, since that code isn't amenable to unit testing, the code coverage was about 95%, I'd guesstimate.) The unit tests took about 10 seconds to run, including spin-up time. We were diligent to make sure no integration tests snuck into our unit tests, since that would devalue the unit test suite and likely make the unit tests take much longer than desired.

Code that was written with unit tests almost invariably had these properties: it was actually unit testable and actually tested, often adhered diligently to SOLID principles, followed DRY for the code and WET for the tests, often had referential transparency (i.e., no global dependencies), was much more robust than code without tests (because the unit tests provided a guarantee of basic correctness), was highly maintainable and much more malleable, used the inversion-of-control pattern extensively (because that is necessary to make things unit testable with mocks or fakes), and could be refactored with confidence.
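A tiny sketch of that inversion-of-control point: the collaborator is injected, so a unit test can substitute a fake with no global state. All names here (`UserService`, `mailer`) are invented for illustration, not taken from the comment.

```javascript
class UserService {
  constructor(mailer) {
    this.mailer = mailer; // injected dependency, no global state
  }

  register(email) {
    if (!email.includes('@')) {
      throw new Error('invalid email');
    }
    this.mailer.send(email, 'Welcome!');
    return { email };
  }
}

// In a unit test, a fake mailer records calls instead of sending anything.
const sent = [];
const fakeMailer = { send: (to, body) => sent.push({ to, body }) };
const service = new UserService(fakeMailer);
service.register('jane@example.com');
console.log(sent.length); // 1
console.log(sent[0].to);  // "jane@example.com"
```

Because the dependency comes in through the constructor, the test never touches a real mail server and runs in milliseconds.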

Code that was not written with unit tests (e.g., the entirety of the prior code base, two projects back) was typically highly intractable to add unit tests to, and lacked the above properties. Adding unit tests to code designed without unit testing in mind was poor ROI, because the code was often highly entangled, tightly coupled, low in cohesion, poorly encapsulated, dependent on global state, and overly mutable.

Also, unit tests are only run in debug builds. The other tests are only run in release builds.

The QEs write the user acceptance tests, integration tests, and system tests.

The developers write the unit tests, and usually also wrote the bug regression tests, since although the QE often discovered the problem, or worked with the customer who filed an incident and reproduced it, the real cause of the problem was not always obvious from the repro steps.


Thank you so much for sharing your experience! Very instructive; I will keep this testimonial in mind as I pursue my unit test journey.

I absolutely agree with you on the part where you say that poor code, with little to no attention to coding principles, will not benefit from unit tests, even on new projects. Very true.


Awesome piece and I totally agree! At work we're dealing with this exact scenario (old code that has no tests and has proven to be consistently faulty); I'm working towards building a stronger testing culture for our team overall but it definitely takes time.


My favorite reason to write unit or integration tests for bug fixes is that it forces you to understand, triangulate, and formulate the problem. I have had to work with so many bug reports that are vague: they don't specify how to replicate the issue or what the expected behavior is.

When you write a test before fixing the bug, you can confirm that you have found it (the test fails) and that you have fixed it (the test passes), and there is something for peer review to look into to better understand what happens and whether there are edge cases you didn't think about.

A great book on the topic, especially for legacy code, is Michael Feathers's Working Effectively with Legacy Code. I've learned so many good tricks and habits from that book that have helped me tackle some bloated 15+ year old codebases. edit: Ah, just noticed Franz had already recommended the book below.


Thank you very much, your feedback is gold! I will definitely purchase this book; it seems like a must-have according to your experiences, folks :)