
Beekey Cheung

Originally published at blog.professorbeekums.com

Backfilling Tests

Automated testing is a wonderful thing. Think about it: why spend hours, or even a few minutes, doing something that takes a computer less than a second? I’ve never regretted writing tests, especially after seeing the time to debug and fix bugs in a subsystem go from hours to minutes. Bug report -> investigation -> code fix -> test -> release can all be done in under 15 minutes when automated tests are around.

The problem with automated tests is the initial effort of writing them. I’ve never regretted writing tests after the fact, but I’ve also talked myself out of writing them ahead of time, knowing full well how useful they are. There are a variety of reasons why I and many developers like me do this. Regardless of what those reasons are, the damage is done: a software system is lacking tests. What can be done to resolve that? How do we start testing a system that has little to no test coverage?

The important thing to keep in mind is that automated testing isn’t an all-or-nothing affair. You do not need to stop all work and do nothing but write tests for six months. Not only is it hard to justify not building features for that long, but it can get tedious. Tests are great, but they are often monotonous to write, and there isn’t as large a feeling of accomplishment as there is in building a new feature.

That feeling of accomplishment is the first thing that needs to be tackled. Developers need to feel that the code they write matters; it needs to be useful and used. That can be achieved by first tackling the most time-consuming manual tests. Is there a feature no one wants to work on because every time someone touches it, they have to spend days testing it? Spend those days writing automated tests instead. The high return on investment makes it easy to justify the time spent building the tests, and the relief of never having to run through those manual test cases again provides the sense of accomplishment.

I’m pretty sure I’m not the only one who has started writing scripts to aid in time-consuming manual tests. If you have a script that aids in manual testing, you have a script that sets up a test case. Just take it one step further and fully automate it.
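As a minimal sketch, suppose the manual routine was creating a discounted order and then eyeballing the total on screen. Everything here is hypothetical (the Order class stands in for whatever your setup script already builds), but it shows the shape of the conversion: the setup script becomes a fixture, and the eyeball check becomes an assertion.

```python
import pytest

class Order:
    """Hypothetical stand-in for the real domain object your script creates."""
    def __init__(self, quantity, unit_price):
        self.total = quantity * unit_price

    def apply_discount(self, percent):
        self.total *= (1 - percent / 100)

@pytest.fixture
def discounted_order():
    # The same setup the manual-test script already performed.
    order = Order(quantity=2, unit_price=10.00)
    order.apply_discount(percent=10)
    return order

def test_discount_reduces_total(discounted_order):
    # Previously a human verified this number on screen;
    # now the check runs in milliseconds on every build.
    assert discounted_order.total == pytest.approx(18.00)
```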

Another great tactic is to write a test for every new bug fix. A problem with starting to write automated tests for an existing system is figuring out where to start. Some test cases are likely to catch bugs and some aren’t. A real bug answers that question: if a bug exists, then there is obviously a 100% chance that the bug has occurred! The value of an automated test around that bug is clear.

Some developers may think that once they fix a bug, it is unlikely to happen again, so maybe a test isn’t worth writing. My counter-argument is that every developer has had this reaction:

“Ugh! That bug again? I thought we fixed it last month.”
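For instance, here is a hedged sketch of what a regression test around a fixed bug might look like. The slugify() helper and the punctuation-only bug are invented for illustration; the point is that the test pins down the exact input from the bug report.

```python
import re

def slugify(title):
    """Stand-in for the fixed production function."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return slug or "untitled"  # the hypothetical fix: never return ""

def test_punctuation_only_title_regression():
    # Reproduces the input from the bug report, so if the fix ever
    # regresses, this fails before release instead of in production.
    assert slugify("!!!") == "untitled"

def test_normal_title_still_works():
    assert slugify("Backfilling Tests") == "backfilling-tests"
```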

This is a drip-by-drip approach to backfilling automated tests. It’s convenient because you can write tests at a relaxed pace, and it makes getting started feel less overwhelming. Unfortunately, the big problem with this approach is that with poor test coverage, passing tests provide little confidence that a code change actually works. More importantly, a small number of tests is unlikely to catch a bug before a release. That reduces the perceived value of the tests and can result in going back to never writing any.

Mitigating this issue is possible though. For a large software system, focus test writing on specific subsystems or features. Instead of writing a single test each for five features, write a dozen tests for a single feature, as in the sketch below. That way you can at least be confident that your tests will catch issues in code changes around that one feature, and it becomes a foundation for building out tests in other features.
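A sketch of what that concentration might look like, reusing the hypothetical slugify() helper from the regression-test example above: many cases for one feature rather than one case each for five unrelated features.

```python
import pytest

# Assumes the slugify() stand-in defined in the previous sketch.
@pytest.mark.parametrize("title, expected", [
    ("Backfilling Tests", "backfilling-tests"),
    ("  leading spaces", "leading-spaces"),
    ("MiXeD CaSe", "mixed-case"),
    ("a--b", "a-b"),
    ("!!!", "untitled"),
])
def test_slugify_feature_suite(title, expected):
    # A cluster of tests around one feature gives real confidence
    # in changes to that feature, unlike scattered one-off tests.
    assert slugify(title) == expected
```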

Backfilling tests can definitely seem overwhelming, but that’s only if you convince yourself that you have to write tests for everything all at once. It becomes a lot easier when you break the task into more manageable chunks and spread them out over time. It’s still a lot of work, but you’ll never regret doing it.

Top comments (3)

Ben Halpern

Even though test coverage metrics are fallible, I found having the scoreboard can sort of gamify the process and be really motivating. We went from bad to good pretty quickly with the fun of seeing the test coverage metric rise to 50%, 60%, 70%, 80%, etc.

Beekey Cheung

Yeah, I get how that motivation is important. How do you mitigate the fallibility? One of my biggest concerns is the false sense of security from covering code without making the assertions necessary to truly validate it.

Ben Halpern

I think it's a subtle culture thing. We really addressed this early and often. We didn't try to put pressure on the number itself, we mostly let the motivation happen on its own. Code reviews really don't care about the optimization, and we tried to proceed with self-awareness at every turn.

It's the little things. It's easy to proceed with blind optimism or ignorance.