This article was originally posted on calhoun.io, where I write about Go, web dev, testing, and more.
This is the second time I've written about an unpopular opinion I hold about testing, and I'm sure it will ruffle a few feathers. I don't always practice Test Driven Development (also known as TDD), and I believe there are many situations where practicing TDD is more of a hindrance than a boon.
I'm going to explore why I don't always practice TDD in a moment, but I want to be clear beforehand that this is not a TDD bashing article. In reality, I think learning about and trying TDD is incredibly beneficial for developers. TDD forces developers to think about development and design from another perspective; rather than focusing on, "How am I going to implement this?" they instead need to step back and think, "How would I use this function?" While this change may seem minor, it can result in drastic improvements to a codebase.
So why am I writing this article?
I am writing this article because there are many developers out there who struggle with TDD and feel bad as a result. Whether it is imposter syndrome or just feeling like they have a dirty secret they need to hide, those feelings exist in far more developers than most people realize, because everyone is too worried to admit that they just can't make TDD work for them. This is made even worse when a developer hears others talking about how TDD is this amazing thing that made them so productive. In short, there are many coders out there who feel like crap when they shouldn't; TDD is a great learning tool, but I can say firsthand that it isn't always effective, and in many cases it just hinders my ability to produce high quality code quickly.
Alright, now let's dig into why I don't practice TDD all the time.
TDD focuses too much on unit testing
Test driven development is often taught as a process with three steps: Red, Green, Refactor.
The idea here is pretty simple:
- The Red step is where you write a test that will fail because it tests something you haven't implemented yet.
- The Green step involves writing code to make that test pass.
- The Refactor step is where you restructure your code into something more maintainable while using your tests to ensure you don't break anything. You DO NOT add any new functionality during this step.
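To make the cycle concrete, here is a minimal sketch of one pass in Go (the `calc` package and `Add` function are made-up names for illustration). First the Red step, where the test fails because `Add` does not exist yet:

```go
package calc

import "testing"

// Red: written before any production code exists. The suite fails
// (here it won't even compile, which counts as a failing test).
func TestAdd(t *testing.T) {
	if got := Add(2, 3); got != 5 {
		t.Errorf("Add(2, 3) = %d; want 5", got)
	}
}
```

Then the Green step, writing just enough code to make the test pass:

```go
package calc

// Green: the simplest implementation that makes TestAdd pass.
func Add(a, b int) int {
	return a + b
}
```

With something this trivial there is nothing left to refactor, but in real code the Refactor step is where you clean things up while the passing tests keep you honest.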
I tried to follow TDD pretty strictly for a while, but at times it just felt like a massive pain; every so often it slowed me down rather than helping me be more productive.
I couldn't really put my finger on exactly why that was until I read this Twitter thread by Jeffrey Way:
I only include the first tweet here, but you should go read the entire thread. It is well worth your time.
Jeffrey puts into words what I have been struggling with for quite some time; TDD is hard to learn and grasp when we focus so much on unit tests and completely ignore the more complex scenarios that every developer is bound to run into sooner or later.
In nearly every case where I find using TDD troublesome, it almost always stems from me trying to unit test some code where I need to mock out everything under the sun.
TDD is supposed to help us take a step back from the implementation and instead focus on how the code might be used, but when I am writing unit tests where a bunch of dependencies need to be mocked out, that is no longer true. I am forced to once again start thinking about implementation details like, "Will my code need access to a database?" and "What about encoding? Do we need to support multiple output formats? Should I inject a mock encoder for the test?"
It just doesn't feel natural to me to think about code this way. It isn't beneficial to start thinking about what dependencies I'll need before I even start to use them. Instead, I work better when I take a step back and write a feature test. Something like, "When I `POST` to the `/orders` path with the JSON `{"price": 123, ...}`, I expect to get the following JSON back with an ID that matches the `ord_[a-zA-Z0-9]{8,}$` pattern."
Not only is this type of test incredibly easy to write - we just spin up an app, hit the endpoint, and check the results - but it also gets me back into the correct mindset. I'm now thinking about how someone might actually use the code; I'm thinking about another developer interacting with my API, or a real person filling out a form and submitting it.
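In Go, that feature test might look something like the sketch below, built on the standard `net/http/httptest` package. The `example.com/app` import path and the `app.Handler()` constructor are hypothetical stand-ins for however your application exposes its HTTP handler.

```go
package app_test

import (
	"encoding/json"
	"net/http"
	"net/http/httptest"
	"regexp"
	"strings"
	"testing"

	app "example.com/app" // hypothetical import path for the application
)

func TestCreateOrder(t *testing.T) {
	// Spin up the whole app behind a test server.
	srv := httptest.NewServer(app.Handler()) // app.Handler() is assumed
	defer srv.Close()

	// Hit the endpoint the way a real client would.
	body := strings.NewReader(`{"price": 123}`)
	res, err := http.Post(srv.URL+"/orders", "application/json", body)
	if err != nil {
		t.Fatalf("POST /orders: %v", err)
	}
	defer res.Body.Close()

	// Check the results: JSON with an ID matching the expected pattern.
	var got struct {
		ID string `json:"id"`
	}
	if err := json.NewDecoder(res.Body).Decode(&got); err != nil {
		t.Fatalf("decoding response: %v", err)
	}
	if !regexp.MustCompile(`ord_[a-zA-Z0-9]{8,}$`).MatchString(got.ID) {
		t.Errorf("id = %q; want match for ord_[a-zA-Z0-9]{8,}$", got.ID)
	}
}
```

The test never mentions a database, an encoder, or any other dependency; it only describes behavior a client can observe.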
There are obviously exceptions to this. For instance, if I'm writing a function to factor numbers, TDD could lead to a reasonable solution to the problem. The key component here is that we aren't really focusing on mocks and dependencies; we are instead writing an isolated function, and TDD can shine in situations like these, assuming we don't fall into the second trap of thinking we have to write the absolute minimum amount of code at all times.
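As a quick sketch of what that looks like in Go - a pure function plus a table-driven test, with no mocks in sight. The `Factors` name and behavior here are my own illustration, not a prescribed design:

```go
package calc

import (
	"reflect"
	"testing"
)

// Factors returns the positive factors of n in ascending order.
func Factors(n int) []int {
	var fs []int
	for i := 1; i <= n; i++ {
		if n%i == 0 {
			fs = append(fs, i)
		}
	}
	return fs
}

func TestFactors(t *testing.T) {
	tests := []struct {
		n    int
		want []int
	}{
		{1, []int{1}},
		{6, []int{1, 2, 3, 6}},
		{13, []int{1, 13}},
	}
	for _, tc := range tests {
		if got := Factors(tc.n); !reflect.DeepEqual(got, tc.want) {
			t.Errorf("Factors(%d) = %v; want %v", tc.n, got, tc.want)
		}
	}
}
```

Each new test case in the table is a natural Red step, and the function grows to match.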
I'm also not saying you shouldn't ever write unit tests where you mock things. These tests can provide value in many environments. I just don't find myself practicing TDD as often when writing my unit tests. Instead, I start with my feature tests and then write unit tests as I see fit.
Not all code should be written one test case at a time.
TDD is often taught using the following rules:
- You are not allowed to write any production code unless it is to make a failing unit test pass.
- You are not allowed to write any more of a unit test than is sufficient to fail; and compilation failures are failures.
- You are not allowed to write any more production code than is sufficient to pass the one failing unit test.
These rules were authored by Robert C. Martin (Uncle Bob) and can be found on his website, butunclebob.com.
While this can work in many situations, I wholly disagree with the idea that all code should be written this way.
What I expect to happen here is for someone to link me to a video, a blog post, or some other example where the developer uses TDD to derive some complicated algorithm. One popular example is writing a function to determine the factors of a number. I've also seen articles where the author explores whether it is possible to derive something like quicksort via TDD.
Surely if these more complicated algorithms can be derived through TDD then it must work in all cases, right?
While TDD can at times be used to derive a reasonable algorithm, I have also seen countless instances where it has worked in the exact opposite way: by using TDD, the developer derived an algorithm that was incredibly slow and inefficient, and that sometimes even missed edge cases, because it is nearly impossible to think of every edge case, let alone test them all.
The truth is, there are situations where you actually need to sit down and think about a little more than just the expected inputs and outputs of a function. The easiest example to grasp is probably a sorting algorithm - while you might derive quicksort from a TDD approach, you could just as easily derive an algorithm that is far slower and less efficient. Even if you did come up with an efficient quicksort, most standard libraries use something a little more complex, because quicksort isn't as efficient on smaller lists.
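To make that last point concrete, here's a rough sketch of the hybrid idea in Go: quicksort that hands small slices off to insertion sort. The cutoff of 12 is an arbitrary placeholder - real standard libraries tune that number with benchmarks and layer on further safeguards:

```go
// hybridSort sorts a in place, falling back to insertion sort for
// small slices where quicksort's overhead isn't worth it.
func hybridSort(a []int) {
	if len(a) < 12 { // assumed cutoff; real libraries benchmark this
		insertionSort(a)
		return
	}
	p := partition(a)
	hybridSort(a[:p])
	hybridSort(a[p+1:])
}

func insertionSort(a []int) {
	for i := 1; i < len(a); i++ {
		for j := i; j > 0 && a[j] < a[j-1]; j-- {
			a[j], a[j-1] = a[j-1], a[j]
		}
	}
}

// partition uses the last element as the pivot (Lomuto scheme) and
// returns the pivot's final index.
func partition(a []int) int {
	pivot := a[len(a)-1]
	i := 0
	for j := 0; j < len(a)-1; j++ {
		if a[j] < pivot {
			a[i], a[j] = a[j], a[i]
			i++
		}
	}
	a[i], a[len(a)-1] = a[len(a)-1], a[i]
	return i
}
```

A test suite that only checks inputs and outputs would happily pass with or without the insertion-sort branch; nothing in TDD's feedback loop pushes you toward it.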
Going beyond algorithms, there are also times where constant context switching just hurts overall productivity. While it might work well for many developers to write one test, then one or two lines of production code, test again, refactor, then repeat, I personally find that constant switching a distraction. I find that I am far more effective when I:
- Write a few test cases demonstrating the basic functionality I expect.
- Spend time to think about how I might achieve that functionality.
- Implement a rough version that gets my tests passing.
- Refactor as necessary.
This is pretty similar to TDD, but it isn't quite the same, and I'm sure if I taught it as my version of TDD many would tell me I'm "doing it wrong". 🤷‍♂️
Wrapping up
I find TDD to be beneficial at times and I'm not saying we should abandon it. Instead, what I am trying to convey is that getting caught up in this strict set of rules defining what is and isn't TDD is a mistake.
We should instead take the lessons we can learn from TDD and apply them in the way that is most effective for ourselves. If that means we end up breaking a few of the rules, so be it. After all, the goal of TDD, agile, or really any development process is to make us better at our job, and if they aren't doing that then something needs to change.
Want to learn Go?
Interested in learning or practicing Go? Check out my FREE course - Gophercises - Programming Exercises for Budding Gophers.
I also have some premium courses that cover Web Dev with Go and Testing with Go that you can check out as well.
Top comments (25)
Thank you! I'm SO glad to see others say that TDD isn't for them. Unit testing was hammered into me throughout university and I always felt like less of a developer for not being able to follow it. Now that I'm reading articles like yours, I feel a lot better knowing that it doesn't have to work for everyone (and that I'm not necessarily a bad dev).
Fair enough, as long as you end up at the same spot: reasonably bullet-proof code that's easily debuggable.
I have spent the majority of my career as a “code janitor” - retroactively adding tests to legacy code that was not built with TDD in mind. In my experience, this is an expensive, slow, short-sighted way to develop. If RGR isn’t for you, that’s totally fine! However, I do agree with Uncle Bob that your tests should cover happy/sad paths.
As the Bahá’í would say - “many lanterns, one light”. As long as we’re all moving toward the same light, I say use the lantern with which you’re comfortable.
Does this apply to both backend and front-end development in your experience?
I would say it does, yes, if for no better reason than that it provides documentation for the code you have written.
That said, I agree with John’s sentiment that TDD can sometimes be done simply to “make the numbers”. Tests should serve the code base and the developers, not the other way around.
For me, testing is about codifying a behaviour. Sorting is a great example: does my sorting function sort this list of numbers the way I expect? Nice black-box testing would not care about how that algorithm is implemented; we only care about behaviours. Does my sorting function sort quickly for large sets of numbers? Does it perform well for a variety of distributions? What about degenerate cases? I suppose we could imagine a series of tests that would force us to write quicksort rather than mergesort or shellsort but... meh.
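For what it's worth, that black-box style might look like this in Go, where `mySort` is a hypothetical function under test and the standard library acts as a trusted oracle:

```go
package calc

import (
	"reflect"
	"sort"
	"testing"
)

func TestMySort(t *testing.T) {
	in := []int{5, 3, 8, 1, 9, 2, 7}

	// Build the expected result with a trusted reference implementation.
	want := append([]int(nil), in...)
	sort.Ints(want)

	mySort(in) // hypothetical function under test; sorts in place
	if !reflect.DeepEqual(in, want) {
		t.Errorf("mySort = %v; want %v", in, want)
	}
}
```

Nothing here cares whether `mySort` is quicksort, mergesort, or shellsort - only that the behaviour is right.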
I've definitely seen that too. I mean, what can I say? TDD will not make you smart. TDD is not a panacea that can replace an understanding of algorithmic complexity. You can stick a test in that requires you to make the sort happen faster than some number, but there is no way TDD is magically going to show you how to make it pass. TDD is a tool to help you think, it's not a substitute for thinking.
Are you writing a rough version that makes all the tests pass the first time (which I don't think I could do), or are you iterating by making the tests pass one after another with a refactor after each? In either case, sounds enough like TDD to me - testing first, using tests to think about code.
I think TDD is a really good practice for beginners because 1) It gives you a better understanding of what to test and how to write those tests and 2) It forces you to learn how to break down large complex problems into smaller more manageable "blocks" of logic. #2 is more important.
Once you get good at those two things, I think TDD is just another tool in your toolbox, sometimes it makes sense, sometimes it doesn't. Sometimes TDD helps a lot by forcing you to write clean code and focus on one small problem at a time, and sometimes it's better to just get lost in the code for an hour or two to solve a problem without worrying about writing tests every five minutes.
Personally, my go-to code "process" looks like this:
- Write feature tests defining the scope of the problem I'm trying to solve.
- Focus on programming and problem solving until the feature tests pass.
- Shift gears to unit testing and refactoring to make the code cleaner.
- Finish with documentation and readability for all the code I wrote.
I find this process helps me a lot by allowing me to focus on different activities (which usually require a completely different mindset) for larger chunks of time, rather than shifting constantly between testing and programming. Instead I start by defining the scope of the problem I'm trying to solve (feature tests), then focus on programming and problem solving to pass the feature tests, then shift gears to unit testing and refactoring the code to make it cleaner, and finally focus on the documentation and readability of all the code I wrote. Then I write new feature tests for a different problem scope and rinse and repeat.
I have had quite a good experience with TDD. It has actually saved me tons of time. I love it mostly for fixes (I know exactly what should work, so it is easier to begin with tests, unit or feature).
However, I prefer API-first for my new projects: I know how I want to use my tool, with a specific syntax, chaining my methods in a particular way. So I begin by writing my methods, then I sketch the algorithm in comments, and I go deeper and deeper until I need elementary methods (like checking if a key exists, ...). Once I reach this point, I start unit testing those methods, and I continue...
So I agree with you: TDD is not the answer, and we teach it in a way that prevents thinking about the big picture.
Same.
I have found it extremely useful during software maintenance and evolution. Let me give an example:
I had to improve one part of a complex report engine in order to support some formatting. The component I needed to improve interacted with only a few other components - really just the one that used it. The problem was that there were too many possible inputs, and I could not analyze the whole engine to understand how it works.
They had a big set of business tests that exercised the possible workflows of the whole system using BDD tests. Hundreds of reports were tested for every known use case (CRM, purchasing, sales, inventory, production, etc.).
What I did was run the complete set of tests against a modified version of the original component that saved the method calls and their responses.
Then I had hundreds of unit tests for my new component, which I ran on every change. I even saved resources and time, because I no longer had to run the whole system in order to test my component.
After a few days my replacement component was ready, with all the known cases covered, plus the new ones required.
The replacement was successful, IMHO.
And I saved the time it would have taken to learn how the whole report engine works.
That demonstrated to me how useful the workflow (BDD) tests were for generating unit test contexts.
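A rough sketch of that record-and-replay idea in Go, assuming the component sits behind an interface (`Formatter`, `Input`, and `Output` are hypothetical names standing in for the real report engine types):

```go
import "encoding/json"

// Hypothetical interface for the component being replaced.
type Formatter interface {
	Format(Input) (Output, error)
}

// recordingFormatter wraps the real component and logs every call so
// the observed inputs and outputs can be replayed later as unit tests.
type recordingFormatter struct {
	inner Formatter
	log   *json.Encoder // e.g. writing to a fixtures file
}

func (r *recordingFormatter) Format(in Input) (Output, error) {
	out, err := r.inner.Format(in)
	r.log.Encode(struct {
		In  Input
		Out Output
		Err string
	}{in, out, errString(err)})
	return out, err
}

func errString(err error) string {
	if err != nil {
		return err.Error()
	}
	return ""
}
```

Run the existing BDD suite once with the wrapper in place, and the fixtures file becomes a ready-made set of unit test cases for the replacement component.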
What turned me off TDD was how every introduction I read always started with something like "what happens if we pass a string when we're expecting a number" - I'm using a language with a static type system, that isn't going to happen (well I guess someone could do something perverse with reflection, but that seems like a bit of a stretch). Then there would be "what happens if there's an unexpected null" - I'm using static analysis tools to stop that happening in the first place.
Then once you got past the tests that are better left to better tooling, most of the tests I was being encouraged to write seemed to be doing little other than testing the JVM (which I think is a side effect of "only writing enough code to pass the test").
I am a bigger fan of fault injection testing. It is a lot more complex, but the payoff is way bigger when you learn to develop reliable and redundant infrastructure that responds predictably to system faults.
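A minimal version of that fault-injection idea in Go might look like the sketch below, where the failing dependency is swapped in behind an interface. Every name here (`failingStore`, `Order`, `NewService`, `CreateOrder`) is hypothetical:

```go
package app_test

import (
	"errors"
	"testing"
)

// failingStore satisfies a hypothetical OrderStore interface but fails
// every call, simulating a downed database.
type failingStore struct{ err error }

func (s failingStore) SaveOrder(o Order) error { return s.err }

func TestCreateOrder_StoreDown(t *testing.T) {
	svc := NewService(failingStore{err: errors.New("connection refused")})

	// The fault is injected; assert the system fails predictably
	// instead of panicking or silently succeeding.
	if _, err := svc.CreateOrder(Order{Price: 123}); err == nil {
		t.Fatal("expected an error when the store is down")
	}
}
```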
I also dislike TDD in many scenarios. I've seen the same problems you've described. There are cases where I use it, but more often, I write the code, write the tests to cover the code's features, and work from there.
TDD is about building things that do something! It isn't just documenting your code. It should provide some useful feature, like preventing errors when several people have their hands in the code, or providing a pop-up UI tutorial on how the API under your application works, or even generating some type of state (a database, config, or .json files) that customizes how your application will work. Otherwise, it is just boilerplate code no one will read.
I also hate the unit-test-only attitude some have. To test your application for reasons of code coverage alone is stupid. Same as: look how many lines of code I wrote today!
I agree in some way with this. I've seen teams trying to enforce it for everyone, and I think it's a mistake; I felt bad for having a hard time learning it.
More specifically, for front end developers, when I am building a new component/page, I prefer to write my HTML first, make it semantic, accessible, style it, and then add logic needed for the interactivity. I find TDD to not always work in that case, as I find myself focusing more on implementation details than the actual feature I am trying to implement.
But when rewriting a whole page in another system, I find it really useful, as I already have a pretty good idea of how it's going to look and can make sure I cover all the cases the code being rewritten was handling.
I think in the end, I do my own version of TDD rather than following it to the letter. As long as I am still confident my code is tested, I think this is fine, and this is what I try to show more junior co-workers as well, so they find what works for them in a more natural way.