Shai Almog

Originally published at debugagent.com

Why I Don't do TDD

I recently gave a talk about debugging for the London Java Community. During the Q&A portion, someone asked about my approach to Test-Driven Development (TDD). In the past, I viewed the practice in a more positive light. Writing lots of tests: how can that be bad?

But as time has passed, I've come to see it differently: as a very limited tool with very specific use cases. It doesn't fit the types of projects I build, and it often hinders the fluid process it's supposed to promote. But let's backtrack for a second. I really liked this post that separates the types of TDD and the problems with each. Let's simplify a bit and start with a clarification: every PR should have good test coverage. That isn't TDD. It's just good programming.

TDD is more than that. With TDD, we first define the constraints (the tests) and then solve the problem. Is that approach superior to solving the problem first and then verifying the constraints hold? That's the core question of TDD versus just writing good test coverage.
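
To make that concrete, here's a minimal sketch of the TDD ordering in JUnit 5. The `Discount` class and its rule are hypothetical, purely for illustration: the test is written first and fails, then the code below it is written to make it pass.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Step 1 (red): this test is written first and fails, because Discount
// doesn't exist yet. Step 2 (green): write just enough code to pass.
class DiscountTest {
    @Test
    void tenPercentOffLargeOrders() {
        assertEquals(90.0, Discount.apply(100.0), 0.001);
    }

    @Test
    void smallOrdersPayFullPrice() {
        assertEquals(50.0, Discount.apply(50.0), 0.001);
    }
}

class Discount {
    static double apply(double total) {
        return total >= 100.0 ? total * 0.9 : total;
    }
}
```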

The Good

TDD is an interesting approach. It’s especially useful when working with loosely typed languages. In those situations TDD is wonderful as it fills the role of a strict compiler and linter.

There are other cases where it makes sense: when we're building a system with very well-defined inputs and outputs. I've run into a lot of these cases when building courses and materials. With real-world data, this sometimes happens when we have middleware that processes data and emits it in a predefined format.

The idea is to construct the equation with the unknowns in the middle; coding then becomes filling in the blanks. It's very convenient in cases like that.
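
A sketch of what that looks like, assuming a hypothetical `Normalizer` whose output format is fixed in advance; the tests are the equation, and the one-line implementation fills in the blank:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// The input/output pairs are fixed by the spec before any code exists.
class NormalizerTest {
    @Test
    void trimsAndLowercasesNames() {
        assertEquals("alice", Normalizer.normalize("  Alice "));
    }

    @Test
    void collapsesInternalWhitespace() {
        assertEquals("bob smith", Normalizer.normalize("Bob   Smith"));
    }
}

class Normalizer {
    // "Filling in the blanks": any implementation satisfying the pairs works.
    static String normalize(String raw) {
        return raw.trim().replaceAll("\\s+", " ").toLowerCase();
    }
}
```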

The Bad

“Test Driven Development IS Double Entry Bookkeeping. Same discipline. Same reasoning. Same result.” – Uncle Bob Martin

I would argue that testing in general is a bit like double-entry bookkeeping. Yes, we should have tests. The question is whether we should build our code based on our tests or vice versa, and here the answer isn't so simple.

If we have a pre-existing system with tests, then TDD makes all the sense in the world. But writing tests for a system that hasn't been built yet is another matter. There are cases where it makes sense, but not as many as one would think.

The big claim for TDD is that "it's design": the tests are effectively the system design, and we then implement that design. The problem is that we can't debug a design. In the past, I worked on a project for a major Japanese company. This company had one of the largest, most detailed sets of annex design books I've seen, and based on these design specifications it built thousands of tests. We were supposed to pass a huge number of tests with our system. Notice that most weren't even automated.

The tests had bugs. There were many competing implementations, yet none of them found the bugs in the tests. Why? They all used the same reference implementation source code, which perpetuated those bugs; some were serious performance bugs that affected every previous release. We were the first team to skip that and do a cleanroom implementation.

But the real problem was the slow progress: the company could not move forward quickly. TDD proponents will be quick to comment that a TDD project is easier to refactor, since the tests guarantee we won't have regressions. But that applies just as well to projects where the tests are written after the fact.

The Worse

TDD focuses heavily on fast unit tests. It's impractical to run slow integration tests, or long-running tests that take all night, inside a TDD loop. So how do you verify scale and integration into a larger system?

In an ideal world, everything clicks into place like Lego bricks. I don't live in such a world; integration tests fail, and they fail badly. These are the worst failures, with the hardest-to-track bugs. I'd much rather have a failure in the unit tests; that's why I have them, and they're easy to fix. But even with perfect coverage, unit tests don't properly exercise the connections between components. We need integration tests, and they find the most terrible bugs.

As a result, TDD over-emphasizes the “nice to have” unit tests over the essential integration tests. Yes, you should have both. But I must have the integration tests, and those don't fit as cleanly into the TDD process.
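
For completeness, the usual workaround is to fence the slow tests off from the fast loop. A minimal sketch using JUnit 5's `@Tag`; the build configuration that filters on the tag is assumed and not shown:

```java
import java.util.List;
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class CheckoutTests {
    // Fast unit test: pure logic, cheap enough for every TDD cycle.
    @Test
    void cartTotalIsSumOfItemPrices() {
        assertEquals(30, List.of(10, 20).stream().mapToInt(Integer::intValue).sum());
    }

    // Slow integration test: tagged so CI can run it nightly instead of
    // on every change. The real-service call is omitted in this sketch.
    @Tag("integration")
    @Test
    void checkoutRoundTripAgainstStagingEnvironment() {
        // would exercise the deployed system end to end here
    }
}
```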

Right Driven Testing

I write tests the way I choose, on a case-by-case basis. If a case lends itself naturally to testing in advance, I'll do that. But in most cases, writing the code first feels more natural to me. Reviewing coverage numbers is very helpful when writing tests, and that's something I do after the fact.

As I mentioned before, integration tests are the ones whose coverage I really scrutinize. I like unit tests and monitor their coverage too, since I want good numbers there as well, but for quality, only the integration tests matter. A PR needs unit tests; I don't care whether they were written before or after the implementation. We should judge the results.

Bad Automation

When Tesla was building up its Model 3 factories, it went into "production hell." A major source of the problems was the attempt to automate everything. The Pareto principle applies perfectly to automation: some things are simply resistant to it, and forcing them makes the entire process much worse.

One point where this really fails is in UI testing. Solutions like Selenium have made huge strides in testing web front ends. Still, the complexity is tremendous and the tests are very fragile. We end up with hard-to-maintain tests, and worse, we find the UI harder to refactor because we don't want to rewrite them.
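
A sketch of why these tests age badly, using the Selenium WebDriver Java API against a hypothetical page; the selector encodes the page structure, so an unrelated markup change breaks the test:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class CheckoutUiTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://shop.example.com/cart");
            // Fragile: tied to the exact DOM layout; adding one wrapper
            // div anywhere in the chain breaks the test.
            driver.findElement(
                By.cssSelector("#main > div:nth-child(3) > div > button")
            ).click();
            // A dedicated id is sturdier, but still couples the test to
            // markup the UI team may legitimately want to change:
            // driver.findElement(By.id("checkout-button")).click();
        } finally {
            driver.quit();
        }
    }
}
```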

We can probably cover 80% of the functionality, but there's a point of diminishing returns for automation. In those environments, TDD is problematic: the functionality is easy, but building the tests first becomes untenable.

Finally

I'm not against TDD, but I don't recommend it, and in practice I don't use it. When it makes sense to start with a test, I might do that, but that isn't really TDD. I judge code by its results. TDD can produce great results, but it often over-emphasizes unit tests. Integration tests are more important for quality in the long run.

Automation is great, until it isn't. There's a point where automating tests makes little sense. It would save us a lot of time and effort to accept that and focus our energy in a productive direction.

This comes from my bias as a Java developer who likes type-safe, strict languages. Languages such as JavaScript and Python can benefit from a larger volume of tests because of their flexibility, so TDD makes more sense in those environments.

In summary, testing is good, but TDD doesn't make tests better. It's an interesting approach if it works for you, and in some cases it's a huge win. But the idea that TDD is essential, or even that it significantly improves the resulting code, doesn't hold up.

Comments (26)

Maxi Contieri

Nice article!

Alvaro

Nice article. Nevertheless, I always come to the conclusion that people forget TDD is a tool for minimalistic code design, which is great for extreme programmers. It's a very good tool for keeping your feet on the ground, a problem that grows as experience increases, because it follows the YAGNI principle: most software functionality never gets extended, and elaborate designs and patterns become nothing but accidental complexity.

Shai Almog

Methodologies are great until people start treating them as a religion, at which point they often become a hindrance. I think there's no "one true way" to implement software correctly.

Jean-Michel 🕵🏻‍♂️ Fayard

I agree with your description of the use cases and non-use cases.

Generally, I think the world would be better off if we saw Test-Driven Development as a tool in our toolbox rather than something you SHOULD do if you want to be a REAL programmer.

That's also true for many other things, as I said in this old article of mine:

42 things you MUST stop obsessing about if you want to become a good $PERSON - DEV Community 👩‍💻👨‍💻

Nicolas Bailly

The thing is, everyone wants a recipe for how to be a good programmer, and it's much more alluring to think "if I do TDD religiously I'll be a good programmer" than "if I get years of experience working on real products with various tools I'll be a good programmer". Plus, when you're in the TDD circle you get that warm fuzzy feeling of being able to talk down to anyone who's not doing TDD.

Jean-Michel 🕵🏻‍♂️ Fayard

True

Mike Talbot ⭐

Very good points. For me, I tend to use TDD when I'm building an API, as it doubles as a way of testing it; e.g., it's the fastest way of building the API. If or when the mocking starts becoming ridiculous or contrived, I stop.

Dave Lapchuk

APIs are also versioned entities where you need to ensure the behaviour of old versions never changes between releases.

Dendi Handian

I still stick to test-after development, because I need to check for breaking changes everywhere when updating the core and its packages. TDD is more of a culture.

Joe Lapp

Thanks for saying this out loud. TDD makes sense when waterfall makes sense, which is when you have clear requirements in advance of development. It doesn't make sense when requirements and solution are evolving, as coding to the tests slows me down and holds me back from finding the right solution. I find TDD helpful once requirements are clear for a module, later in development. Given agile's prevalence, it seems to me that TDD ought to be rare at the start of a project.

Phil Ashby

Interesting - my view of TDD (and its cousin BDD) is that the value is higher for poorly specified systems, as the up-front test design forces early consideration of the specification, and likely a reduction of scope and effort to get something in front of its consumers, where feedback is obtained earlier.

My general view:

  • TDD/BDD or any creating tests first strategy applies an 'outside in' principle to designing a system, by asking 'how do we test that?': from general behavioural traits ('what does it do?') down through architectural decisions ('how does it do that?') to coupling considerations ('what does this API look like?') and internal detail ('how does that module work?'). At each stage the human preference to minimise the number of tests keeps the focus on core things, reducing accidental complexity issues.
  • Post-implementation test strategies apply a waterfall-like 'inside out' principle, where (unit) tests are created for details as they are implemented, then for integrations (coupling), then for behaviour, which all assumes an accurate a-priori specification. If the specification is not accurate (in my experience it never is!), both the implementation and the test-creation work are wasted effort, especially when the last set of tests discovers that the system doesn't do what the consumer(s) really wanted (or they have changed their minds by then).

As ever - it depends 😁 and YMMV.

Joe Lapp

I'm not smart enough to figure everything out before I start coding. Even when I do think I have everything figured out, it turns out I'm often wrong and end up having to revise the API. For simple things, yes, I could do this, but for novel algorithms and architectures it's beyond me. I know, I've tried.

Phil Ashby

This is kind of my point too - in an evolving requirements world, asking 'how do I test it?' before coding up something untestable helps clarify the requirements as they exist now for the behaviour you are interested in. In your earlier comment you say:

"coding to the tests slows me down and holds me back from finding the right solution"

I would ask: how do you know when you arrive at the right solution? You must be testing your work, thus you have designed a test before you commit to a solution... you are thus doing some TDD, but perhaps not starting at the system level and asking: what needs doing first, and how do we know we're getting it right? 😁

Joe Lapp

Yes, I do test, once I've coded up a basic framework, so I know what my APIs are and have some evidence that they're reasonable. But unless I completely understand the interface beforehand, I start with coding to get my basic design. Testing can start once I generally have inputs mapping to outputs. If I'm not there yet, any tests I'd written would likely end up being rewritten.

leob

Well written & articulated!

The thing about TDD is that it forces you to write tests at all - you can't do TDD (obviously) without writing tests. If you don't do TDD, then who or what forces you to write them at all? It might (and will) easily be forgotten; you need discipline.

Nevertheless I agree with you - only use TDD when it makes sense, because in some cases it does, and in other cases it doesn't.

It should not be a religion.

Daniel Barton

I think the only good reason not to use TDD is long-running integration tests. Other than that, I really like programming against the tests - it makes programming really pleasant (especially debugging and refactoring). I personally hate writing tests, but I know I can't live without them, so if I can also use the tests for faster development, I'll do that.
Usually I spend 1-2 hours figuring out how to write the first test for the feature I'm trying to develop (what to mock and how, what the API will look like, etc.). But after that it's pure pleasure :D. Just copy/pasting tests with small modifications and writing/refactoring code without any fear. I usually end up with really good coverage without even focusing on it.
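
(For the "copy/pasting tests with small modifications" part, a parameterized test is the usual shortcut; a minimal JUnit 5 sketch with made-up names:)

```java
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;
import static org.junit.jupiter.api.Assertions.assertEquals;

class SlugTest {
    // One test body, many input/expected pairs: the copy/paste-with-
    // small-modifications pattern without the copy/paste.
    @ParameterizedTest
    @CsvSource({
        "Hello World, hello-world",
        "HELLO, hello",
        "Mixed   Spaces, mixed-spaces"
    })
    void slugifies(String input, String expected) {
        assertEquals(expected, Slug.of(input));
    }
}

class Slug {
    static String of(String s) {
        return s.trim().toLowerCase().replaceAll("\\s+", "-");
    }
}
```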

Valentin Nechayev

All this means you don't use true TDD; you just write tests. Well, that is really useful, unlike true TDD, which is an impractical religion.

Daniel Barton

That's true: what I'm doing is not TDD by definition, but writing code against tests as much as I can. That's where I've found the most value. :)

Valentin Nechayev

TDD is a religion that isn't fully followed in any practically important case. Strictly following it allows creating nothing but trivial code from scratch. Modifying existing code contradicts it. Repeated requirement adjustments and R&D phases contradict it. Testing approaches more advanced than direct lowest-level ("unit") functional testing contradict it. If anybody declares they use TDD for a product larger than a one-screen PoC, go check where the TDD principles are being ignored.

OTOH, TDD is a good tool against lazy, cheating middle managers who tend to postpone any testing in order to declare a feature released as early as possible. An average Bill Lumbergh who releases code without tests then violates not abstract (to him) programming principles but administrative rules, and so spoils his own career. In that sense, requiring TDD in such a company, with proper enforcement at the lowest level, serves a good purpose :)

Jakub Zalas

Thanks for this post. It's good to be challenged from time to time. My experience is entirely different.

"This company had one of the largest, most detailed sets of annex design books I've seen, and based on these design specifications it built thousands of tests. We were supposed to pass a huge number of tests with our system."

It seems like you're basing your opinion on experience with a company that has not done TDD.

"As a result, TDD over-emphasizes the “nice to have” unit tests over the essential integration tests."

In my experience of TDD, unit tests are not nice to have; they're essential. Integration tests are still there, but we don't need a lot of them, and they won't be end-to-end either. That's thanks to the design TDD tends to encourage, based on decoupled components. If I've verified that each component works, and the integration between components, I'll have enough confidence that the system works as expected, without bloated end-to-end integration tests. "Test a chain by testing every link."

"One point where this really fails is in UI testing. Solutions like Selenium have made huge strides in testing web front ends. Still, the complexity is tremendous and the tests are very fragile."

Yes, and that's why people who practice TDD do not use Selenium as part of their workflow, not on a large scale anyway. I'm missing your point here.

"Integration tests are more important for quality in the long run."

How so? My experience is that the internal quality of software is much better served by small, focused micro-tests (both unit and integration). External quality is where acceptance tests shine.

craft Softwr & Music

What about the feedback time of E2E tests, and their poor reliability (flaky tests)?
blog.octo.com/en/the-test-pyramid-...

Mike Ritchie

I also struggle to apply TDD to new greenfield code, but I love it for bug fixes. If there's a reproducible bug yet all my tests are passing, it means I don't have complete coverage in my unit tests.

If I can add new tests that fail while my existing tests pass, there’s a good chance that I have an idea about what the defective code is. And if my subsequent changes to the codebase make the new tests pass, there’s a good chance that I’ve got a valid fix.
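
(A minimal sketch of that flow in JUnit 5, with hypothetical names: the new test reproduces the report, fails against the buggy code, and then guards the fix once it lands.)

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class PriceFormatterRegressionTest {
    // Written while the bug is live: it fails until the fix lands,
    // then stays in the suite to prevent the bug from coming back.
    @Test
    void negativeAmountsKeepTheirSign() {
        assertEquals("-$5.00", PriceFormatter.format(-5.0));
    }
}

class PriceFormatter {
    // The buggy version formatted Math.abs(amount) directly and lost
    // the sign; the fix re-attaches it explicitly.
    static String format(double amount) {
        String sign = amount < 0 ? "-" : "";
        return String.format("%s$%.2f", sign, Math.abs(amount));
    }
}
```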

Ellis

Four points of thought:

  1. I think that, ideally, TDD means you write the tests first and the code later (hence the "driven" in TDD). In practice, people write the tests during or after the coding; I'm not sure that is TDD as such.
  2. Also, ideally, the tests should be written by someone else (a tester, not the developer) whose job is creating tests.
  3. The necessity and value-for-money of TDD for the backend and the frontend are very different, which is often ignored or not understood. The backend and frontend are both software applications, but they are two very different beasts. Things like security, data validation, and testing are very important for the backend and, relatively speaking, not nearly as important for the frontend.
  4. People can also become less inclined to update code if they will have to fix the existing tests too, which are often more difficult to read. That is to say, tests can cause a codebase to age and deteriorate faster.
TheNickest

Interesting post. I like that you seem to think thoroughly about concepts and whether you should use them. However, some of the opinions shared here I do not share.

Unit tests come relatively cheap. That's why they should be written: they assure that code does what it is supposed to do at the lowest possible level, and they run quickly and harmlessly. I've experienced many cases where, in a strongly typed language, colleagues tended to write tests for the code they had produced rather than for what was specified. In the end the code did not do what it should have done, although all the tests were green. So what went wrong? They did not see the tests as a highly supportive measure but as a necessary chore. Bad. Also, code coverage alone gives no hint about code quality, e.g. sticking to DRY, KISS, single responsibility, or whichever principle smarter folks have come up with. That makes it a bad KPI; I can deliver 100% coverage with the crappiest code.

Why I write most of my tests first: they come cheap (in time), I can run them (and therefore my code) endlessly, and I'm forced to really think about my goal: what am I supposed to do with what I get, and what shall I return?

Claiming that TDD does not improve your code is just as senseless as simply stating that it does. It depends highly on what you are doing with it.

Wiktor Wandachowicz

"If we have a pre-existing system with tests, then TDD makes all the sense in the world. But testing a system that wasn’t built yet. There are some cases where it makes sense, but not as often as one would think."

For me that's a contradiction. How, then, is a new system (one that wasn't built yet) supposed to have tests, if we don't create tests as we go, or maybe, sometimes, apply the TDD approach?

My opinion on this is to use a mix of all the necessary things. Write unit tests for the code written during the workday. Write unit tests for poorly specified parts before coding, hence use TDD. Use the help of testers, or just teammates, to check new (and existing) software artifacts. Deploy often, fix quickly. Listen to your customers, adapt, be agile.