Until recently I was on an extreme programming team at the Humana DEC. Every workday we practiced test driven development (TDD). After 100 days, I want to point out some differences between TDD in theory and TDD in practice.
So, with respect to the Three Laws of TDD, here are my caveats:
- You don't need to test everything
- You can write more than one failure at a time
- You don't need to practice TDD at the nano cycle
- You should delay design decisions until the blue phase
- You should refactor tests too
Before you punch your screen allow me to elaborate.
You don't need to test everything
In theory, and according to the first law of TDD:
You can't write any code until you have first written a failing test.
In practice, I rarely write tests for content, design, configuration, etc. I write tests for any code that contains logic.
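As a minimal sketch of the distinction (names are hypothetical, pytest-style): branching logic like this earns a test, while a static config value would not.

```python
def discount(price: float, is_member: bool) -> float:
    """Branching logic -- exactly the kind of code worth testing."""
    return price * 0.9 if is_member else price


def test_member_gets_discount():
    assert discount(100.0, is_member=True) == 90.0


def test_non_member_pays_full_price():
    assert discount(100.0, is_member=False) == 100.0
```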
You can write more than one failure at a time
In theory, and according to the second law of TDD:
You can't write more of a test than is sufficient to fail.
In practice, I often write a few failures at a time. However, these are typically within the same test and always at the same level. That is, a few unit test failures or a few integration test failures. Then I make them pass one by one.
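For example (a hypothetical sketch, pytest-style): two unit-level failures written together, then made green one at a time.

```python
# Red: both tests fail together -- same level (unit), same behavior.
def test_slugify_lowercases():
    assert slugify("Hello") == "hello"


def test_slugify_replaces_spaces():
    assert slugify("Hello World") == "hello-world"


# Green, pass one: `return text.lower()` satisfies the first test.
# Green, pass two: handle spaces, satisfying the second test.
def slugify(text: str) -> str:
    return text.lower().replace(" ", "-")
```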
You don't need to practice TDD at the nano cycle
In theory, and according to the third law of TDD:
You can't write more code than is sufficient to pass the currently failing test.
In practice, I follow TDD Laws #2 and #3 when working with a new codebase or new technology. Once I am familiar, I write the failing test and the code to pass it in one cycle. I see no need to repeat the red-green cycle at the minimal pace [1].
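To illustrate the difference in pace (a hypothetical sketch):

```python
def test_add():
    assert add(2, 3) == 5


# At the nano cycle you might first go green with `return 5`,
# then write another failing test to force the general solution.
# Once familiar with the codebase, the test above and this
# implementation can be written in a single red-green pass:
def add(a: int, b: int) -> int:
    return a + b
```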
You should delay design decisions until the blue phase
In theory, as noted in the third law of TDD, the green phase is about writing minimal code to make the test pass.
In practice, many people refactor during the green phase (or earlier). This is too early. To avoid refactoring during the green phase, I call YAGNI on nearly everything. Delay design decisions until the blue phase. By then you'll have a better understanding of the code and tests to guide your refactor.
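A sketch of what this can look like (hypothetical example): keep the green phase dumb, then let the accumulated tests guide a table-driven refactor in the blue phase.

```python
# Green phase: the simplest branching that passes the tests.
def shipping_cost(weight_kg: float) -> float:
    if weight_kg <= 1:
        return 5.0
    if weight_kg <= 5:
        return 8.0
    return 12.0


# Blue phase: only now, with tests in place, refactor to a
# table-driven lookup -- a design decision made with hindsight.
RATES = [(1, 5.0), (5, 8.0), (float("inf"), 12.0)]


def shipping_cost_refactored(weight_kg: float) -> float:
    return next(rate for limit, rate in RATES if weight_kg <= limit)
```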
You should refactor tests too
In theory, all code should be refactored.
In practice, tests are rarely refactored. Tests are code too and should be refactored during the blue phase. Furthermore, when practicing TDD, tests serve as documentation. It is therefore equally, if not more, important to ensure the test code communicates clearly.
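For instance (a hypothetical sketch), extracting a test-data builder during the blue phase makes the assertion, not the setup, the focus:

```python
from dataclasses import dataclass


@dataclass
class User:
    name: str
    role: str
    active: bool

    def can(self, action: str) -> bool:
        # Simplified permission check: admins can do everything.
        return self.role == "admin"


# Before refactoring: setup noise hides what the test asserts.
def test_admin_can_delete():
    user = User(name="pat", role="admin", active=True)
    assert user.can("delete")


# After refactoring: a builder keeps the test reading like documentation.
def make_user(role: str = "member", **overrides) -> User:
    defaults = dict(name="pat", role=role, active=True)
    defaults.update(overrides)
    return User(**defaults)


def test_admin_can_delete_clearly():
    assert make_user(role="admin").can("delete")
```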
[1] While writing this post, I found a post by Uncle Bob in which he discusses the different TDD cycles. Much of the theory above operates on the nano cycle. What I have described in practice combines mostly the minute and later cycles.
Want more? Follow @gonedark on Twitter to get weekly coding tips, resourceful retweets, and other randomness.
Top comments (16)
The original detailed TDD description in Kent Beck's Test-Driven Development: By Example mentions changing test granularity depending on general confidence/momentum and calls it "switching gears". Think of it like changing gears on a bike; if you hit a tough hill you can always work in smaller pieces until you gain momentum and then work in larger pieces.
I generally recommend people start with that book rather than most of the confusing descriptions that came after.
Great post!
Usually, when I am writing new code I try to define an interface before I write tests for it. I find that the tests start to fall out of it logically after the interface has been well-defined. I usually start with the question, "How do I want to invoke this task?" My metric is usually asking if a brand new developer with no knowledge of the project would be able to understand what is happening from the interface alone. Basically, ensuring clean code.
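A minimal sketch of that interface-first flow (names are hypothetical, using typing.Protocol):

```python
from typing import Protocol


class ReportStore(Protocol):
    """Interface defined first, answering "How do I want to invoke this?"."""

    def save(self, report_id: int, body: str) -> None: ...
    def load(self, report_id: int) -> str: ...


# With the interface settled, the tests fall out of it naturally.
class InMemoryReportStore:
    def __init__(self) -> None:
        self._reports: dict[int, str] = {}

    def save(self, report_id: int, body: str) -> None:
        self._reports[report_id] = body

    def load(self, report_id: int) -> str:
        return self._reports[report_id]


def test_round_trip():
    store: ReportStore = InMemoryReportStore()
    store.save(1, "q3 numbers")
    assert store.load(1) == "q3 numbers"
```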
This is something I see devs doing commonly. Especially when they try to make the problem fit into a design pattern initially, rather than refactoring to the pattern. Don't start with any design. Refactor to the design when you have enough information to make valuable decisions about the design, given the domain and the context.
I write code as clean as possible to make it self-documenting. I view tests in the same way. How do you use `Object`? Take a look at `TestObject`. It's that simple, so long as you keep your tests refactored. And you are right, they are also code. And if we use tests like documentation, then we need to treat them as such. The only thing worse than no documentation is documentation that's wrong. In the same way, a bad test is worse than no test at all. And an outdated test is a bad test.
"You don't need to practice TDD at the nano cycle" - that makes sense, but in my experience, it's always good practice to at least take the opposite direction once: do red-green in one cycle, but make your test fail intentionally after that. I've caught many bugs because I assumed my test was right when it wasn't, and the green (without a red) was just a false positive.
TL;DR: I always make my tests fail to prove I am doing the right thing. No matter how obvious it all may seem, I am usually surprised by how often I am wrong. Thanks for sharing your experience!
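A sketch of that sanity check (hypothetical example):

```python
def total(items: list[int]) -> int:
    return sum(items)


def test_total():
    assert total([1, 2, 3]) == 6


# After going green in one cycle, temporarily sabotage the code
# (e.g., change `sum(items)` to `0`), rerun, and confirm the test
# actually goes red before restoring the real implementation.
```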
“Never trust a test you haven’t personally seen fail.”
Best advice I’ve ever gotten. I don’t get bitten by false positives anymore.
Strongly disagree with #4. It's incredibly time-consuming to write code that will be refactored away in mere minutes when I already know it needs an abstraction or fits nicely with a particular design pattern. The notion that you haven't yet cleared some invisible barrier doesn't justify claiming YAGNI applies.
It's test driven design, not test first design. Write as little code as you can justify. I don't need to wait until I do something the second of five times to create the abstraction.
This is one of those dogmatic practices that makes TDD seem like a pipe dream to many.
This is one of those things that's impossible to test, and therefore impossible to prove either side of the argument. From my perspective, the long-term savings of not assuming a particular design pattern, then having to rip it out or maintain it, are worth the short-term loss of rewriting a few lines of code.
Nevertheless, I do agree that YAGNI is not about intentionally handicapping your ability to write code.
I feel that the approach should be to consider how you need your tests to provide feedback. If I have a pattern or abstraction in mind ahead of time, it's not because I've never used it before. By writing tests assuming the abstraction, I can get feedback about it faster.
When I say that the rule seems dogmatic, it feels like the "simple rules" that we talk about driving TDD are far too simplified. They lead people to think that you have to waste time writing the simplest code possible, then once you get to refactoring you have free rein to do whatever you need. It seems to me that everyone is better off if you take a more pragmatic approach to where you're willing to start the process.
As long as your tests help you decide if your implementation is appropriate, it shouldn't matter if the idea of the abstraction came first.
Great piece. I agree with some of these conclusions. I think most TDD'ers would too.
In reality, these laws are usually bent to a certain degree. But there are rules for a reason.
The problem might arise when they're bent too much or too frequently.
From my experience, nano cycles are also necessary, and skipping them too often might lead to bugs that are not so easily discovered during or after the blue phase.
Great summary on how to take TDD just seriously enough for it to be very useful without being too costly.
I've been using TDD "lite edition" for about ten years or so - I write just enough tests for the hard bits where I either don't know what I'm doing and/or am not sure I did it right (which is often).
I started out worrying about code coverage, but that seems like it adds more stress than is useful.
You should use TDD when your framework allows it (for configuration and design).
TDD is really about your feedback loop; otherwise you may need to deploy your app just to test a small change to the config.
I have to agree on most of this. I go through a storming phase where I like to get my code and concepts down before being able to identify the aspects which can be refactored into classes, helpers, variables etc.
There's maybe something in the type of brain I have that I need to see the rough outline of my functions before being able to have clarity on whether the direction feels right to go with. This phase then enables me to pull out the areas that need testing focus; complex logic and core routines get covered the most. I'll write my failing tests and rebuild my functions to pass. I know this way that the logic is proven as far as possible, but at a design level it also has a clear purpose in my overall architecture.
I find blindly writing my tests without the drafts means I end up stuck in test refactoring.
I agree with most of your points. The only thing that concerns me is nano cycles. The smaller the cycle, the fewer mistakes you make.
That's from me. Thanks for sharing.
Hi Jason!
What is the blue phase you're talking about?
I've heard about the green and red phases, but not about the blue one...
It's the color for the "refactor" phase. It's not used often; maybe just in one of Uncle Bob's videos.
Thanks!
Which unit testing/mocking frameworks do you use?