I got an interesting question during the TDD Dojo I held two days ago (I love those challenging questions). A student asked: how can you tell whether someone delivered a solution using TDD? Is there a way to know? My answer was that there is a certain 'smell' in the repo when the code was written first.
That 'smell' can be detected by examining the tests. Engineers who write code first tend to subsequently write tests that are coupled to the structure of the shipping code. They also tend to write larger tests; it's rare to see microtests in code-first repos.
Because of that tendency to write larger tests tightly coupled to the structure of already-implemented code, code-first repos tend to end up with a larger percentage of surviving mutants under mutation testing. (A mutant survives when no test fails after the tool alters the shipping code; tests that assert structure rather than produced values rarely notice such alterations.) Doing TDD properly results in code that is loosely coupled.
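To make the point concrete, here is a minimal sketch. The names (`Pricing`, `apply_discount`, `_rate`) are invented for illustration; they stand in for any shipping code a mutation-testing tool might alter.

```python
# Hypothetical example: why structure-coupled tests let mutants survive.

class Pricing:
    _rate = 0.1  # internal detail of the shipping code

    def apply_discount(self, price):
        # A typical mutant would flip '-' to '+' here.
        return price - price * self._rate

p = Pricing()

# Behavior-focused microtest: it pins down the produced value,
# so the '-' -> '+' mutant fails it and is killed.
assert p.apply_discount(100) == 90

# Structure-coupled test (code-first style): it inspects internals
# instead of behavior. The same mutant would sail through untouched.
assert hasattr(p, "_rate") and p._rate == 0.1
```

The second assertion passes regardless of what `apply_discount` actually computes, which is exactly how a mutant survives.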
When I look at repos produced by teams that follow TDD, the first thing I notice is the simplicity of the tests. You can see that each test took no more than a minute or two to write, and that the tests are not interested in the structure of the code they exercise.
The TDD discipline results in simple, single-minded tests that care only about the values produced by the shipping code under test. Each test talks only to the interface/API, never to a concrete method or internal detail.
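A quick sketch of what "talking only to the interface" looks like, using a hypothetical `Stack` as the code under test:

```python
# Hypothetical example: the test exercises only the public API.

class Stack:
    def __init__(self):
        self._items = []  # concrete detail the tests never touch

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

# The test cares only about the values the API produces. Swapping
# the list for a linked list (or anything else) would not disturb it.
s = Stack()
s.push(1)
s.push(2)
assert s.pop() == 2
assert s.pop() == 1
```

Because nothing in the test mentions `_items`, the implementation is free to change completely.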
That's why tests produced with TDD are not brittle and don't impede development or refactoring. Ultimately, the reason we use TDD is to enable us to embrace change: we should be able to completely gut our system and experiment with the implementation without disturbing our tests.
My answer to the student was: "At the end of the day it doesn't really matter how you get there, and if you can deliver a decoupled system that is easy and risk-free to change without doing TDD, more power to you. But I have yet to see an example of that."