I deal with brittle unit tests that I have to fix every time I change production code. I don't just deal with them; I write them as fast as I write production code.
To me this is still far better than having no unit tests at all. I get many benefits, including that writing unit tests guides me toward cleaner, simpler code.
But there's another level I haven't reached yet, and I want to get there. Robert Martin's blog post Test Contra-variance outlines something better, but I have no idea what it looks like in practice.
Most of the time I create test classes that directly correspond to the classes they test. Breaking away from that seems like a first step, even if it's not technically any different, just because it breaks the habit: instead of writing a `FooTests` class to go with my `Foo`, I write a `FooDoesX` class with tests for that particular behavior.
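To make that concrete, here's roughly the shape I have in mind, sketched in Java with JUnit 5 (the domain, `Foo`, and the behavior name are all made up for illustration):

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.util.ArrayList;
import java.util.List;

import org.junit.jupiter.api.Test;

// Stand-in production class so the sketch compiles; in reality Foo is
// whatever class the behavior happens to live in.
class Foo {
    private final List<String> items = new ArrayList<>();

    void add(String item) { items.add(item); }

    boolean contains(String item) { return items.contains(item); }
}

// Named after a behavior rather than mirroring Foo one-to-one.
class FooRemembersWhatWasAdded {

    @Test
    void anAddedItemCanBeFoundLater() {
        Foo foo = new Foo();

        foo.add("widget");

        assertTrue(foo.contains("widget"));
    }
}
```

The test class is named for what the code does, not for which production class it exercises, so renaming or splitting `Foo` doesn't automatically invalidate the test.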
That's surface-level, though; under the hood it's essentially the same old test class. I can make it a little less brittle by reusing some setup, minimizing duplication across tests while keeping each test's "arrange" step explicit, so it's clear what behavior each test verifies.
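For example, one thing I've tried is factoring the shared plumbing into named helper methods rather than a hidden `@BeforeEach`, so each test still states the state it depends on (everything here is hypothetical):

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class AccountChargesOverdraftFees {

    @Test
    void withdrawingPastZeroIncursAFee() {
        Account account = accountWithBalance(100); // arrange stays visible

        account.withdraw(150);

        assertTrue(account.fees() > 0);
    }

    // Shared setup lives in a named helper instead of a @BeforeEach,
    // so every test spells out the preconditions it relies on.
    private Account accountWithBalance(int balance) {
        Account account = new Account();
        account.deposit(balance);
        return account;
    }
}

// Minimal stand-in so the sketch compiles; purely illustrative.
class Account {
    private int balance;
    private int fees;

    void deposit(int amount) { balance += amount; }

    void withdraw(int amount) {
        if (amount > balance) fees += 25; // flat fee, just for the example
        balance -= amount;
    }

    int fees() { return fees; }
}
```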
Can anyone provide an example of test contravariance, perhaps in a public repository? I'm not looking for a general discussion of unit tests vs. integration tests, or of testing a single class vs. testing a class with its collaborators, although there's likely significant overlap. I'm trying to see if someone who understands exactly what Uncle Bob means can elaborate or point to a concrete example.