I deal with brittle unit tests that I have to fix every time I change production code. I don't just deal with them, I write them as fast as I write production code.
To me this is far better than not having or writing unit tests. I still get many benefits, which include that writing unit tests guides towards writing cleaner, simpler code.
But there's another level I haven't gotten to yet, and I want to get there. Robert Martin's blog post Test Contra-variance provides the outline of something better, but I have no idea what the reality looks like.
Most of the time I create test classes that directly correspond to the classes they test. Breaking away from that seems like a first step, even if it's not technically any different, just because it breaks the habit. Instead of writing a FooTests class to go with my Foo, I write a FooDoesX class with tests for that particular behavior.
That's surface-level, though. Under the hood it's essentially the same old test class. I can make it a little less brittle by reusing some setup: minimize code duplication within tests, while still keeping each test's "arrange" explicit so that it's clear what behavior is being tested.
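For instance, here's a rough sketch of what I mean, with made-up names (Order, OrderDoesDiscounting, make_order are all hypothetical) and plain asserts rather than any particular test framework:

```python
# Hypothetical example: a behavior-named test class instead of OrderTests.
# A shared helper removes duplication, but each test still states its
# own "arrange" explicitly so the behavior under test stays visible.

class Order:
    def __init__(self, items):
        self.items = list(items)  # (name, price) pairs

    def total(self, discount=0.0):
        subtotal = sum(price for _, price in self.items)
        return subtotal * (1 - discount)


def make_order(*prices):
    # Shared setup: builds an order from bare prices.
    return Order((f"item{i}", p) for i, p in enumerate(prices))


class OrderDoesDiscounting:
    def applies_percentage_discount(self):
        order = make_order(100.0)          # explicit arrange
        assert order.total(discount=0.10) == 90.0

    def charges_full_price_without_discount(self):
        order = make_order(40.0, 60.0)     # explicit arrange
        assert order.total() == 100.0


tests = OrderDoesDiscounting()
tests.applies_percentage_discount()
tests.charges_full_price_without_discount()
```

The setup helper keeps the duplication down, but each test still shows its own inputs, so a failing test name plus its arrange tells you which behavior broke.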
Can anyone provide an example, perhaps in a public repository, of test contravariance? I'm not looking for a general discussion of unit tests vs. integration tests or testing a single class vs. testing a class with its collaborators, although there's likely significant overlap. I'm trying to see if someone who understands exactly what Uncle Bob is talking about can elaborate or provide an example.
Top comments (10)
What I understand is that you should design your tests to be another client for your code... so if in your production code you have an explicit API to use for your client... you should also use that API for your tests...
I prepared a small example with the TicTacToe game. You can find it on gist.github.com/bhserna/882d5101bd......
In particular, look at how I use the internal class Game::Board, but I don't have a test for that class, because its behavior is already covered by the tests for the game.
This is a small example, but I hope you can find it useful =)
I appreciate the example, but could you just link to the source code instead of posting it? It's about seven pages of scrolling and I'm concerned that it might stifle additional responses.
Done! I just removed the code from the post... It would be nice to see an example from you too =)
Thanks! I don't have one. I'm going to go back and read Robert Martin's post all over again.
I don’t think Uncle Bob’s article explains it all that well. I agree on many points, but I have some contentions:
1) he writes 'must', as in test code must be decoupled from implementation code. To me this over-stresses the point. It would have been better to say that following DRY/SOLID will naturally decouple your test structure from your implementation.
2) he posits that tests are fragile if a small change to implementation code leads to a large change in test code. Contra-variance itself does not necessarily prevent this; the issue is not that contra-variance was violated, but that changing implementation code exposed fragile tests. Surely the logic of TDD is that fragile tests reveal themselves when it becomes hard to change the description of the system's intended behaviour via tests. Unwieldy tests should uncover refactoring opportunities.
3) what I think Uncle Bob is really getting at is eliminating test duplication. He stresses that as the system behaviour changes and new tests are added, it is redundant to automatically add tests for refactored code when the existing tests already cover all the intended functionality. I would stress, though, that the point of TDD is that it should increase your confidence to make a change to a system; if your tests are fragile, your confidence will be lower.

My model would be to have functional tests and/or integration tests and/or user acceptance tests which exercise the main functionality of the system, and then, yes, have tests aligned to the various layers/modules of the code according to its structure. Where tests are aligned to the code structure, though, they should be independent of each other and utterly expendable, i.e. they can be deleted when the corresponding implementation code is no longer necessary. If a team member comes up with a new way to do something, it should be possible to make it a drop-in replacement; if that's not possible, the code should be refactored accordingly. If you think of your code as a system of sub-systems, then each sub-system should have its own test suite, as if it could be extracted into its own module.
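To sketch that drop-in-replacement idea (the Stack names and the single parameterized suite are hypothetical, in Python for brevity):

```python
# Hypothetical sketch: the test suite depends only on the contract
# (push/pop, LIFO order), so any implementation honoring it drops in.

class ListStack:
    def __init__(self):
        self._items = []

    def push(self, x):
        self._items.append(x)

    def pop(self):
        return self._items.pop()


class LinkedStack:
    # A teammate's rewrite: same contract, different structure.
    def __init__(self):
        self._head = None

    def push(self, x):
        self._head = (x, self._head)

    def pop(self):
        x, self._head = self._head
        return x


def stack_honors_lifo(make_stack):
    # One suite, parameterized by the implementation under test.
    s = make_stack()
    s.push(1)
    s.push(2)
    assert s.pop() == 2
    assert s.pop() == 1


for impl in (ListStack, LinkedStack):
    stack_honors_lifo(impl)
```

When a suite reads like this, deleting ListStack means deleting nothing from the tests except the entry in that loop.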
Don’t worry about test contra-variance itself; rather, make sure your code is readable and that your tests are a tool to increase confidence in deployment. Test contra-variance can be an important tool for improving that confidence.
Maybe this can answer your questions: github.com/FagnerMartinsBrack/nsa-.... See the test files.
Unfortunately that link is 404
Oh shit, I ended up deleting that project from GitHub in my last repository cleanup. I'm not sure if I have it on a different PC to recover...
EDIT: Yeah I don't, that code is lost forever unless someone reading this comment has a copy and is willing to share...
You might check this article here: medium.com/@tacsiazuma/the-lost-be...
It focuses on refactoring, but test contra-variance is also applied to achieve it.
Thanks. That does touch on it. What I'd really like, though, is to be beaten over the head with it, without a trace of subtlety, and without diluting it by mixing in any other concepts.
When I was getting started (and even now) a big problem was that an article would explain something but make the context so complex that it was difficult to isolate the part I wanted so that I could apply it in another context.
Over time I've gotten better at seeing through that, but I'm still just as likely to just skip to the next article.