In the mocks vs. no-mocks debate, I’ve seen nuance ignored entirely, which I think causes confusion; it certainly confuses me. Examples include “you don’t need mocks, only stubs” or “test behavior, not implementation”.
In something like Elm or Haskell, the claim makes sense: you simply do not need mocks because the code has no side-effects, so the only inputs you need are fixtures. Types aside, unit testing the things types can’t easily verify is far more enjoyable and straightforward. The tradeoff is that you need some form of acceptance tests to actually run the code and verify the side-effects are happening. For Elm, that’d be elm-test for unit tests, and Cypress/Playwright for acceptance tests.
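To make the fixture point concrete, here’s a minimal sketch in TypeScript (the function name and logic are hypothetical, not from any library): a pure function needs nothing but an input and an assertion on its return value.

```typescript
// A pure function: the output depends only on the input, no side-effects.
function formatPrice(cents: number): string {
  return `$${(cents / 100).toFixed(2)}`;
}

// The "fixture" is just the input value; no mocks, no setup, no teardown.
console.assert(formatPrice(1999) === "$19.99");
console.assert(formatPrice(500) === "$5.00");
```

Everything the test needs is on the left side of the call; nothing else happens, so there is nothing else to verify.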
At the opposite extreme, in TypeScript and Angular using OOP classes, or even in imperative languages like Go, it’s normalized to orchestrate side-effects inside methods/functions. Tests for this code often ignore return values. Instead, you set up mocks for the side-effects, run the code, and assert the mocks were invoked with the correct data. The tradeoff is that these unit tests require more code to set up and assert against mocks, and they’re more likely to break when you refactor the method/function, since the abstraction just operates on the side-effects. This negates one of the values of unit tests: letting you change the code without breaking the tests, because they test the behavior.
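Here’s a sketch of that style, with hand-rolled recording in place of a framework mock so it stays self-contained (all names are hypothetical). Note how the assertion is about how the dependency was called, not about a return value:

```typescript
// A dependency that performs a side-effect.
interface UserRepo {
  save(name: string): void;
}

// A method that only orchestrates the side-effect; it returns nothing useful.
class UserService {
  constructor(private repo: UserRepo) {}
  register(name: string): void {
    this.repo.save(name.trim().toLowerCase());
  }
}

// Hand-rolled mock that records its calls (jest.fn() etc. would do this for you).
const calls: string[] = [];
const mockRepo: UserRepo = { save: (name) => calls.push(name) };

new UserService(mockRepo).register("  Alice ");

// The test asserts the mock was invoked with the correct data.
console.assert(calls.length === 1 && calls[0] === "alice");
```

If `register` is refactored to normalize the name differently, or to call a different repo method, this test breaks even though the observable behavior may be the same: the test is coupled to the interaction, not to a result.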
Functional programmers will say things like “test the code, not the mocks”. They can only say that, though, because in their context they don’t need mocks. When they’re forced into a non-FP language, or even one that allows side-effects (ReScript, Elixir, etc.), they seem to fall into one of two camps: Camp 1 writes two different unit tests, one for the return value and another for the side-effects using mocks. Camp 2 only tests return values, using stubs instead of mocks, and then uses acceptance tests for the side-effects.
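Camp 2’s approach can be sketched like this (a hypothetical example, not any particular library’s API): the dependency is a stub that returns canned data, and the test asserts only on the function’s return value, never on how the stub was called.

```typescript
// A dependency the code reads from (a side-effect in the general sense).
interface Clock {
  now(): Date;
}

// The function under test computes a return value from the dependency.
function greeting(clock: Clock, name: string): string {
  const hour = clock.now().getHours();
  return hour < 12 ? `Good morning, ${name}` : `Good afternoon, ${name}`;
}

// A stub: supplies canned data. Unlike a mock, we never assert it was invoked.
const morningClock: Clock = { now: () => new Date(2024, 0, 1, 9, 0, 0) };

// The assertion is on the return value only.
console.assert(greeting(morningClock, "Sam") === "Good morning, Sam");
```

The write-side effects (saving, sending, logging) get no unit test in this camp; they’re left to acceptance tests, which is exactly the tradeoff under discussion.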
So when FP devs say “you don’t need mocks” but they’re not in a language like Elm or Haskell, I question how they know the code actually works. Are they assuming two different test types? Are they assuming acceptance tests? Do they assume their types are good enough?
Conversely, OOP devs will say things like “test behavior, not implementation details”, but the language they’re in doesn’t enforce or even enable that the way Elm/Haskell do, so I question how they pull it off. You can also tell which of the articles citing that quote were written by FP devs moving to OOP codebases: they say things like “test the return value” but then gloss over or ignore the points I’ve raised above. Are they implementing something like Functional Core, Imperative Shell? Are they architecture-astronauting into something like Hexagonal/Onion architecture and claiming it just handles those concerns? Or do they simply accept that this is how testing works in languages that allow side-effects intermingled with code?
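For readers who haven’t met the first option: a minimal Functional Core, Imperative Shell sketch (hypothetical names throughout). The decision logic is a pure function that returns a value describing what should happen, and a thin shell performs the actual side-effect; the core is unit-tested with fixtures, while the shell is left to acceptance tests.

```typescript
// Pure core: decides WHAT to do and returns a value describing it.
type Action =
  | { kind: "sendReminder"; email: string }
  | { kind: "none" };

function decideReminder(daysOverdue: number, email: string): Action {
  return daysOverdue > 7 ? { kind: "sendReminder", email } : { kind: "none" };
}

// Imperative shell: interprets the decision and performs the side-effect.
// It's kept too thin to be worth mocking; acceptance tests cover it.
function runReminder(
  daysOverdue: number,
  email: string,
  send: (to: string) => void
): void {
  const action = decideReminder(daysOverdue, email);
  if (action.kind === "sendReminder") send(action.email);
}

// Unit tests hit only the pure core: fixtures in, return value out, no mocks.
console.assert(decideReminder(10, "a@b.com").kind === "sendReminder");
console.assert(decideReminder(3, "a@b.com").kind === "none");
```

Whether this actually answers the objection, or just relocates the untested side-effects into the shell, is exactly the question the paragraph above is asking.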