Aman Agrawal for Coolblue

Originally published at amanagrawal.blog

Some Heuristics for Mocking vs State Verification in TDD

In this post I will share some of the TDD heuristics that I have found useful for deciding when you should use mocking to verify interactions and when you should resort to state verification. I am sure these will evolve over time and I might change my mind about a couple of them, but for now they make sense to me. These heuristics are a blend of inspiration from the resources mentioned at the end of the post, discussions with peers and colleagues, and my own experience of writing tests. Hopefully others will find them useful too.

I will be using the Moq and FluentAssertions libraries for C#, for creating test doubles and making assertions respectively. The framework for creating a test double is not important for this distinction; how and why one is being used is. There are different kinds of test doubles that you might need, depending on what you want to test for.

This post focuses on the unit testing section of the testing pyramid; integration tests (or any other kind of higher-level tests) are therefore out of the scope of this post:

[Figure: The Testing Pyramid]

So what are these two different kinds of unit tests?

State Verification: executes the behaviour of the system under test (SUT) and asserts on the resulting state. In the example below, I am using a stub that my SUT is going to use to “store” the updated domain entity. I will then pull that entity back out and examine its properties.
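
A minimal sketch of the idea, with a hand-rolled in-memory stub; all the type names here are illustrative:

```csharp
using System.Collections.Generic;
using FluentAssertions;
using Xunit;

public class Customer
{
    public Customer(int id, string name) { Id = id; Name = name; }
    public int Id { get; }
    public string Name { get; private set; }
    public void Rename(string newName) => Name = newName;
}

public interface ICustomerRepository
{
    Customer GetById(int id);
    void Save(Customer customer);
}

// Hand-rolled stub: "stores" entities in a dictionary instead of a database
public class InMemoryCustomerRepository : ICustomerRepository
{
    private readonly Dictionary<int, Customer> _store = new Dictionary<int, Customer>();
    public Customer GetById(int id) => _store[id];
    public void Save(Customer customer) => _store[customer.Id] = customer;
}

public class RenameCustomerHandler
{
    private readonly ICustomerRepository _repository;
    public RenameCustomerHandler(ICustomerRepository repository) => _repository = repository;

    public void Handle(int customerId, string newName)
    {
        var customer = _repository.GetById(customerId);
        customer.Rename(newName);
        _repository.Save(customer);
    }
}

public class RenameCustomerTests
{
    [Fact]
    public void Renaming_a_customer_updates_the_stored_entity()
    {
        var repository = new InMemoryCustomerRepository();
        repository.Save(new Customer(1, "Old Name"));
        var sut = new RenameCustomerHandler(repository);

        sut.Handle(1, "New Name");

        // State verification: pull the entity back out and examine its properties
        repository.GetById(1).Name.Should().Be("New Name");
    }
}
```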

Interaction Verification: executes the behaviour of the system under test and only asserts on whether or not its collaborators were invoked correctly. For example:
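
A minimal sketch of such a test; the controller, interfaces and user details are illustrative:

```csharp
using System.Threading.Tasks;
using Moq;
using Xunit;

public class User
{
    public User(string email) => Email = email;
    public string Email { get; }
}

public interface IMembershipRepository
{
    Task<User> FindByEmail(string email);
}

public interface ITokenGenerator
{
    string GenerateFor(User user);
}

public class MembershipController
{
    private readonly IMembershipRepository _members;
    private readonly ITokenGenerator _tokens;

    public MembershipController(IMembershipRepository members, ITokenGenerator tokens)
    {
        _members = members;
        _tokens = tokens;
    }

    public async Task<string> Login(string email)
    {
        var user = await _members.FindByEmail(email);
        // Only registered (i.e. found) users get a token
        return user != null ? _tokens.GenerateFor(user) : null;
    }
}

public class MembershipControllerTests
{
    [Fact]
    public async Task Generates_a_token_for_a_registered_user()
    {
        // Stub: pretend the user is registered by returning a non-null instance
        var members = new Mock<IMembershipRepository>();
        members.Setup(m => m.FindByEmail("jane@example.com"))
               .ReturnsAsync(new User("jane@example.com"));

        // Mock: the collaborator whose invocation we verify
        var tokens = new Mock<ITokenGenerator>();

        var sut = new MembershipController(members.Object, tokens.Object);
        await sut.Login("jane@example.com");

        // Interaction verification: was the appropriate method called?
        tokens.Verify(t => t.GenerateFor(It.IsAny<User>()), Times.Once);
    }
}
```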

Here I am using a mock for the token generator to simply verify that the appropriate method on it was called if the user was registered. I’ve set up the stub for the membership repository to return a non-null instance of the registered user (i.e. pretend that the user is registered) so I can test that the token is generated before the controller returns.

I wouldn’t consider calls to databases for CRUD operations as collaborations per se because getting stuff in and out of databases is not domain behaviour. And since most use cases will do some form of database read/writes, verifying these interactions in every test will just add a lot of noise and will make tests hard to read and understand. Therefore there is no point in verifying these with interaction tests. You might want to write integration tests for these.

I view collaborations as usually being with other systems in the organisation, built around different bounded contexts, which your system needs to integrate with to carry out a domain behaviour: for example, informing the marketing department when a new customer registers on the portal. Tests to verify these interactions are useful.

Have both state and interaction verification tests, but prefer state verification over interaction verification using mocks. State verification provides a higher degree of confidence that the system did what it was supposed to, by observing the final state of the system. Contrast this with interaction tests, which only verify that the underlying collaborator or module was invoked, giving no indication of whether the invoked methods actually do what they are supposed to do.

Having just one kind of test could force you to cram too much into one test: for example, making sure that the entity is updated AND it’s saved in the database AND the message is published AND an e-mail is sent AND…

For stateless algorithms and calculations with fixed upper and lower bounds on inputs and outputs, use Property Based Testing; vanilla value assertions will often do just as well. There should be no need to mock at this level!

For example: calculate simple interest given principal, rate of interest and time.
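
A minimal sketch with a vanilla value assertion (the calculator and figures are illustrative); a property-based version, e.g. with FsCheck, could instead assert invariants such as “interest is zero when the time is zero”:

```csharp
using FluentAssertions;
using Xunit;

public static class InterestCalculator
{
    // Simple interest = principal * rate% * time / 100
    public static decimal SimpleInterest(decimal principal, decimal ratePercent, int years)
        => principal * ratePercent * years / 100m;
}

public class SimpleInterestTests
{
    [Fact]
    public void Calculates_simple_interest_from_principal_rate_and_time()
    {
        // Plain value assertion, no test double in sight
        InterestCalculator.SimpleInterest(principal: 1000m, ratePercent: 5m, years: 2)
                          .Should().Be(100m);
    }
}
```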

For more complex calculations/use cases where you need collaboration from external sources, consider stubbing the collaborator and having it return whatever value you need for the test at hand. This will help scope the test properly and reduce the set-up noise that would result from wiring up real dependencies for the test.

For example: testing the calculation of the current value of a foreign exchange investment portfolio using the current forex rate. Consider the example test below:
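
A sketch of what that test might look like; the valuator and rate retriever are illustrative:

```csharp
using FluentAssertions;
using Moq;
using Xunit;

public interface IForexRateRetriever
{
    decimal GetCurrentRate(string fromCurrency, string toCurrency);
}

public class PortfolioValuator
{
    private readonly IForexRateRetriever _rates;
    public PortfolioValuator(IForexRateRetriever rates) => _rates = rates;

    // Current value in the home currency = foreign amount * current rate
    public decimal CurrentValue(decimal foreignAmount, string foreignCurrency, string homeCurrency)
        => foreignAmount * _rates.GetCurrentRate(foreignCurrency, homeCurrency);
}

public class PortfolioValuatorTests
{
    [Fact]
    public void Values_the_portfolio_using_the_current_forex_rate()
    {
        // Stub: return whatever rate this test needs, no real forex service involved
        var rates = new Mock<IForexRateRetriever>();
        rates.Setup(r => r.GetCurrentRate("USD", "EUR")).Returns(0.90m);

        var sut = new PortfolioValuator(rates.Object);

        // Value assertion: 1000 USD * 0.90 = 900 EUR
        sut.CurrentValue(1000m, "USD", "EUR").Should().Be(900m);
    }
}
```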

I could even write a simple additional test using a mock, to make sure that the right method on the forex retriever interface is invoked. This test could actually be written right at the start, when the actual calculation logic is yet to be implemented but we already know we’ll need the current forex rate:
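
A sketch of that mock-based test, reusing the illustrative types (and the Moq/xUnit usings) from the previous sketch:

```csharp
public class PortfolioValuatorInteractionTests
{
    [Fact]
    public void Asks_the_rate_retriever_for_the_current_rate()
    {
        var rates = new Mock<IForexRateRetriever>();
        rates.Setup(r => r.GetCurrentRate("USD", "EUR")).Returns(0.90m);

        var sut = new PortfolioValuator(rates.Object);
        sut.CurrentValue(1000m, "USD", "EUR");

        // Interaction verification only: no assertion on the computed value
        rates.Verify(r => r.GetCurrentRate("USD", "EUR"), Times.Once);
    }
}
```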

Yes, that’s an overlapping test, which could increase test noise. That is why I prefer the stub approach: its assertion would also fail if I happened to be invoking the wrong method in the algorithm. The test using the mock is only a starting point and can eventually be deleted once the stub version is in place.

For code that mutates the state of the domain model and also talks to external collaborators (for example, coarse-grained use cases), write state verification tests, using stubs for collaborators like databases, web services, the file system etc.

Assert on the value that got “stored” into these stubs after a state-mutating action. Getting things into and out of databases is not behaviour but an implementation detail (as shown in the opening example of this post).

Write interaction verification tests (if needed) once you have covered the system under test with state verification tests, and reserve them for the cases where state doesn’t need to be, or cannot be, tested.

These interaction verification tests should then only verify that the SUT talks to its collaborators in the way you expect. For example, consider the use case shown below, which
– tries to load a customer from the database,
– throws if the customer is found,
– otherwise registers the customer as a new customer, and
– publishes a notification message onto the event bus to notify downstream systems about the registration (the collaboration part).
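
A standalone sketch of such a use case; all the type names are illustrative:

```csharp
using System;

public record RegisterCustomer(string Email, string Name);
public record CustomerRegistered(string Email);
public record Customer(string Email, string Name);

public interface ICustomerRepository
{
    Customer GetByEmail(string email);
    void Add(Customer customer);
}

public interface IEventBus
{
    void Publish(object message);
}

public class CustomerAlreadyRegisteredException : Exception
{
    public CustomerAlreadyRegisteredException(string email)
        : base($"Customer {email} is already registered") { }
}

public class RegisterCustomerHandler
{
    private readonly ICustomerRepository _repository;
    private readonly IEventBus _eventBus;

    public RegisterCustomerHandler(ICustomerRepository repository, IEventBus eventBus)
    {
        _repository = repository;
        _eventBus = eventBus;
    }

    public void Handle(RegisterCustomer command)
    {
        // Try to load the customer; an existing one means a duplicate registration
        if (_repository.GetByEmail(command.Email) != null)
            throw new CustomerAlreadyRegisteredException(command.Email);

        _repository.Add(new Customer(command.Email, command.Name));

        // Notify downstream systems (the collaboration part)
        _eventBus.Publish(new CustomerRegistered(command.Email));
    }
}
```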

I could write one test that covers all the paths through the code, but that test would be riddled with too many asserts, making its purpose hard to understand. Over time, as the code grows, the test would become harder and harder to maintain because the data set-up would become ever more complicated.

I would also end up with mock verifications and state verifications all mixed up in one test and things could get really confusing really fast.

Instead, what I will do is write three tests (sketched after the list):

  • The first to make sure that the customer gets added with the right details, i.e. a state test.
  • The second to make sure that the exception is thrown if the customer is already registered, and
  • The final one to verify that the eventBus is invoked when a customer is added to the repository. For this I can use a mock object, because I know the state verification tests already cover the actual behaviour.
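
Here’s what those three tests might look like, reusing the illustrative types from the use case sketch above (plus Moq, FluentAssertions and xUnit):

```csharp
using FluentAssertions;
using Moq;
using Xunit;

public class RegisterCustomerHandlerTests
{
    [Fact]
    public void Registers_a_new_customer_with_the_right_details() // state test
    {
        var repository = new Mock<ICustomerRepository>();
        repository.Setup(r => r.GetByEmail("jane@example.com")).Returns((Customer)null);
        Customer added = null;
        repository.Setup(r => r.Add(It.IsAny<Customer>())).Callback<Customer>(c => added = c);

        var sut = new RegisterCustomerHandler(repository.Object, Mock.Of<IEventBus>());
        sut.Handle(new RegisterCustomer("jane@example.com", "Jane"));

        added.Should().NotBeNull();
        added.Email.Should().Be("jane@example.com");
    }

    [Fact]
    public void Throws_when_the_customer_is_already_registered()
    {
        var repository = new Mock<ICustomerRepository>();
        repository.Setup(r => r.GetByEmail("jane@example.com"))
                  .Returns(new Customer("jane@example.com", "Jane"));

        var sut = new RegisterCustomerHandler(repository.Object, Mock.Of<IEventBus>());

        sut.Invoking(s => s.Handle(new RegisterCustomer("jane@example.com", "Jane")))
           .Should().Throw<CustomerAlreadyRegisteredException>();
    }

    [Fact]
    public void Publishes_a_registration_event() // interaction test with a mock
    {
        var repository = new Mock<ICustomerRepository>();
        repository.Setup(r => r.GetByEmail("jane@example.com")).Returns((Customer)null);
        var eventBus = new Mock<IEventBus>();

        var sut = new RegisterCustomerHandler(repository.Object, eventBus.Object);
        sut.Handle(new RegisterCustomer("jane@example.com", "Jane"));

        eventBus.Verify(b => b.Publish(It.IsAny<CustomerRegistered>()), Times.Once);
    }
}
```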

If push comes to shove, I can refactor the last test to instead use a callback (in Moq) to assert that Publish was called with the right object, which essentially converts this one into a state test too:
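
A sketch of that reworking, again with the illustrative types from above:

```csharp
[Fact]
public void Publishes_a_registration_event_with_the_right_details()
{
    var repository = new Mock<ICustomerRepository>();
    repository.Setup(r => r.GetByEmail("jane@example.com")).Returns((Customer)null);

    // Capture the published message instead of merely verifying the call
    object published = null;
    var eventBus = new Mock<IEventBus>();
    eventBus.Setup(b => b.Publish(It.IsAny<object>()))
            .Callback<object>(m => published = m);

    var sut = new RegisterCustomerHandler(repository.Object, eventBus.Object);
    sut.Handle(new RegisterCustomer("jane@example.com", "Jane"));

    // Effectively a state test now: assert on the captured object
    published.Should().BeOfType<CustomerRegistered>()
             .Which.Email.Should().Be("jane@example.com");
}
```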

In either case, I can keep each test focused on testing one logical thing, such that the test will only break when its corresponding code changes.

Long and complicated test set-ups are more often an indication of a convoluted design than of a convoluted test. This is more common in existing code/test bases than in the ones you write fresh.

Refactor the code under test so it doesn’t do as much; the tests will then automatically take care of themselves. If the test set-up is still complicated, refactor it to keep only what directly participates in the assertions, and encapsulate everything else into helper/builder methods. The Builder pattern for creating test data for domain entities can be very useful in this regard, as sketched below.
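
For instance, a minimal test-data builder might look like this (the Customer shape and defaults are illustrative):

```csharp
using FluentAssertions;
using Xunit;

public record Customer(string Email, string Name, bool IsVip);

// Test-data builder: sensible defaults keep irrelevant set-up out of the tests
public class CustomerBuilder
{
    private string _email = "default@example.com";
    private string _name = "Default Name";
    private bool _isVip;

    public CustomerBuilder WithEmail(string email) { _email = email; return this; }
    public CustomerBuilder WithName(string name) { _name = name; return this; }
    public CustomerBuilder AsVip() { _isVip = true; return this; }

    public Customer Build() => new Customer(_email, _name, _isVip);
}

public class CustomerBuilderUsage
{
    [Fact]
    public void Only_the_details_that_matter_appear_in_the_test()
    {
        // Everything not mentioned falls back to a harmless default
        var customer = new CustomerBuilder().AsVip().Build();

        customer.IsVip.Should().BeTrue();
    }
}
```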

General TDD Hacks:

  1. If you are working on a legacy system with an existing test suite (if you are lucky), try commenting out some random production code and running the tests. If tests you don’t expect to fail start failing, then you have tests that either test too much or have assertions that need refactoring. Address these by splitting tests into multiple smaller ones or by fixing the assertions.
  2. Include a readable custom reason message in your assertions where possible, for when they fail. Reading the default “Object reference not set to an instance of an object” is really ambiguous and makes the testing experience more frustrating than reading “Order was not found in the order list” (see the assertion sketch after this list).
  3. Strive to have only one (logical) assertion per test, otherwise you will end up testing too much in a single test and that will cause it to break for unrelated reasons. I find that asking myself the following three questions helps me shape my tests a lot better than just writing the test and then wondering afterwards:

What is my SUT here? There should be only one and it should be obvious; if not, then my code is probably not modular enough. I will write the test from the perspective of my would-be SUT, and then refactor the code to fit that design. That way I have a safety net when I refactor the rest of the code.

What behaviour am I testing for? If it’s not readily obvious, then maybe I’m testing at too high a level of granularity; I’ll split up the module/use case and test at a lower level. This usually works! Avoid the temptation to write one test that tests all the way from a controller to the external collaborators. It will be near impossible to understand and maintain, and eventually it will lose its value as a test.

Does my test set-up directly participate in or contribute to the assertions in the test? If the answer is NO, then my set-up is too complex and I’ll simplify it by encapsulating it away in set-up methods or by removing it. This has been really helpful in reducing noise in the tests!
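
As promised under point 2 above, here is what a custom reason message can look like with FluentAssertions (the collection and message are illustrative):

```csharp
using System.Collections.Generic;
using FluentAssertions;
using Xunit;

public class ReadableAssertionExample
{
    [Fact]
    public void A_reason_message_explains_the_failure()
    {
        var orderIds = new List<int> { 1, 2, 3 };

        // The "because" argument is woven into the failure output: a failing run
        // would read "Expected orderIds ... to contain 2 because the order should
        // have been added to the order list" instead of a bare mismatch.
        orderIds.Should().Contain(2, "the order should have been added to the order list");
    }
}
```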

I’ve found the following resources quite useful in helping me understand the errors of my ways and shape these heuristics.
