Anthony Fung

Originally published at webdeveloperdiary.substack.com

What Alternatives Are There to Use in Unit Tests Instead of Mocks?

In our exploration of using mocks in tests, we’ve looked at quite a few topics in detail. We’ve covered setting up mocks to return values, changing their default behaviours, and verifying method calls. While we can achieve a lot with mocks, they aren’t the only option available. In this article, we’ll look at three alternatives to mocking.

Nothing Beats the Real Thing for Accuracy

The first option is one we’ve already seen while discussing integration tests. In some respects, using real instances of dependencies is the best way to test that our code works. After all, it will be the same combination of modules used when the product ships. If the tests pass, there’s a very good chance the final product will also behave as expected. However, this isn’t guaranteed: there may still be differences (such as configuration settings or model data) between test and production environments.
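
To make this concrete, here’s a minimal sketch in C# (the classes, the calculation, and the xUnit-style test are all illustrative, not taken from a real project) of a test that hands the system under test a real dependency instance rather than a mock:

```csharp
using Xunit;

// Hypothetical production dependency (kept trivial for the sketch).
public class StandardTaxRules
{
    public decimal ApplyTax(decimal net) => net * 1.2m;
}

// Hypothetical system under test.
public class InvoiceCalculator
{
    private readonly StandardTaxRules _taxRules;

    public InvoiceCalculator(StandardTaxRules taxRules) => _taxRules = taxRules;

    public decimal Total(decimal net) => _taxRules.ApplyTax(net);
}

public class InvoiceCalculatorTests
{
    [Fact]
    public void Total_IncludesTax()
    {
        // A real dependency instance - no mock, no behaviour to set up.
        var calculator = new InvoiceCalculator(new StandardTaxRules());

        Assert.Equal(120m, calculator.Total(100m));
    }
}
```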

Before making every test an integration test, it’s important to remember that using real dependencies for testing has drawbacks. If a dependency’s functions are computationally expensive, a test could take longer than necessary to run. For example, imagine running a test where a dependency (i.e. not the system under test) performs a 30-second calculation; then imagine running a suite of 30 tests with the same dependency – that’s 15 minutes spent waiting on code that isn’t even the subject of the tests.

We also need to be mindful of dependencies that use persistent storage. We might need to write additional routines to set up our databases and file systems before a test run, and teardown routines to run afterwards, removing data written as part of testing. While this is important and recommended for the integration and end-to-end tests encompassing these systems, it’s an unnecessary overhead in the context of unit tests, where these dependencies are broadly irrelevant to the test being run.

We should also consider overall test complexity. A dependency may have dependencies. And those may have dependencies of their own. As we continue to add more moving parts, tests become increasingly difficult to follow. With more code than necessary (for the logic being tested), errors become more challenging to pinpoint if/when tests fail.

But It Can Be Too Complex

For complex or expensive modules, one alternative is to use a fake – a simplified version of the dependency that isn’t suitable for production. As it’s specifically designed and written for use in tests, the implementation doesn’t need to be complete. Its dependencies can also be kept to a minimum for easy instantiation.

For example, a fake could use a simple lookup table in place of running an expensive calculation. The entries in the table might be known to be correct, but there might only be a handful of them. This would make the implementation sufficient for testing, but unsuitable for production.
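
As a rough sketch of what that could look like in C# (the IPrimeFactoriser interface and the values in the table are hypothetical), the fake below answers only from a small set of pre-computed results:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical dependency whose real implementation is expensive to run.
public interface IPrimeFactoriser
{
    IReadOnlyList<int> Factorise(int value);
}

// Fake: swaps the calculation for a lookup table of known-correct answers.
// Good enough for tests, unsuitable for production.
public class FakePrimeFactoriser : IPrimeFactoriser
{
    private static readonly Dictionary<int, int[]> KnownAnswers = new()
    {
        [12] = new[] { 2, 2, 3 },
        [35] = new[] { 5, 7 },
        [97] = new[] { 97 },
    };

    public IReadOnlyList<int> Factorise(int value) =>
        KnownAnswers.TryGetValue(value, out var factors)
            ? factors
            : throw new ArgumentOutOfRangeException(
                nameof(value), "This fake only knows a handful of values.");
}
```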

One advantage of fakes over mocks is that they can be stateful – a potential use could be as a substitute for a data repository. Accessing real databases, filesystems, and APIs can be relatively slow compared to the time taken to run a unit test. Instead of using the real systems, a fake could use in-memory data structures (e.g. List/HashSet/Dictionary instances) to store and recall data. Furthermore, we wouldn’t have to set up permissions for accessing these systems in the tests, or write setup and teardown routines for clearing up.
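
Here’s a minimal sketch of such a stateful fake in C# (the ICustomerRepository interface is hypothetical). A test can save a record through it and read that record straight back, with no connection strings, permissions, or teardown scripts involved:

```csharp
using System.Collections.Generic;

// Hypothetical repository abstraction used by the system under test.
public interface ICustomerRepository
{
    void Save(int id, string name);
    string? FindName(int id);
}

// Fake: stores entities in an in-memory Dictionary instead of a database.
public class InMemoryCustomerRepository : ICustomerRepository
{
    private readonly Dictionary<int, string> _store = new();

    public void Save(int id, string name) => _store[id] = name;

    public string? FindName(int id) =>
        _store.TryGetValue(id, out var name) ? name : null;
}
```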

Can We Simplify Further?

While a fake is a simplified version of a module, it still contains logic. Depending on the complexity of the original, we could accidentally introduce bugs during the simplification process. Or, we might have a change in requirements and forget to update the fake alongside the production code. It probably wouldn’t take long to realise the omission when tests fail, but it would save us time if we didn’t have to maintain the fakes at all.

When our tests don’t require much from their subjects’ dependencies, one option is to produce a stub by reducing the logic to simply returning a set value. While similar to creating and setting up a mock, a stub can be instantiated with a single statement. This could improve test readability if the equivalent mocks have many members (i.e. methods/properties) to set up.
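
A minimal sketch of a stub in C# might look like the following (the IExchangeRateProvider interface and the rate value are illustrative):

```csharp
// Hypothetical dependency of the system under test.
public interface IExchangeRateProvider
{
    decimal GetRate(string currencyCode);
}

// Stub: no logic at all - it returns whatever value it was constructed with.
public class StubExchangeRateProvider : IExchangeRateProvider
{
    private readonly decimal _rate;

    public StubExchangeRateProvider(decimal rate) => _rate = rate;

    public decimal GetRate(string currencyCode) => _rate;
}
```

In a test, the stub is ready in a single statement – for example, passing new StubExchangeRateProvider(1.5m) straight into the constructor of a hypothetical CurrencyConverter under test – whereas an equivalent mock might need several setup calls.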

Summary

Mocks are useful when writing tests, but they aren’t your only option. You can replace them with classes that have been specifically designed for testing.

Using instances of actual dependencies would give the most reliable results. But doing so can be impractical for reasons including:

  • The overall amount of code required.

  • The time taken to run the tests.

One alternative is to replace them with fakes: simplified versions of the module’s dependencies designed specifically for testing. Fakes can be stateful, and this is helpful in some use cases, e.g. simulating a data repository.

However, they still need maintenance. For less demanding tests, we can simplify fakes to the point where they simply return set values – while not as flexible, stubs are easy to instantiate and simple to understand.


Thanks for reading!

This article is from my newsletter. If you found it useful, please consider subscribing. You’ll get more articles like this delivered straight to your inbox (once per week), plus bonus developer tips too!

Top comments (3)

Gregor Gonzalez

I've always had the same doubts, especially with configurations that are stored in a database. With mocks this dependency would be avoided and the application would not stop independently of the db. Being important data for the proper functioning of the app, it should be mandatory to include it in the tests, but it's not common to do that at this company. So I just follow the same pattern.

Anthony Fung

Hi Gregor.

Thanks for sharing your experience. You're right - it can sometimes be difficult to know what's the "correct" test to write. I think that if it fits with your team, and you have the time, you could write both. That way, you'd have both:

  • a unit test that can run quickly with minimal dependencies.

  • an integration test that checks configuration settings or other important data. If the bigger tests take a while to run, you could always try scheduling them to run at a more convenient time, e.g. midnight every day.

Gregor Gonzalez

Hi thanks for your reply, oh I understand perfectly, yes it would be ideal for both scenarios and have more control. We only test for CI/CD and that's why we keep it that way. I think that both are important and fulfill the purpose that I need