Acceptance testing APIs can be difficult. Writing out expected outputs is time consuming, and manually checking that results match expectations is tedious. Tedious tasks lead to people skipping them or doing them poorly. The best thing to do with tedious, time consuming tasks is to automate them, and snapshot testing is one way to do this.
Snapshot testing records the output of a system when that system is in a particular state. Later, after changes have been made to the system, the system is put back into the same state and the output is checked against the previously recorded output.
Snapshot testing was popularised in the early days of React as a way to set up automated testing for front end development. In this case a web application is rendered with a given state and the resulting HTML is recorded. When changes are made, the application is rendered again with the same state and the result is compared to the previously recorded output (the snapshot). If they're the same the test passes. If they're different the test fails, and the developer needs to decide whether this is a valid difference, in which case they update the recorded snapshot with the new output, or whether the change is a regression that needs to be fixed.
The popularity of these tests for front end development quickly waned because front ends are wont to change. It is important to be able to update a front end often so that the feel of a site stays fresh.
The rate of change for APIs is different, particularly when they have multiple consumers. The returned schema needs to be much more static and resistant to change than a user facing web page. Because of this slower rate of change, snapshots can be used to build automated acceptance testing for APIs. In place of rendering, API calls are made and the results are recorded alongside data about the request.
Snapshot testing breaks down into two phases: recording and replaying.
When recording, the initial payload and the results of any calls are stored as the snapshot. The test runner can be built so that it optionally makes additional calls based on the data that is returned. The result of each of these calls is recorded and stored in a file. This file needs to be human readable and committed to version control.
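The recording phase can be sketched in a few lines. This is a minimal illustration, not a real tool: `make_call` stands in for whatever function actually performs the API request, and the JSON file layout is an assumption.

```python
import json
from pathlib import Path


def record_snapshot(name, request, make_call, snapshot_dir="snapshots"):
    """Make the API call and store the request alongside its response.

    `make_call` is a hypothetical function that performs the real API call.
    The snapshot is written as pretty-printed, key-sorted JSON so that it
    stays human readable and diffs cleanly in version control.
    """
    response = make_call(request)
    snapshot = {"request": request, "response": response}
    path = Path(snapshot_dir) / f"{name}.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(snapshot, indent=2, sort_keys=True))
    return snapshot
```

Storing the request next to the response matters: the replay phase needs to know exactly what to re-send, not just what came back.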
It's a good idea to make regenerating snapshots simple, so that when something in the returned data structure is intentionally changed you are saved from hand updating multiple snapshots.
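One common way to make regeneration simple is an update switch on the comparison itself. The sketch below assumes an environment variable named `SNAPSHOT_UPDATE`; the name and the write-on-first-run behaviour are choices for illustration, not a standard.

```python
import json
import os
from pathlib import Path


def check_snapshot(path, actual):
    """Compare `actual` against the stored snapshot.

    If the (hypothetical) SNAPSHOT_UPDATE environment variable is set, or
    no snapshot exists yet, the snapshot is (re)written and the check
    passes. Otherwise the stored value is compared against `actual`.
    """
    path = Path(path)
    if os.environ.get("SNAPSHOT_UPDATE") == "1" or not path.exists():
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(json.dumps(actual, indent=2, sort_keys=True))
        return True
    expected = json.loads(path.read_text())
    return expected == actual
```

With a switch like this, an intentional schema change becomes one test run with the flag set, followed by a review of the resulting diff in version control.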
The replaying phase is where the tests are actually run. In this phase the recorded files are loaded into the test suite and the requests are repeated. The test runner asserts that the returned results match those on file.
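The replay phase is the mirror image of recording. Again this is a sketch under the same assumptions as above: snapshots are JSON files containing a `request` and a `response`, and `make_call` is the hypothetical function that issues the real request.

```python
import json
from pathlib import Path


def replay_snapshots(snapshot_dir, make_call):
    """Re-issue every recorded request and check the live responses.

    Each snapshot file holds the original request and the response that
    was recorded for it. The assertion fails if any live response has
    drifted from its recording, listing the offending snapshot files.
    """
    failures = []
    for path in sorted(Path(snapshot_dir).glob("*.json")):
        snapshot = json.loads(path.read_text())
        actual = make_call(snapshot["request"])
        if actual != snapshot["response"]:
            failures.append(path.name)
    assert not failures, f"Snapshots changed: {failures}"
```

In a real suite each snapshot would usually surface as its own test case (via parametrisation) so that a single drifted response doesn't hide the others.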
Snapshot testing has some advantages over other types of testing. Because it can be set up around an API by recording inputs and outputs, without a clear understanding of what happens inside the API, it is a good choice for adding tests to an existing API.
These tests do a great job of ensuring that all the components an API needs to complete a specific case work together as expected. This helps to pick up errors that may be missed when testing components in isolation.
This style of testing is high level and is an automated representation of a client using the API. As such these tests are likely to:
- Run more slowly, as they need to exercise an entire system.
- Provide lower per line coverage, as crafting tests that hit every line of code from an API entry point can result in a lot of tests.
- Be prone to breaking because of changes in the underlying data. This fragility means that snapshot testing is not the best choice for an environment where the returned data is constantly changing. Even if the data is stable, it is important to make snapshots simple to regenerate if the test environment does not control the state of the data.
I think that this style of testing is great. It provides a level of confidence that helps to enable continuous deployment and removes tedious manual testing. That said, it is not without its downsides: slow execution time and low coverage mean that these should not be the majority of the tests that are written.
This style of testing is best used to cover a few important cases that ensure the application works together as a whole. Reliance on snapshot testing as the primary method of testing (or automated acceptance testing in general) will lead to test suites that are bloated and costly to maintain. However, when combined with a comprehensive unit and integration test suite, snapshot testing can provide a productivity boon and a quality safety net.