The question I get asked most often about serverless lately is how to test applications in a local environment. In this post, I'll explore an updated version of the testing pyramid and present my approach to testing serverless solutions.
SPOILER: I do not encourage rebuilding the cloud from scratch for local testing purposes, quite the opposite. Use the cloud to reliably test your apps and don't waste time with a traditional approach.
Everything should be unit tested. It's the cheapest way to establish whether what we build works and is heading in the right direction. A fast feedback loop should be the top priority for every developer, and nothing beats unit tests here.
I try to structure my code so that there are a lot of pure functions. Once side effects kick in and mocks are needed, I proceed very cautiously, as too many mocks may lead to brittle tests.
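As a sketch of that structure (all names here are hypothetical), the business logic lives in pure functions that unit-test trivially, while the handler stays a thin shell around them:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Order:
    subtotal: float
    country: str

def total_with_tax(order: Order) -> float:
    """Pure business logic: no I/O, no mocks needed to test it."""
    rate = 0.23 if order.country == "PL" else 0.0
    return round(order.subtotal * (1 + rate), 2)

def handler(event, _context, save=lambda total: None):
    """Thin Lambda-style shell: the only layer that touches side effects,
    with the side effect injected so tests can swap it out."""
    total = total_with_tax(Order(event["subtotal"], event["country"]))
    save(total)
    return {"total": total}
```

The pure core gets exhaustive, millisecond-fast tests; only the thin shell ever needs a test double.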
I still maintain a lot of mock-based unit tests, as they are invaluable for testing error scenarios (ever tried to force your database to fail?), but once mocks start to pile up, integration tests should probably take over.
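A minimal sketch of such an error-scenario test, assuming a hypothetical `UserStore` wrapping a DynamoDB-style client; the mock makes the "database is down" path, nearly impossible to trigger against a real table, trivial to reach:

```python
from unittest.mock import Mock

class UserStore:
    """Hypothetical store wrapping a DynamoDB-style client."""

    def __init__(self, client):
        self.client = client

    def save(self, user_id: str) -> bool:
        try:
            self.client.put_item(
                TableName="users", Item={"pk": {"S": user_id}}
            )
            return True
        except Exception:
            # The error path a real database almost never lets you exercise.
            return False

def test_save_returns_false_when_db_is_down():
    client = Mock()
    client.put_item.side_effect = ConnectionError("db down")
    assert UserStore(client).save("user-1") is False
```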
Mocking third-party APIs can be hard due to their complexity. Additionally, testing is about checking behaviour, so changing implementation details shouldn't invalidate a test (ever had to adjust mocks after a simple implementation change?). This is why package-level integration tests that can be run outside the cloud are invaluable.
Examples of such tests include:
- Using DynamoDB Local to check if my stores can fetch and save data correctly.
- Using OpenSearch to build indexes and verify that queries fetch the proper data.
Please note that I'm not trying to build a convoluted setup and simulate cloud services locally, especially not to make them talk to each other. That would be an actual anti-pattern, to be avoided at all costs. All I'm doing is validating that an isolated piece of code that is supposed to talk to, for example, a database can actually do so.
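A sketch of that isolation, with hypothetical names: the store accepts any DynamoDB-compatible client, so production code passes the real one while a package-level test passes a client pointed at DynamoDB Local:

```python
from typing import Optional

class NoteStore:
    """Hypothetical store; `client` is any object exposing the DynamoDB
    client call shapes. In a package-level test it could be created as:
      boto3.client("dynamodb", endpoint_url="http://localhost:8000",
                   region_name="eu-west-1",
                   aws_access_key_id="local", aws_secret_access_key="local")
    while production code passes a regular boto3 client."""

    def __init__(self, client, table: str = "notes"):
        self.client, self.table = client, table

    def put(self, note_id: str, body: str) -> None:
        self.client.put_item(
            TableName=self.table,
            Item={"pk": {"S": note_id}, "body": {"S": body}},
        )

    def get(self, note_id: str) -> Optional[str]:
        item = self.client.get_item(
            TableName=self.table, Key={"pk": {"S": note_id}}
        ).get("Item")
        return item["body"]["S"] if item else None
```

The test exercises the real wire format against DynamoDB Local, yet nothing about the store knows or cares which endpoint it is talking to.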
This is where the fun starts and where the traditional 'local testing' approach doesn't cut it. To be sure all the resources our application creates can talk to each other, we need to be able to execute our code in the cloud. There is no way around it, as something as innocent as a missing IAM permission can make the service unusable.
Service-level integration tests are run on a deployed stack. Each developer should be able to deploy their own stack on demand inside a testing account to run them. The continuous integration pipeline should also run them on staging and any other necessary environments.
These tests trigger a process via an entry point (such as invoking a Lambda function, uploading a file to an S3 bucket, or sending a message to an SQS queue), then check whether processing finished and the expected actions were taken (for example, whether data was successfully saved in DynamoDB). All of these interactions and checks can be done through the AWS API.
One of the challenges I faced lately was examining events that arrive at an SNS topic from inside an integration test. It turns out there is no single API call I could use, so I wrote the snstesting package to help me with the task.
To solve the problem I embraced the cloud to the fullest by creating an SQS queue and subscribing it to the SNS topic under test. After the test checks which messages arrived at the topic, both ad-hoc resources are cleaned up.
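The underlying technique can be sketched in Python with boto3-style calls (snstesting itself is a Go package, so this illustrates the idea rather than its actual code; note also that a real setup needs a queue policy granting the topic `sqs:SendMessage`, omitted here for brevity):

```python
class TopicListener:
    """Ad-hoc SQS queue subscribed to an SNS topic for one test's lifetime.
    `sqs` and `sns` are AWS SDK clients (e.g. boto3 clients)."""

    def __init__(self, sqs, sns, topic_arn: str, name: str = "test-listener"):
        self.sqs, self.sns = sqs, sns
        self.queue_url = sqs.create_queue(QueueName=name)["QueueUrl"]
        queue_arn = sqs.get_queue_attributes(
            QueueUrl=self.queue_url, AttributeNames=["QueueArn"]
        )["Attributes"]["QueueArn"]
        self.subscription_arn = sns.subscribe(
            TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn
        )["SubscriptionArn"]

    def receive(self):
        # Bodies are SNS JSON envelopes unless raw message delivery is enabled.
        msgs = self.sqs.receive_message(
            QueueUrl=self.queue_url, WaitTimeSeconds=10
        ).get("Messages", [])
        return [m["Body"] for m in msgs]

    def close(self):
        # Tear down both ad-hoc resources once assertions are done.
        self.sns.unsubscribe(SubscriptionArn=self.subscription_arn)
        self.sqs.delete_queue(QueueUrl=self.queue_url)
```

A test constructs the listener, triggers the system under test, asserts on `receive()`, and calls `close()` in teardown so nothing lingers in the account.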
No unit or integration test can verify that your customers can actually use the product you are working on, especially if it's a web application. This is why end-to-end testing plays a crucial role in the development process. There are at least a few tools worth exploring: the teams I work with use Playwright and Cypress. End-to-end tests are often harder to construct and maintain, and may take much longer to run than unit tests. Don't be discouraged by that, and make sure to cover at least the happy paths.
> ⚡ If you want to use serverless technologies, but still insist on testing everything together locally, then your mindset isn't there yet.
>
> prozz, 26 Aug 2021
One often overlooked aspect of testing is how long the tests take to run inside a CI/CD pipeline. Keeping the pipeline quick is one of my top priorities. Flaky tests and slow pipelines create a perfect excuse for tech debt to accumulate. Finding the right balance between the number of valuable tests and pipeline execution time is essential. Quality tests aligned with the pyramid presented in this post, combined with a fast feedback loop, will boost your confidence in delivering. This is how rock-solid apps are born.