Consider a project of around 10 microservices: some very simple, with no external dependencies, and others more complex, each depending on 2 or 3 other services:
How would you approach API-level functional / integration testing for each service?
- All the services are developed by the same development team (for now).
- The development team is a normal Scrum team (~5 API devs).
- Right now we are using a simplified version of Git Flow (feature branch -> develop -> master). The end goal is to get closer to a CI/CD workflow, but that will take time.
- Unit tests and static analysis are run on each PR, blocking the merge if they fail.
- A basic set of E2E frontend tests covers the critical use cases.
Some possible approaches:
Run the tests on each PR?
Does it make sense for functional tests to also be executed on each PR?
Running the tests on each PR would let us detect problems earlier, but it is more complex operationally, since we would need to spawn a new instance of the service under test plus all its required dependencies, like databases.
Tools like Kubernetes can help with that, but it's still extra work for the Ops team.
We can use data fixtures or similar to populate databases with test data, but what about service dependencies? Should we mock the dependent services, or spawn a real instance of each one?
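The fixture idea can be sketched quite simply: a per-test throwaway database seeded from a versioned fixture set, so every PR build starts from the same known data. This sketch uses SQLite purely as a stand-in for whatever database the service actually uses, and the table and fixture contents are invented for illustration:

```python
import sqlite3

# Hypothetical fixture set; in practice this would live in versioned
# fixture files (SQL/JSON) next to the service's test suite.
USER_FIXTURES = [
    (1, "alice@example.com"),
    (2, "bob@example.com"),
]

def fresh_test_db():
    """Create a throwaway database and load the fixtures into it."""
    conn = sqlite3.connect(":memory:")  # one instance per test run, discarded afterwards
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
    conn.executemany("INSERT INTO users VALUES (?, ?)", USER_FIXTURES)
    conn.commit()
    return conn

if __name__ == "__main__":
    db = fresh_test_db()
    count = db.execute("SELECT COUNT(*) FROM users").fetchone()[0]
    print(count)  # every run sees the same seeded data
```

The same pattern applies when the PR pipeline spawns a real database container instead: the fixture files stay the only source of test data.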
The second option gets harder as the number of services with dependencies grows, and launching all the required infrastructure on each PR is costly. I also believe we should be able to test each service in isolation.
Using mocks for dependent services lets us test each service in isolation, but it won't guarantee that the communication between services actually works, or that no breaking change was introduced in the API responses. Contract testing can minimize that risk, though.
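A consumer-side contract check can be as small as asserting the fields and types this service actually relies on, and running that same assertion against both the mock (on PRs) and the real provider (later, in CI), so the mock can't silently drift. A hand-rolled sketch with invented field names; tools like Pact do this properly with a shared contract broker:

```python
# The fields (and types) of the dependent service's response that THIS
# service consumes -- the "contract". Field names are illustrative only.
ORDER_CONTRACT = {"id": int, "status": str, "total_cents": int}

def satisfies_contract(payload: dict, contract: dict) -> bool:
    """True if the payload carries every contracted field with the right type."""
    return all(
        field in payload and isinstance(payload[field], expected)
        for field, expected in contract.items()
    )

# The mocked response used in PR-level tests is validated against the same
# contract the real provider is checked against.
mock_response = {"id": 42, "status": "shipped", "total_cents": 1999}
assert satisfies_contract(mock_response, ORDER_CONTRACT)

# A provider that renames or retypes a field fails the check:
broken = {"id": 42, "state": "shipped", "total_cents": 1999}
assert not satisfies_contract(broken, ORDER_CONTRACT)
```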
Run the tests on a dedicated "Integration Tests" environment?
Have a dedicated integration tests environment with all the services running and a set of data fixtures that is "compatible" with all the services.
This is easy to operate from an ops point of view (it's just one more environment) and could easily catch configuration errors, like a service pointing to the wrong URL of another service.
It would also let us detect breaking changes in service responses without needing contract testing. But in this scenario we are testing all the services together, with a common data set, and I think each service should also be testable in isolation.
A mix of both
This would probably be the ideal solution, but it could be more complex to maintain and take longer to implement.
On each PR, spawn an instance of the service under test plus all its required data stores (e.g., databases).
The services that the service under test depends on should be mocked.
Have contract testing to detect breaking changes.
A failure in these tests blocks the merge of the PR until it is fixed.
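The mocked dependencies in this per-PR flow don't have to be heavyweight: an in-process HTTP server returning canned responses is often enough. A minimal sketch using only the standard library; the endpoint and payload are invented for illustration:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Canned response standing in for the real dependent service.
CANNED_USER = {"id": 1, "email": "alice@example.com"}

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(CANNED_USER).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

def start_stub():
    """Start the stub on a random free port; return its base URL and the server."""
    server = HTTPServer(("127.0.0.1", 0), StubHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return f"http://127.0.0.1:{server.server_port}", server

if __name__ == "__main__":
    base_url, server = start_stub()
    # The service under test would be configured with base_url in place of
    # the real dependency's URL for the duration of the PR test run.
    with urllib.request.urlopen(f"{base_url}/users/1") as resp:
        print(json.load(resp))  # {'id': 1, 'email': 'alice@example.com'}
    server.shutdown()
```

Because the stub speaks real HTTP, the service under test needs no test-only code paths: it just gets a different dependency URL from its configuration.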
After a successful merge, run the same tests on an integration test environment where all the services are running.
My main question in this scenario is how to handle data fixtures in the integration test environment. I don't want to maintain different test suites for isolated functional tests and for integration tests; that would be too costly for such a small team.
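One way to avoid duplicate suites is to write the functional tests against nothing but a base URL, and let the pipeline choose the target via an environment variable: on a PR it points at the freshly spawned instance with mocked dependencies, after merge at the integration environment. The variable name and default below are assumptions, not an established convention:

```python
import os

def target_base_url() -> str:
    """Resolve which deployment this test run should hit.

    SERVICE_UNDER_TEST_URL is a hypothetical variable set by the pipeline:
      - PR build:   URL of the per-PR instance (dependencies mocked)
      - post-merge: URL of the service in the integration environment
    """
    return os.environ.get("SERVICE_UNDER_TEST_URL", "http://localhost:8080")

# The same test module then runs unchanged in both stages, e.g.:
#
# def test_create_order(http_client):
#     resp = http_client.post(f"{target_base_url()}/orders", json=NEW_ORDER)
#     assert resp.status_code == 201
```

The only per-environment difference left is the data fixtures, which is exactly the open question above.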
Let's start the discussion: what do you think would be the best approach?
I know functional tests and integration tests are different kinds of tests, and it would probably make sense to have them both (similar to the third scenario), but keep the constraints in mind. What do you think should be the priority: functional or integration? We are not Netflix; we would like the simplest possible workflow that gives us more confidence in releases, working towards the end goal of CI/CD.