Hey there! I just want to hear some more opinions on testing services to ensure they are communicating with each other as expected.
For the foundation of the discussion, let's assume we have three separate services:

- **Service A**: Handles HTTP requests; proxies requests to Service B; sends job requests to Service C
- **Service B**: Returns a result based on data that has been processed by Service C
- **Service C**: Processes some data and writes it somewhere else; receives job requests from Service A
Let's first walk through what a scenario could look like:

- User sends an HTTP request to Service A
- Service A tries to load the data from Service B
- Service B tries to load the data from a database
- The data does not exist in the database
- Service B informs Service A that the data is not available
- Service A notifies Service C that it should fetch the data
- Service A informs the user that the data is currently not available and they should retry later
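The flow above can be sketched as a single handler in Service A. This is a minimal sketch under my own assumptions, not code from any real service: the `serviceB` and `serviceC` clients are injected (with invented method names like `loadData` and `requestFetch`) so the flow can be exercised with in-memory fakes.

```javascript
// Sketch of Service A's request handler for the scenario above.
// serviceB.loadData and serviceC.requestFetch are hypothetical client
// functions, injected so the flow is runnable without real services.
function handleUserRequest(key, serviceB, serviceC) {
  // Service A proxies the request to Service B.
  const result = serviceB.loadData(key);

  if (result.found) {
    return { status: 200, body: result.data };
  }

  // The data is missing: ask Service C to fetch/process it...
  serviceC.requestFetch(key);

  // ...and tell the user to retry later.
  return {
    status: 503,
    body: { error: "Data not available yet, please retry later" },
  };
}

// Example wiring with in-memory fakes:
const fakeB = { loadData: () => ({ found: false }) };
const requested = [];
const fakeC = { requestFetch: (key) => requested.push(key) };

const response = handleUserRequest("report-42", fakeB, fakeC);
// response.status is 503, and fakeC was asked to fetch "report-42"
```

Keeping the clients injectable like this is also what makes the isolated testing discussed below possible at all.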
## Testing each service in isolation
We can (and probably should) test the public API of each service in isolation as a single unit.
We need to mock all the other services. If services are directly dependent, e.g. via HTTP requests, we need to spin up a server that sends back the expected responses.
### Will the service run in production?
How can we ensure that a Node.js application still works when put inside a Docker container for production deployment? For example, we could have forgotten to install a dependency, so the container would fail on startup.
In order to guarantee this, we actually have to spin up the container and test its functionality before deploying it.
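A common shape for such a smoke test is: build the image, start the container, poll a health endpoint until it answers, then run a few real requests against it. The polling core can be written independently of Docker; everything here is a sketch under my own assumptions (the health URL is hypothetical), and `check` is injected so the helper stays runnable without a container:

```javascript
// Calls `check` up to maxAttempts times and returns true as soon as it
// succeeds. In a real smoke test, `check` would be asynchronous, with a
// short sleep between attempts, e.g. hitting a hypothetical endpoint:
//   () => fetch("http://localhost:3000/health").then(r => r.ok, () => false)
// after `docker run` has started the freshly built image.
function waitForHealthy(check, maxAttempts) {
  for (let i = 0; i < maxAttempts; i++) {
    if (check(i)) return true;
  }
  return false;
}
```

If the container never becomes healthy within the attempt budget, the smoke test fails the build before a broken image ever reaches production.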
### How can we ensure that two services are actually able to communicate with each other?
The tests could get out of sync because one person changed Service B but forgot to apply the changes to the tests of Service A. The Service B mock used in the Service A tests is now wrong, and the two services are no longer able to communicate with each other.
Testing each service in isolation actually gives us no guarantee that both services will be able to communicate with each other in production.
## Testing the services together
In addition to writing tests for each service, we could also have some tests that spin up all our services and check whether they are able to deliver a result.
We now have to orchestrate different services and make sure they are all up and running (and healthy) before our tests start.
Luckily, there are libraries like dockest (Jest + Docker containers) that help with spinning up containers, making sure they are ready before the tests run, and other things like automatically failing tests in case a container unexpectedly dies.
However, CI duration can now increase drastically, because besides testing each container in isolation we now test them as a whole stack. It could increase even further if the containers perform very complex and time-intensive tasks, like loading something from a data source, transforming it, and writing it to a different data source.
One solution to that problem would be using smaller datasets than in production.
## My Conclusion
Complex service interaction seems to be a very hard thing to test. Maybe we should try to avoid separating our products into different services, but sometimes there is no other option available.
We still need to do this testing to be sure our production system will not fail because every service works fine in isolation but not in concert with the other services.
I am curious about your input on this topic!
How do you ensure separate services are able to communicate with each other?
What tools do you use to do so?
## Comments
That's a very thoughtful analysis!
I generally use a tool like TestCafe (I used to use Selenium) to test end-to-end service-to-service communication, but only for the happy path. For error scenarios (e.g. a dependent service returns a `500`), I stub those using Sinon within the individual service. If the services can communicate with one another in a happy-path scenario, and I have tests that prove they can handle each other's failures, that's all that matters IMO.
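The stubbing idea from this comment can be sketched without any library at all (Sinon's `sinon.stub` would do the same job with less boilerplate). All names and response shapes below are invented for illustration:

```javascript
// Service A code under test: it must degrade gracefully when its
// dependency (Service B) fails. `serviceBClient` is a hypothetical client.
function fetchReport(serviceBClient) {
  const res = serviceBClient.getData();
  if (res.status === 500) {
    // Map the upstream failure to a friendly error for the caller.
    return { status: 502, body: { error: "Upstream service unavailable" } };
  }
  return { status: 200, body: res.body };
}

// Hand-rolled stub simulating Service B's 500 error path:
const failingB = { getData: () => ({ status: 500 }) };
const result = fetchReport(failingB);
// result.status === 502: Service A handled its dependency's failure
```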
Yes, writing happy path tests makes a lot of sense to me! I am currently also only testing those!