Gaurav Saini

How to build microservices with Docker - The Orchestration

Hello everyone!
So far in this series we've looked at the high-level architecture of the app we're building and the code for the four services that make up our simple shop application. Today, we'll look at the docker-compose files for running the full application and send in some requests to see if it goes boom 💥.

Again, the full code is in the GitHub repo if you want to jump straight to it.

Let's get started


Before we begin

Okay, so before we start, I'll just quickly explain how I've organized my docker-compose files and why I did what I did.

I saw 2 approaches to writing the compose files:

  1. Putting all the services in a single file.
  2. Logically grouping the services in separate files based on their nature.

In the first approach, as the description suggests, we would have just one docker-compose.yml file and running the application would be super easy: just run docker-compose up and that's it.
But I found writing the services a bit cumbersome this way, because I had to remember to add the depends_on option to many service definitions to avoid breakdowns caused by services starting before their dependencies are up, or by the Nginx gateway looking for the services and finding nothing.
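For illustration, the single-file approach ends up sprinkling dependency declarations like these throughout the file (an illustrative fragment, not the actual compose file from the repo):

# under services: in a single docker-compose.yml (illustrative only)
orders:
  depends_on:
    - nats-broker   # the broker must be up before orders can publish events

nginx:
  depends_on:       # the gateway needs every upstream service running
    - products
    - orders
    - auth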

To avoid adding all the dependencies manually in the compose files, I decided to go with the second approach and execute each file one-by-one in a particular order. You'll see what I mean in a minute.

This way, we just have to ask ourselves one question before writing a service definition:

Is the service an application dependency, a setup/pre-requisite step, or a part of the application code (like the products service or the orders service)?

and then put the service definition in the corresponding docker-compose file.

I personally find this approach much better in the long term from a maintainability POV.

And with that out of the way, let's get started for real this time.


Setup and Pre-requisites

This file is dedicated to anything we need to take care of before we even start running the application, such as setting up networks and volumes, migrating and seeding databases, etc. Here, I'm creating a network and a volume, like so:



version: '2'

networks:
  shop-intranet:
    driver: bridge # the standard driver for local, single-host networks

volumes:
  shop-data:



and then, to run this file:



docker-compose --file docker-compose.setup.yml up


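If you want to double-check that the network and the volume actually exist, the usual Docker CLI commands will show them (Compose prefixes their names with the project name):

docker network ls | grep shop-intranet
docker volume ls | grep shop-data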

Application Dependencies

Any tools/applications that need to be running before we start our main application code go in this file, e.g. databases, caches, message brokers, etc.
I'm using this file to start the nats-broker that relays messages between the orders and the notifications services.



version: '2'

services:
  nats-broker:
    image: nats:2.9-alpine


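Depending on how the services reach the broker, you may also want to attach it to the shared network from the setup step so the orders and notifications services can resolve it by name. A sketch (not necessarily what the repo does):

version: '2'

services:
  nats-broker:
    image: nats:2.9-alpine
    networks:
      - shop-intranet

networks:
  shop-intranet: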

and then, to run this file:



docker-compose --file docker-compose.deps.yml up



Application Services

This file is for the backend services we've written ourselves, i.e. all the services that make up the application. A single service definition will look like this:



auth:
  extends:
    file: auth/docker-compose.yml
    service: app
  env_file: .env


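The extends option pulls the actual service definition out of a small per-service compose file that lives next to that service's code. I won't reproduce those files here, but the rough shape is something like this (illustrative only; check the repo for the real contents):

# auth/docker-compose.yml (rough shape, not the actual file)
version: '2'

services:
  app:
    build: .   # build the service from the Dockerfile in its own directory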

and we can also attach the services to the network and volume we created earlier, like this:



auth:
  extends:
    file: auth/docker-compose.yml
    service: app
  env_file: .env
  networks:
    - shop-intranet
  volumes:
    - shop-data:/app


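One thing to keep in mind: for those networks and volumes keys to resolve, docker-compose.services.yml also needs matching top-level entries. As long as all the files run under the same Compose project (same directory, same project name), Compose should pick up the network and volume created by the setup file instead of creating new ones; you could also mark them as external to be explicit. A minimal sketch of the top-level entries:

# at the bottom of docker-compose.services.yml
networks:
  shop-intranet:

volumes:
  shop-data: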

and then, to run this file:



docker-compose --file docker-compose.services.yml up



Nginx Gateway

This file contains our Nginx server, which opens up an entry point to our application's backend from the outside world.

First, let's look at the nginx.conf file where we put the configuration for our server. Most of it is pretty standard Nginx configuration, so I'll only explain the parts specific to our app.

This is how we define a server group.



upstream products {
  server microservices-shop_products_1:4000;
}



A server group is a way to name an address (IP/hostname and port number) so that we can use it later in our remaining configuration. We'll look at how we can use this in a moment.
Here the hostname is the name of the running container and 4000 is the port number inside that container.

We can define the server groups for the orders and auth services in the exact same manner.
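For example, they might look like this (the container names follow the same <project>_<service>_1 pattern as above; the port numbers are placeholders for whatever each service listens on inside its container):

upstream orders {
  server microservices-shop_orders_1:4000;   # port is a placeholder
}

upstream auth {
  server microservices-shop_auth_1:4000;     # port is a placeholder
}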

Then we can define the routing for our services. For that we define a location block. Let's start with the simplest one.



location ^~ /products/ {
  proxy_pass http://products/;
}



Here, /products/ is the path prefix of the incoming requests, and in the second line products is the name of the server group we defined earlier. So this piece of configuration means: whatever requests come in to /products/<anything>, forward them to the server group named products. The proxy_pass directive is what does that forwarding.
One more thing: because the proxy_pass URL ends with a trailing slash, Nginx replaces the matched /products/ prefix before forwarding, so an incoming /products/search request reaches the products service as just /search.

Then, we have the location block for the orders service.



location ^~ /orders/ {
  auth_request /auth/verify;
  proxy_pass http://orders/;
}



The new thing here is auth_request /auth/verify;. This line means that before forwarding the request to the orders server group, Nginx checks user authentication using the /auth/verify endpoint. Nginx internally makes a sub-request to the auth service: if the response is 200 OK, the original request is forwarded to the orders service; if the response is 401 Unauthorized, Nginx sends a 401 back to the requester instead.
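A quick way to see this in action: hitting the orders route without an Authorization header should come straight back with a 401 (assuming the auth service rejects missing tokens), without the request ever reaching the orders service:

curl -i http://localhost/orders/ \
    --header 'Content-Type: application/json' \
    --data '{"userId": "saini-g", "productIds": ["2"]}'
# expected status: HTTP/1.1 401 Unauthorized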

Finally, the most difficult one for me was the location block for the auth service.



location ^~ /auth/ {
  internal;
  proxy_pass http://auth/;

  proxy_pass_request_body off;
  proxy_set_header Content-Length "";
  proxy_set_header X-Original_URI $request_uri;
  proxy_set_header Authorization $http_authorization;
}



The internal directive at the start means that this location is accessible from within Nginx only, which is exactly what the auth_request sub-requests need. If we make any request to /auth/<anything> from the outside, we'll get back a 404 Not Found response.
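You can sanity-check that from the host; a direct request to the auth routes should be met with a 404 even though the service itself is running fine:

curl -i http://localhost/auth/verify
# expected status: HTTP/1.1 404 Not Found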

Then, there's a bunch of gibberish-looking stuff. Let's go through it line by line:

  • proxy_pass_request_body off; - Nginx drops the request body before forwarding the request to the auth service; verifying a token only needs the headers.
  • proxy_set_header Content-Length ""; - Set the Content-Length header to blank. Since we've dropped the request body, a non-zero content length would be misleading.
  • proxy_set_header X-Original_URI $request_uri; - Set the X-Original_URI header to $request_uri, a variable provided by Nginx that holds the full original request URI. This lets the auth service see which URL the client was actually trying to access.
  • proxy_set_header Authorization $http_authorization; - Forward the client's Authorization header ($http_authorization holds its value), because that's what the auth service has to verify.

This is it for the nginx.conf file. Now, moving on to the last piece of the puzzle.

Here's the service definition for the Nginx server.



version: '2'

services:
  nginx:
    image: nginx:1.25-alpine
    ports:
      - 80:80
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro



We're mapping local port 80 to port 80 inside the Nginx container. The volumes entry replaces /etc/nginx/nginx.conf inside the container with our local nginx.conf file, because that's where Nginx looks for its configuration by default.
The :ro at the end mounts the file read-only, preventing any modifications to the local file from inside the container. Simply put, the container can read the file but cannot change it.
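One more thing worth calling out: for upstream hostnames like microservices-shop_products_1 to resolve, the Nginx container has to share a Docker network with the service containers. Depending on how you wire things up, that can look something like this (a sketch, reusing the network from the setup file):

version: '2'

services:
  nginx:
    image: nginx:1.25-alpine
    ports:
      - 80:80
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    networks:
      - shop-intranet

networks:
  shop-intranet: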

and then, to run this file:



docker-compose --file docker-compose.gateway.yml up


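Putting it all together, the startup order is: setup, then dependencies, then services, then the gateway. If you don't want to type the four commands every time, a tiny helper script does the trick (just a convenience sketch; -d detaches each step so the next one can run):

#!/bin/sh
# Start the shop application one compose file at a time, in dependency order
docker-compose --file docker-compose.setup.yml up -d
docker-compose --file docker-compose.deps.yml up -d
docker-compose --file docker-compose.services.yml up -d
docker-compose --file docker-compose.gateway.yml up -d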

And voilà! 🤌 We have finally managed to run our microservices-based application.


Let's make some requests


GET /products/?ids=

NOTE: We can skip the ids query parameter to get all the products.



curl "http://localhost/products/?ids=2,5"



Response:



[
    {
        "id": "2",
        "name": "Half Sleeve Shirt",
        "price": 15,
        "keywords": [
            "shirt",
            "topwear"
        ]
    },
    {
        "id": "5",
        "name": "Sunglasses",
        "price": 25,
        "keywords": [
            "accessories",
            "sunglasses"
        ]
    }
]


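And, as mentioned in the note above, dropping the ids query parameter returns the full product list:

curl http://localhost/products/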

POST /orders/



curl http://localhost/orders/ \
    --header 'Authorization: secret-auth-token' \
    --header 'Content-Type: application/json' \
    --data '{
        "userId": "saini-g",
        "productIds": ["2", "4"]
    }'



Response:



{
    "productIds": [
        "2",
        "4"
    ],
    "totalAmount": 33,
    "userId": "saini-g",
    "id": 2487
}



Also, we get the logs in the console to verify that everything is running as expected:

[Screenshot: "New order placed" logs in the console]


That was all for today. It was a long one, so thanks a lot and congratulations to everyone who stuck with it till the very end.

But wait! I have a small bonus for all you good learners 🤩. But for that you'll have to wait for the next part 😛.

I hope you enjoyed this series and learnt something new.

Feel free to post any questions you have in the comments below.

Cheers!
