
Dominik Weber • Originally published at domysee.com

Setting up a Reverse-Proxy with Nginx and docker-compose

Nginx is a great piece of software that allows you to easily wrap your application inside a reverse-proxy, which can then handle server-related aspects like SSL and caching, completely transparently to the application behind it.

This is a cross-post from my personal website.

Introduction

Some aspects of web applications, like SSL encryption, request caching and service discovery, can be managed outside of the application itself. Reverse-proxies like Nginx can handle many of those responsibilities, so we as developers don't have to think about them in our software.

Additionally, some software is not meant to be available over the internet, since it doesn't have proper security measures in place. Many databases are like that. And in general it is good practice not to make internal services public-facing if they don't have to be.

All of that can be achieved with docker-compose and Nginx.

docker-compose

docker-compose is a neat little tool that lets you define a range of docker containers that should be started at the same time, along with the configuration they should be started with. This includes the published ports, the networks they belong to, the volumes mapped to them, the environment variables, and everything else that can be configured with the docker run command.

In this section I'll briefly explain how to configure the docker-compose features used in this article. For more details take a look at the documentation.

The main entry point is a docker-compose.yml file. It configures all aspects of the containers that should be started together.

Here is an example docker-compose.yml:

version: '3'
services:
  nginx: 
    image: nginx:latest
    container_name: production_nginx
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    ports:
      - 80:80
      - 443:443

  ismydependencysafe:
    image: ismydependencysafe:latest
    container_name: production_ismydependencysafe
    expose:
      - "80"

As you can see, there are two services specified.

First nginx, with the container name production_nginx. It specifies a volume that replaces the default Nginx configuration file, and it maps the host's ports 80 and 443 to the container's ports 80 and 443.

The second service is one I created myself. It exposes port 80. The difference to the ports configuration is that exposed ports are not published to the host machine; they are only reachable by other containers. That's why it can also use port 80, even though nginx already maps it.

There are a few other configuration options used in this article, specifically networks, volumes and environment variables.

Networks

With networks it is possible to specify which containers can talk to each other. They are specified as a new root config entry and on the container configurations.

version: '3'
services:
  nginx:
    ...
    networks:
      - my-network-name

  ismydependencysafe:
    ...
    networks:
      - my-network-name

networks:
  my-network-name:

In the root object networks, the network my-network-name is defined. Each container is assigned to that network by adding it to the network list.

If no network is specified, all containers are in the same network, which is created by default. Therefore, if only one network is used, no network has to be specified at all.

A convenient feature of networks is that containers in the same one can reference each other by name. In the example above, the url http://ismydependencysafe will resolve to the container ismydependencysafe.
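A quick way to see this name resolution in action is to run a command inside the nginx container and address the other service by name; a minimal check, assuming the containers are already running (getent should be available in the Debian-based nginx image, curl may or may not be installed in yours):

docker-compose exec nginx getent hosts ismydependencysafe
docker-compose exec nginx curl -I http://ismydependencysafe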

Volumes

Volumes define persistent storage for docker containers. If an application writes somewhere no volume is defined, that data will be lost when the container stops.

There are two types of volumes: those that map a file or directory on the host to one inside the container, and named volumes, which just make a file or directory persistent without making it accessible on the host's file system (of course the data is stored somewhere, but that location is Docker implementation specific and should not be meddled with).

The first type, volumes that map a specific file or directory into the container, we have already seen in the example above. Here it is again, with an additional volume that also maps a directory in the same way:

version: '3'
services:
  nginx: 
    image: nginx:latest
    container_name: production_nginx
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - /etc/letsencrypt/:/etc/letsencrypt/
...

Named volumes are specified similarly to networks, as a separate root configuration entry and directly on the container configuration.

version: '3'
services:
  nginx:
    ...
    volumes:
      - "certificates:/etc/letsencrypt/"

    ...

volumes:
  certificates:
...
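Named volumes also show up in Docker's own volume list. To check that the volume exists, or to find out where Docker actually stores it, it can be inspected; note that docker-compose prefixes the volume name with the project name, which defaults to the directory name (myproject is just a placeholder here):

docker volume ls
docker volume inspect myproject_certificates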

Environment Variables

Docker can also set environment variables for the application in the container. In the compose config there are multiple ways to do so: either by specifying a file that contains them, or by declaring them directly in docker-compose.yml.

version: '3'
services:
  nginx:
    ...
    env_file:
      - ./common.env
    environment:
      - ENV=development
      - APPLICATION_URL=http://ismydependencysafe
    ...

As you can see, both ways can also be used at the same time. Just be aware that variables set in environment override the ones loaded from the files.

The environment files must have the format VAR=VAL, one variable on each line.

ENV=production
APPLICATION_URL=http://ismydependencysafe
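To double-check which variables a running container actually ends up with, you can print its environment from the host; a quick check for the nginx service from the example above, assuming the stack is already running:

docker-compose exec nginx env | grep -E 'ENV|APPLICATION_URL'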

CLI

The commands for starting and stopping the containers are pretty simple.

To start use docker-compose up -d.

The -d specifies that it should be started in the background. Without it, the containers would be stopped when the command line is closed.

To stop use docker-compose down.

Both commands look for a docker-compose.yml file in the current directory. If it is somewhere else, specify it with -f path/to/docker-compose.yml.
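A few other docker-compose commands are handy while working with this setup; none of them are required, but they help with inspecting and debugging the stack (nginx refers to the service name from the example above):

docker-compose ps                 # list the containers of this project and their state
docker-compose logs -f nginx      # follow the logs of a single service
docker-compose restart nginx      # restart one service, e.g. after changing nginx.conf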

Now that the basics of docker-compose are clear, let's move on to Nginx.

Nginx

Nginx is a web server with a wide array of features, including reverse proxying, which is what it is used for in this article.

It is configured with an nginx.conf file. By default Nginx looks for it at /etc/nginx/nginx.conf, but it is of course possible to specify another file.

As a reverse proxy, it can transparently handle two very important aspects of a web application, encryption and caching. But before going into detail about that, let's see how the reverse proxy feature itself is configured:

http {
  server {
    server_name your.server.url;

    location /yourService1 {
      proxy_pass http://localhost:80;
      rewrite ^/yourService1(.*)$ $1 break;
    }

    location /yourService2 {
      proxy_pass http://localhost:5000;
      rewrite ^/yourService2(.*)$ $1 break;
    }
  }

  server {
    server_name another.server.url;

    location /yourService1 {
      proxy_pass http://localhost:80;
      rewrite ^/yourService1(.*)$ $1 break;
    }

    location /yourService3 {
      proxy_pass http://localhost:5001;
      rewrite ^/yourService3(.*)$ $1 break;
    }
  }
}

The Nginx config is organized in contexts, which define the kind of traffic they are handling. The http context is (obviously) handling http traffic. Other contexts are mail and stream.

The server configuration specifies a virtual server, and each virtual server can have its own rules. The server_name directive defines which urls or IP addresses the virtual server responds to.

The location configuration defines where to route incoming traffic. Depending on the url, the requests can be passed to one service or another. In the config above, the start of the route specifies the service.

proxy_pass sets the new url, and with rewrite the url is rewritten so that it fits the service. In this case, the /yourService{x} prefix is removed from the url.
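To make the effect of the rewrite concrete, here is how a request would travel through the first location block above (the path /api/status is just a made-up example):

Incoming request:   http://your.server.url/yourService1/api/status
After the rewrite:  /api/status
Proxied to:         http://localhost:80/api/status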

This was a general overview, later sections will explain how caching and SSL can be configured.

For more details, check out the docs.

Now that we know the pieces, let's start putting them together.

Setup Nginx as a Reverse-Proxy inside Docker

For a basic setup only 3 things are needed:

1) Mapping of the host ports to the container ports
2) Mapping a config file to the default Nginx config file at /etc/nginx/nginx.conf
3) The Nginx config

In a docker-compose file, the port mapping can be done with the ports config entry, as we've seen above.

    ...
    ports:
      - 80:80
      - 443:443
    ...

The mapping for the Nginx config is done with a volume, which we've also seen before:

    ...
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    ...

The Nginx config is assumed to be in the same directory as docker-compose.yml (./nginx.conf), but it can of course be anywhere.

Cache Configuration

Adding caching to the setup is quite easy, only the Nginx config has to be changed.

In the http context, add a proxy_cache_path directive, which defines the local filesystem path for cached content as well as the name and size of the shared memory zone.

Keep in mind though that the path is inside the container, not on the host's filesystem.

http {
    ...
    proxy_cache_path /data/nginx/cache keys_zone=one:10m;
}
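Because the path is inside the container, the cached files are also gone whenever the container is removed. If you want them to survive, or want to inspect them from the host, the directory can be mapped with a volume just like the config file; a possible mapping for the path used above (the host directory ./nginx/cache is only a suggestion):

    ...
    volumes:
      - ./nginx/cache/:/data/nginx/cache/
    ...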

In the server or location context for which responses should be cached, add a proxy_cache directive specifying the memory zone.

  ...
  server {
    proxy_cache one;
  ...

That's enough to define the cache with the default caching configuration. There are a lot of other directives which specify which responses to cache in much more detail. For more details on those, have a look at the docs.
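To see whether a response was actually served from the cache, Nginx's $upstream_cache_status variable can be exposed as a response header. This is an optional addition, not required for caching to work, and goes into the same server or location context:

    add_header X-Cache-Status $upstream_cache_status;

The header then reports HIT, MISS, EXPIRED or BYPASS for each proxied response, which makes debugging the cache configuration a lot easier.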

Securing HTTP Traffic with SSL

By now the server setup is finished. docker-compose starts up all containers, and the Nginx container acts as a reverse-proxy for the services. There is just one thing left to set up, as this site so beautifully explains, encryption.

To install certbot, the client that fetches certificates from Let’s Encrypt, follow the install instructions.

Generating SSL Certificates with certbot

certbot has a variety of ways to get SSL certificates. There are plugins for widespread webservers, like Apache and Nginx, one to use a standalone webserver to verify the domain, and of course a manual way.

We'll use the standalone plugin. It starts up a separate webserver for the certificate challenge, which means the port 80 or 443 must be available. For this to work, the Nginx webserver has to be shut down, as it binds to both ports, and the certbot server needs to be able to accept inbound connections on at least one of them.

To create a certificate, execute

certbot certonly --standalone -d your.server.url

and follow the instructions. You can also create a certificate for multiple urls at once, by adding more -d parameters, e.g. -d your.server1.url -d your.server2.url.
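Putting the port requirement and the command together, a first-time certificate request on the server could look like the following sketch; certonly tells certbot to only obtain the certificate without trying to install it, and the compose file path is a placeholder:

docker-compose -f path/to/docker-compose.yml down
certbot certonly --standalone -d your.server.url
docker-compose -f path/to/docker-compose.yml up -d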

Automating Certificate Renewal

The Let's Encrypt CA issues short-lived certificates; they are only valid for 90 days. This makes automating the renewal process important. Thankfully, certbot makes that easy with the command certbot renew. It checks all installed certificates and renews the ones that will expire in less than 30 days.

It will use the same plugin for the renewal as was used when initially getting the certificate. In our case that is the standalone plugin.

The challenge process is the same, so for renewals, too, port 80 or 443 must be free.

certbot provides pre and post hooks, which we use to stop and start the webserver during the renewal, to free the ports.

The hooks are executed only if a certificate needs to be renewed, so there is no unnecessary downtime of your services.

Since we are using docker-compose, the whole command looks like this:

certbot renew --pre-hook "docker-compose -f path/to/docker-compose.yml down" --post-hook "docker-compose -f path/to/docker-compose.yml up -d"

To complete the automation simply add the previous command as a cronjob.

Open the cron file with crontab -e.

In there add a new line with

@daily certbot renew --pre-hook "docker-compose -f path/to/docker-compose.yml down" --post-hook "docker-compose -f path/to/docker-compose.yml up -d"

That's it. Now the renew command is executed daily, and you won't have to worry about your certificates' expiration date.
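If you want to verify the setup without issuing real certificates, certbot's --dry-run flag runs the whole renewal process against the Let's Encrypt staging environment:

certbot renew --dry-run --pre-hook "docker-compose -f path/to/docker-compose.yml down" --post-hook "docker-compose -f path/to/docker-compose.yml up -d"

Depending on the certbot version the pre and post hooks may still be executed during the dry run, so expect a short downtime while testing.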

Using the Certificates in the Nginx Docker Container

By now the certificates are requested and stored on the server, but we don't use them yet. To achieve that, we have to

1) Make the certificates available to the Nginx container and
2) Change the config to use them

To make the certificates available to the Nginx container, simply specify the whole letsencrypt directory as a volume on it.

  ...
  nginx: 
    image: nginx:latest
    container_name: production_nginx
    volumes:
      - /etc/letsencrypt/:/etc/letsencrypt/
  ...

Adapting the config and making it secure is a bit more work.
By default, a virtual server listens on port 80, but with SSL it should also listen on port 443. This has to be specified with two listen directives.

Additionally, the certificate must be defined. This is done with the ssl_certificate and ssl_certificate_key directives.

  ...
  server {
    ...
    listen 80;
    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/your.server.url/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/your.server.url/privkey.pem;
  }
  ...

These small changes are enough to configure nginx for SSL.

It uses the default SSL settings of Nginx though, which is ok, but can be improved upon.
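One optional refinement, not part of the setup above: instead of serving the same content over both plain HTTP and HTTPS, you can redirect all HTTP traffic to HTTPS. A sketch of that is a separate server block, which would replace the listen 80 directive in the block above:

  server {
    listen 80;
    server_name your.server.url;
    return 301 https://$host$request_uri;
  }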

Improving Security of Nginx Config

At the beginning of this section I should mention that, if you use the latest version of nginx, its default SSL settings are secure. There is no need to define the protocols, ciphers and other parameters.

That said, there are a few SSL directives with which we can improve security even further.

Just keep in mind that by setting these, you are responsible for keeping them up to date yourself. Any changes Nginx makes to its default settings won't affect you, since you're overriding them.

First, set

ssl_protocols TLSv1.1 TLSv1.2;

This disables all SSL protocols as well as TLSv1.0, which are considered insecure (SSLv2, SSLv3 and TLSv1.0). TLSv1.1 and TLSv1.2 are, at the time of writing (July 2018), considered secure, but nobody can promise that they will not be broken in the future.

Next, set

ssl_prefer_server_ciphers on;
ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DHE+AES128:!ADH:!AECDH:!MD5;

The ciphers define how the encryption is done. Those values are copied from this article, as I'm not an expert in this area.

Those are the most important settings. To improve security even more, there are plenty of dedicated hardening guides worth following.

You can check the security of your SSL configuration with a great website SSL Labs provides.
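Beyond the protocol and cipher settings, two directives are commonly added on top: an SSL session cache, which speeds up repeated handshakes, and an HSTS header, which tells browsers to only ever contact your domain over HTTPS. Treat the following as an optional sketch rather than part of the setup above, and be careful with HSTS, since browsers cache it for the configured max-age:

  ssl_session_cache shared:SSL:10m;
  ssl_session_timeout 1d;
  add_header Strict-Transport-Security "max-age=63072000" always;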

Wrap up

In this article we've covered how to set up docker-compose, how to use its network and volume features and set environment variables, and how to use Nginx as a reverse proxy, including caching and SSL security. Everything that's needed to host a project.

Just keep in mind that this is not a terribly professional setup; any important service will need a more sophisticated one, but for small projects or side-projects it is totally fine.

Amendment

Here are the resulting nginx.conf and docker-compose.yml files. They include placeholder names, urls and paths for your applications.

docker-compose.yml

version: '3'
services:
  nginx: 
    image: nginx:latest
    container_name: production_nginx
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/error.log:/etc/nginx/error_log.log
      - ./nginx/cache/:/etc/nginx/cache
      - /etc/letsencrypt/:/etc/letsencrypt/
    ports:
      - 80:80
      - 443:443

  your_app_1:
    image: your_app_1_image:latest
    container_name: your_app_1
    expose:
      - "80"

  your_app_2:
    image: your_app_2_image:latest
    container_name: your_app_2
    expose:
      - "80"

  your_app_3:
    image: your_app_3_image:latest
    container_name: your_app_3
    expose:
      - "80"

nginx.conf

events {

}

http {
  error_log /etc/nginx/error_log.log warn;
  client_max_body_size 20m;

  proxy_cache_path /etc/nginx/cache keys_zone=one:500m max_size=1000m;

  server {
    server_name server1.your.domain;

    location /your_app_1 {
      proxy_pass http://your_app_1:80;
      rewrite ^/your_app_1(.*)$ $1 break;
    }

    location /your_app_2 {
      proxy_pass http://your_app_2:80;
      rewrite ^/your_app_2(.*)$ $1 break;
    }
  }

  server {
    server_name server2.your.domain;
    proxy_cache one;
    proxy_cache_key $request_method$request_uri;
    proxy_cache_min_uses 1;
    proxy_cache_methods GET;
    proxy_cache_valid 200 1y;

    location / {
      proxy_pass http://your_app_3:80;
      rewrite ^/your_app_3(.*)$ $1 break;
    }

    listen 80;
    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/server2.your.domain/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/server2.your.domain/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
  }
}

Follow me on Twitter for more of my thoughts, articles, projects and work.

Top comments

Claypenguin

Hi,

nice article. There's only one thing I'm wondering.

Is the expose: "80" really needed? Because according to the docker/docker-compose docs containers that are connected to the same network should be exposed to each other. Or am I missing something here?

cheers



samayo

Hi, nice guide.

Does this work for localhost though? As in, if I want to host multiple sites on my local machine just for development purposes. I ask because I have used a few guides like the one you wrote, and they all worked on my VPS but not on localhost.

Julian Zammit

Hi Dominik,

Great article! Thanks for that.

I've seen others requesting the codebase for this, but if you could share the final version of the nginx.conf and docker-compose.yml files that would be a great start to solidifying people's understanding of this topic.

I know it would help me loads :)

Cheers!

Dominik Weber

Hi Julian,

sorry it took so long, I'm sure you already figured it out, whatever problem you had.

But for what it's worth, I've added them now :)


Morne Du Preez

Hi,

I like the article as it is clear and easy to follow.

I have followed your instructions up until the SSL cert setup.

I have an issue where the reverse proxy complains that it does not have "events" in the configuration file.

Is there something that I missed somewhere?

Side note: your image was not available to use, so I just replaced it with the hello-world image used to test if docker is working.

Dominik Weber

Hi, glad that it helped!

You probably already figured it out, you have to have an empty events section in the nginx.conf file.

I've added the complete nginx.conf and docker-compose.yml files at the end, which includes the events section.

I didn't want to go into the details of nginx, but this is definitely something I should've mentioned.

Sorry it took so long for me to reply!

Dave Neary

Hi,

How do you know if the cache works? I added an X-Cached header to HTTP responses with $upstream_cache_status but it is almost always MISS, even for static resources like Javascript files or images. I suspect there is a permission issue with the nginx process writing to the proxy_cache_path, but I don't know how to fix it. Should I use a volume?

Thanks,
Dave.

Osiozekhai Aliu

Servus Dominik,

nice, well explained and short article...
Can the whole code be seen somewhere on the web?

regards
Osio

Dominik Weber

Hi Osio,

Since it is split between the Nginx config, docker compose config, some console commands and cronjobs, I don't have any one file that contains all of it.

But I might create some example configs and a bash script that automates the setup when I get the time.

Dominik

Osiozekhai Aliu

Servus Dominik,

I've already made a start. Maybe you can fork it.

github.com/aliuosio/docker-web-proxy

Dominik Weber

Hi, I've (finally!) added full docker-compose.yml and nginx.conf files.

Hope this helps.

José Manuel

Thanks, I'm finally understanding how a proxy works.

kyawthetkhineais

With container_name set in docker-compose, the containers cannot be replicated in swarm mode. So how would you customize the docker-compose.yml to replicate the containers in docker swarm mode?

fadiquader

Wow!
Thank you Dominik.

Trev

It's nice to see a guide that doesn't involve nginx-proxy (jwilder) and letsencrypt-companion.

Andrew Potter

Excellent post. Exactly what I needed

Thanks

Camille Ollié

Thanks, you helped me a lot with docker :)
