See update summary at bottom of post for changelog.
Since 2016, certificate authority Let's Encrypt has offered free SSL/TLS certificates in a bid to make encrypted communications on the web ubiquitous. If you've ever bought a certificate, you'll know they're usually quite expensive, the process for verifying them is a pain in the gluteus maximus, and then they expire while you're on holiday, causing an outage.
With Let's Encrypt, all of these problems fade away, thanks to the Automated Certificate Management Environment (ACME) protocol, which lets you automate the verification and deployment of certificates, saving you money and time. ACME is an interesting topic in its own right, and you can read more about the various verification methods (called challenges) here, but today I'm going to show you how to easily set up a reverse proxy with automagical certificate generation, verification, and deployment.
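To make the challenge idea a little more concrete, here's a tiny sketch of the URL a CA fetches during an HTTP-01 challenge. The domain and token below are made-up examples for illustration; real tokens are issued by the CA per challenge.

```shell
# HTTP-01 challenge (sketch): the CA verifies you control a domain by
# fetching a token it issued from a well-known path on that domain.
DOMAIN="hello-world.example.com"
TOKEN="evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ"   # hypothetical token
CHALLENGE_URL="http://${DOMAIN}/.well-known/acme-challenge/${TOKEN}"
echo "$CHALLENGE_URL"
```

The companion container we set up below serves these challenge responses for us, which is why we never have to think about them.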
The first thing we're going to do is create a user-defined bridge network called service-network. I'm bad at naming things, but this seems applicable.
```shell
docker network create --subnet 10.10.0.0/24 service-network
```
Next, we'll set up a reverse proxy. A reverse proxy simply accepts requests and proxies them on to another service based on routing rules, such as which hostnames should go to which service containers.
We'll use an image by Jason Wilder, jwilder/nginx-proxy. This is a well-maintained project with a lot of documentation. We map ports 80 and 443 into the container so that we can handle both HTTP and HTTPS connections. We mount a number of volumes too: three are standard Docker volumes, and one is the Docker daemon's UNIX socket.
You might notice that I often use long-form arguments in scripts. While we could easily save space by putting everything on one line, that wouldn't help whoever has to maintain these scripts in the future.
```shell
docker run \
  --detach \
  --restart always \
  --publish 80:80 \
  --publish 443:443 \
  --name nginx-proxy \
  --network service-network \
  --volume /var/run/docker.sock:/tmp/docker.sock:ro \
  --volume nginx-certs:/etc/nginx/certs \
  --volume nginx-vhost:/etc/nginx/vhost.d \
  --volume nginx-html:/usr/share/nginx/html \
  jwilder/nginx-proxy
```
Ok, so now we have our reverse proxy. Next, we need to set up the Let's Encrypt companion, for which we'll be using Yves Blusseau's image jrcs/letsencrypt-nginx-proxy-companion. I've been using this flawlessly for almost a year now.
We map the same volumes to this container, though no ports are published. We set the NGINX_PROXY_CONTAINER environment variable to match the name of our proxy container, and that's about it. This container will run a process whenever new service containers are detected, generating certificates and keeping them up to date.
```shell
docker run \
  --detach \
  --restart always \
  --name nginx-proxy-letsencrypt \
  --network service-network \
  --volume /var/run/docker.sock:/var/run/docker.sock:ro \
  --volume nginx-certs:/etc/nginx/certs \
  --volume nginx-vhost:/etc/nginx/vhost.d \
  --volume nginx-html:/usr/share/nginx/html \
  --env NGINX_PROXY_CONTAINER="nginx-proxy" \
  jrcs/letsencrypt-nginx-proxy-companion
```
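If you want to watch the companion react as service containers come and go, tailing its logs is the easiest way. This is plain docker CLI, nothing specific to the image:

```shell
# Follow the companion's log stream to watch certificate
# requests and renewals as they happen
docker logs --follow nginx-proxy-letsencrypt
```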
Finally, it's time to add a service container. We'll use the tutum/hello-world image, quite a popular one for testing. For this, we're going to need a hostname with DNS configured. We'll pretend to use hello-world.example.com and pretend that we've set up an A record pointing to the IP address of this server.
```shell
docker run \
  --detach \
  --restart always \
  --name hello-world \
  --network service-network \
  --env VIRTUAL_HOST=hello-world.example.com \
  --env LETSENCRYPT_HOST=hello-world.example.com \
  --env LETSENCRYPT_EMAIL="email@example.com" \
  tutum/hello-world
```
So let's take a look at what's going on here. We set up a proxy container which listens on ports 80 and 443. We then set up an ACME certificate container which will manage certs for us. Then we created a hello-world container. We set the environment variable VIRTUAL_HOST to our hostname, and this is picked up by the proxy in order to route requests through to this container. The requests are routed through our user-defined bridge network, service-network. We also set the environment variables LETSENCRYPT_HOST and LETSENCRYPT_EMAIL, which are picked up by the ACME container and used when acquiring the certificate. FYI, the two hostnames (VIRTUAL_HOST and LETSENCRYPT_HOST) will always have to match. All we have to do is add these three variables to a container, and it'll be detected by the proxy and ACME containers, and in short order, it'll work.
Now, a few things to note. Port discovery: how does the proxy know which port to use? The hello-world image we use exposes a port in its Dockerfile with EXPOSE 80. This always takes precedence, so if the image has an exposed port, this will be used. But if your Dockerfile doesn't define an exposed port, or if you perhaps configure the port at runtime with an environment variable, e.g. HTTP_PORT=8000, then you can use the --expose argument to let the proxy know which port to use. If there are multiple ports available, you can specify which to use with the environment variable VIRTUAL_PORT. Just to reiterate, Dockerfile EXPOSE takes precedence over the --expose argument.
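As a sketch of that last case, suppose an image has no EXPOSE in its Dockerfile and serves on a runtime-configurable port. The image name, hostname, and ports below are hypothetical:

```shell
# Hypothetical image with no EXPOSE in its Dockerfile, listening on
# a configurable port. --expose announces the candidate ports to the
# proxy; VIRTUAL_PORT picks which of them to route traffic to.
docker run \
  --detach \
  --restart always \
  --name my-service \
  --network service-network \
  --expose 8000 \
  --expose 8080 \
  --env HTTP_PORT=8000 \
  --env VIRTUAL_PORT=8000 \
  --env VIRTUAL_HOST=my-service.example.com \
  example/my-service
```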
So now, when you hit https://hello-world.example.com, the DNS server returns an A record with your public IP, the browser then connects to this IP address on port 443, and in turn, the host routes that connection through to the nginx-proxy container. This terminates the SSL and proxies the request through to the hello-world container as a plain HTTP request.
Finally, there is another thing we can do. The proxy handles upgrading from HTTP to HTTPS, which is great, but sometimes you want to handle www/apex domain redirects. You can configure this in some domain registrars' control panels, but then you end up with a different IP address for your hostname.
There is a better solution, and the image we'll use this time is actually one of mine, adamkdean/redirect.
The way it works is super simple. You simply set up another container with VIRTUAL_HOST set to the domain you want to redirect away from, and configure its destination. Say, for example, we have example.com and we want it to always redirect to www.example.com, because we love the www.
We have our main image like so, bound to www.example.com:
```shell
docker run \
  --detach \
  --restart always \
  --name example-website \
  --network service-network \
  --env VIRTUAL_HOST=www.example.com \
  example/website
```
We then create the redirect companion container, bound to example.com, with the environment variable REDIRECT_LOCATION set to our preferred destination and REDIRECT_STATUS_CODE set as applicable.
```shell
docker run \
  --detach \
  --restart always \
  --name example-redirect \
  --network service-network \
  --env VIRTUAL_HOST=example.com \
  --env REDIRECT_LOCATION="http://www.example.com" \
  --env REDIRECT_STATUS_CODE=301 \
  adamkdean/redirect
```
What happens now is that requests for example.com hit this redirect container, which responds with REDIRECT_STATUS_CODE and the REDIRECT_LOCATION, sending the visitor on to www.example.com.
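To illustrate what the browser actually receives, here's a sketch of the redirect response built from those two variables. The headers are simplified; a real response would carry more of them.

```shell
# Sketch of the HTTP response a redirect like this produces.
# The variable names mirror the container's environment variables.
REDIRECT_STATUS_CODE=301
REDIRECT_LOCATION="http://www.example.com"
RESPONSE=$(printf 'HTTP/1.1 %s Moved Permanently\r\nLocation: %s' \
  "$REDIRECT_STATUS_CODE" "$REDIRECT_LOCATION")
echo "$RESPONSE"
```

The Location header is what tells the browser where to go next; the 301 status tells it (and search engines) that the move is permanent.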
We can of course use Let's Encrypt with these.
```shell
docker run \
  --detach \
  --restart always \
  --name example-website \
  --network service-network \
  --env VIRTUAL_HOST=www.example.com \
  --env LETSENCRYPT_HOST=www.example.com \
  --env LETSENCRYPT_EMAIL="firstname.lastname@example.org" \
  example/website

docker run \
  --detach \
  --restart always \
  --name example-redirect \
  --network service-network \
  --env VIRTUAL_HOST=example.com \
  --env LETSENCRYPT_HOST=example.com \
  --env LETSENCRYPT_EMAIL="email@example.com" \
  --env REDIRECT_LOCATION="https://www.example.com" \
  --env REDIRECT_STATUS_CODE=301 \
  adamkdean/redirect
```
In this case, the initial request for example.com hits the proxy, which terminates the SSL and proxies it through to the example-redirect container, which responds with a 301 to www.example.com. The next request, now for www.example.com, hits the proxy and is proxied through to the example-website container.
I hope this helps you. This setup can work for a single site or for a number of sites. I use it in situations where setting up a large container platform is a little bit overkill (Kubernetes for a blog is fine, right guys?).
It's super easy to update and deploy, and having used it for over a year now, it's working great. Thanks for reading.
Update (Feb 21, 2020): as requested, here is a docker-compose version of the above setup.
```yaml
version: "3"

services:
  web:
    image: example/website
    expose:
      - 8000
    environment:
      HTTP_PORT: 8000
      VIRTUAL_HOST: www.example.com
      LETSENCRYPT_HOST: www.example.com
      LETSENCRYPT_EMAIL: "firstname.lastname@example.org"
    networks:
      service_network:

  web-redirect:
    image: adamkdean/redirect
    environment:
      VIRTUAL_HOST: example.com
      LETSENCRYPT_HOST: example.com
      LETSENCRYPT_EMAIL: "email@example.com"
      REDIRECT_LOCATION: "https://www.example.com"
    networks:
      service_network:

  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - 80:80
      - 443:443
    networks:
      service_network:
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - nginx-certs:/etc/nginx/certs
      - nginx-vhost:/etc/nginx/vhost.d
      - nginx-html:/usr/share/nginx/html

  nginx-proxy-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    environment:
      NGINX_PROXY_CONTAINER: "nginx-proxy"
    networks:
      service_network:
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - nginx-certs:/etc/nginx/certs
      - nginx-vhost:/etc/nginx/vhost.d
      - nginx-html:/usr/share/nginx/html

networks:
  service_network:

volumes:
  nginx-certs:
  nginx-vhost:
  nginx-html:
```
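Assuming the file above is saved as docker-compose.yml, the whole stack comes up with a single command:

```shell
# Create the networks, volumes, and containers defined
# in docker-compose.yml, detached from the terminal
docker-compose up --detach
```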
Update (Feb 28, 2020): I realised that I made a mistake when talking about VIRTUAL_PORT. I've updated the document, but just to clarify: you tell the proxy which port to use with the EXPOSE command in a Dockerfile or the --expose argument on the Docker command line interface. If there are multiple ports exposed, you can then use the VIRTUAL_PORT environment variable to signal which exposed port to use. Sorry about the mix-up!
Update (Apr 22, 2020): Added some missing backslashes to the secure redirect snippets. Oops!
If you enjoyed this article, you might enjoy my next one, Debugging Docker Containers, where we delve into a few ways that you can work out just what is going on inside these black boxes we call containers.
A note on injecting docker.sock: the Docker daemon UNIX socket (/var/run/docker.sock) gives access to Docker which, in other words, is root access. Be sure you know what you're giving this to whenever you're running third-party images.