
Enable HTTPS on your local Docker dev environment

Andrew Schmelyun ・ 6 min read

A few weeks ago I was working on an app that I had spun up locally with Docker and realized I needed to make some requests to it over HTTPS. Until then, I had just used the default unsecured port 80 for all of my development sites. After a few hours of work and a bit of frustration, enabling HTTPS turned out to be a pretty simple addition. So, I figured I'd make this short tutorial showing how I accomplished it, and how you can add it to your local Docker setup.

Note: If you'd like to skip this written tutorial and watch the video instead, you can find it here on my YouTube channel.

Let's get started!✨

Initial Docker setup

I'll be using Docker Compose throughout this tutorial, as it's a super simple way of setting up a Docker network and creating containers for our individual services. Chances are you'll be using more than one service when you create your local Docker environment, but for this tutorial we'll just be using one, an Nginx service.

Our docker-compose.yml file for this will look like:

version: '3'

services:
  nginx: 
    image: nginx:latest
    container_name: nginx
    ports:
      - "80:80"
    volumes:
      - ./src:/usr/share/nginx/html

And we'll also add in an index.html file in our src directory to show that our website is working:

<h1>Hello, world!</h1>

If we save that file, run docker-compose up -d --build in our project's root, and then navigate to localhost in our web browser, we should see our simple website displayed.
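If you prefer the terminal, a quick sanity check could look something like this (assuming curl is installed on your machine):

docker-compose up -d --build

# Request the page from the Nginx container; you should get the <h1> back
curl http://localhost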

That's good! But, we need to be able to configure Nginx if we're going to add a certificate to it. Let's create a file called default.conf in an nginx directory in our project. We'll use the following basic configuration, changing our project's root so we know the configuration file is loading correctly:

server {
  listen 80;
  index index.html;
  server_name localhost;

  root /var/www/html;

  location / {
    try_files $uri $uri/ /404.html;
  }
}

Next, we'll update our docker-compose.yml file to add a new volume for the configuration file, and change the existing volume to point at the new web root we specified above:

version: '3'

services:
  nginx: 
    image: nginx:latest
    container_name: nginx
    ports:
      - "80:80"
    volumes:
      - ./src:/var/www/html
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf

Bring our container network down with docker-compose down and then back up with docker-compose up -d --build to rebuild using the new Nginx configuration and volumes.
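If you want to double-check that Nginx actually picked up the mounted configuration file, you can also ask it to validate its config from inside the running container; a quick sketch, using the nginx service name from our docker-compose.yml:

# Parse and validate the configuration we mounted into the container
docker-compose exec nginx nginx -t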

Navigating back to localhost in the browser, we should see our basic site again. This is good, as it means that our custom Nginx configuration file has loaded successfully!

The last thing I'm going to do before creating our certificate is associate this dev site with a custom local domain. First, I'll add an entry to my hosts file. On macOS and Linux this is located at /etc/hosts, and on Windows it's under C:\Windows\System32\drivers\etc\hosts.

At the bottom I'll add my localhost IP address, and a simple test domain:

127.0.0.1 ssldocker.test
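On macOS or Linux, you can append that line from the terminal instead of opening the file in an editor; a small sketch (sudo is needed because the hosts file is owned by root):

# Append the test domain to the hosts file, then confirm it resolves to 127.0.0.1
echo "127.0.0.1 ssldocker.test" | sudo tee -a /etc/hosts
ping -c 1 ssldocker.test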

Okay, I think our little dev environment is set up nicely and we can move on to the next step!

Creating a self-signed certificate

If you take a look at our docker-compose.yml file, we're building the nginx service from an image straight from Docker Hub. This is great for easily getting set up with zero configuration necessary, but if we wanted to install anything in our container (packages, prerequisite software, etc.), we'd have to do it manually every time the container is recreated.

Or, we can build the service off of a separate Dockerfile.

First, let's modify our docker-compose.yml file to reflect that change:

version: '3'

services:
  nginx: 
    build:
      context: .
      dockerfile: nginx.dockerfile
    container_name: nginx
    ports:
      - "80:80"
    volumes:
      - ./src:/var/www/html
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf

The context is the directory Docker will look in for our Dockerfile, and dockerfile gives its actual filename: nginx.dockerfile. Let's create that file in our project, and add the following to it:

FROM nginx:latest

RUN apt-get update
RUN apt-get install -y openssl
RUN mkdir -p /etc/nginx/certs/self-signed/
RUN openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/certs/self-signed/ssldocker.test.key -out /etc/nginx/certs/self-signed/ssldocker.test.crt -subj "/C=US/ST=Florida/L=Orlando/O=Development/OU=Dev/CN=ssldocker.test"
RUN openssl dhparam -out /etc/nginx/certs/dhparam.pem 2048

I'll explain what each of these lines does.

  • FROM details what Docker Hub image the service will be built from. Just like when we used it in our docker-compose.yml file, we're specifying the latest version of Nginx.
  • RUN executes commands while the image is being built, before any container is started from it. These lines install openssl, create the actual self-signed SSL certificate, and generate a Diffie-Hellman parameter file.

So now all that's left to do is add it to our Nginx service.
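Before we do, if you'd like to confirm the certificate really ends up baked into the image, you can build the service and inspect the certificate with openssl; a rough sketch, using the same paths as in nginx.dockerfile above:

docker-compose build nginx

# Print the subject and validity dates of the certificate inside the built image
docker-compose run --rm nginx openssl x509 \
  -in /etc/nginx/certs/self-signed/ssldocker.test.crt -noout -subject -dates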

Adding your certificate to Nginx

Let's open up our nginx/default.conf file again, and add the following to the bottom of it:

server {
  listen 443 ssl;
  index index.html;
  server_name ssldocker.test;

  root /var/www/html;

  location / {
    try_files $uri $uri/ /404.html;
  }

  ssl_certificate /etc/nginx/certs/self-signed/ssldocker.test.crt;
  ssl_certificate_key /etc/nginx/certs/self-signed/ssldocker.test.key;
  ssl_dhparam /etc/nginx/certs/dhparam.pem;
}

Note: This is also a good time to update the previous server block we added earlier to use server_name ssldocker.test as well.

Since we're using port 443 in our new server block, we have to publish it from the container as well. Let's open up our docker-compose.yml file and update the nginx service's ports section to the following:

ports:
  - "80:80"
  - "443:443"

Now, it's time to restart our container again. Running docker-compose down and then docker-compose up -d --build will take care of that. We can then go into our browser and navigate to https://ssldocker.test, and should be presented with an error!

Screenshot of Chrome displaying a connection insecure error

That's expected though. Most modern browsers will warn you about self-signed SSL certificates. We should be able to click the "Advanced" button at the bottom and proceed to our site.

Except, this might not work. For me, on macOS, there isn't a link to continue past this warning, at least not until we make some quick modifications.

Bypassing Chrome's warning

First, we'll need to get the certificate for our local development site. Follow these steps:

  • Click on the large red Not Secure badge on the left side of your URL bar
  • Click the Certificate (Invalid) item
  • Drag the certificate from the pop-up that appears directly onto your desktop

If you open up Finder and navigate to your Desktop folder, you should see an ssldocker.test.cer file (or a different name, depending on the test domain you chose). Double-click on this to open up Keychain Access.

You're going to want to scroll until you find the entry with your test domain in the Name column and certificate in the Kind column. Double-click on this, and a pop-up will appear. In that pop-up, expand the Trust section and change the first dropdown from Use System Default to Always Trust.

Screenshot of keychain access open with a self-signed ssl certificate

After making this change, you'll be prompted to enter your password. Once you do that, exit Keychain Access and go back to your browser. If you refresh the page and open the Advanced options, you should now see a link to bypass the error page and proceed to your site!
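If you'd rather not drag certificates out of Chrome, you can usually get the same result from the terminal by copying the certificate out of the container and marking it as trusted with macOS's security tool. A sketch, assuming the container is named nginx as in our docker-compose.yml:

# Copy the self-signed certificate out of the running container
docker cp nginx:/etc/nginx/certs/self-signed/ssldocker.test.crt ~/Desktop/

# Add it to the system keychain and mark it as trusted (you'll be asked for your password)
sudo security add-trusted-cert -d -r trustRoot \
  -k /Library/Keychains/System.keychain ~/Desktop/ssldocker.test.crt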

It's now being served over HTTPS, and you can verify that in devtools by checking that the request was made to port 443.
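You can also check it from the command line; curl's -k flag tells it to accept our self-signed certificate rather than failing verification:

# -I fetches only the response headers, -k skips certificate verification
curl -Ik https://ssldocker.test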

Next steps and alternatives

You should be able to use your local development site just as you would any other site served over HTTPS. If you're making API requests to your local Docker development site from something like a JavaScript framework, you'll most likely see fewer console errors and security warnings.

As an aside, if you'd like to streamline the SSL certificate creation process a bit and remove those Not Secure warnings from your browser, you could build a certificate using FiloSottile's mkcert package.
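I won't cover it in detail here, but the basic mkcert flow looks roughly like this on macOS with Homebrew (the generated .pem files would then take the place of the openssl-generated ones in our Nginx config):

# Install mkcert and register a locally-trusted certificate authority
brew install mkcert
mkcert -install

# Generate a certificate and key for our test domain
mkcert ssldocker.test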

If you have any questions about anything in this article, or would like to see smaller/more frequent tips about web development, feel free to reach out or follow me on Twitter.


Discussion


I also use the mkcert to generate self-signed certificates in my development mode.

It's simple to generate self-signed certificates and it can also generate a single or wildcard domain name to bind with certificates :).

 

I found mkcert after someone commented on a reddit post I made about this article. Installed it the same day, and found it WAY easier to use. I'll probably make a follow-up video using it instead, because it's just so simple.

 

On using HTTPS in development, will it help to use a custom domain name? Or just use localhost?

In production, what do you use, or do you Let's Encrypt every 3 months? I had found Nginx-le, but then I would need some Kubernetes orchestration?

 

I use Let's Encrypt in production, as I found it useful without having to roll my own SSL cert.

For your local development, I would use a development version of Let's Encrypt for HTTPS that is set up with Docker. You can point it to a folder and swap it out when you're ready for production deployment.

 

In more recent versions of Nginx the directive "dh_param" has been renamed to "ssl_dhparam"; this was a real 'gotcha' that had me stuck here.
Great tutorial, thanks!

 

Look into Caddy: automatic HTTPS, dead simple configuration, and it works great for local dev.