I am a huge fan of Docker, and I recently finished setting up my own environment for local development that I can easily port to production. I have to mention that it was heavily inspired by Laradock.
- Clone my repository.
- Follow the README.md.
Docker Containers for Local Development
Here's the stack I use (I may be biased towards Linux as I use Manjaro as my daily driver):
- Docker and docker-compose (I mean, Docker containers, right? 🤷♂️)
- A local DNS server such as dnsmasq
- An SSL certificate generator such as mkcert
- Traefik as a proxy server
I use environment variables to keep configuration out of the code. The .env.example file contains the default values I start with, from which I create a .env file:
cp .env.example .env
docker-compose.example.yml contains defaults that will end up in the docker-compose.yml file. This file works hand in hand with the .env file:
cp docker-compose.example.yml docker-compose.yml
The docker-compose.yml configuration makes use of two networks external to the docker-compose project, namely dockernet and backdocker. I create the two networks using any IP range I want, keeping in mind that I'll have to update the IP addresses in the .env file:
docker network create --subnet 192.168.90.0/24 --gateway 192.168.90.1 backdocker
docker network create --subnet 192.168.0.0/24 --gateway 192.168.0.1 dockernet
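For containers to join these pre-created networks, the docker-compose.yml file has to declare them as external. A minimal sketch of that declaration (the whoami service and its IP are hypothetical examples, not part of my stack):

```yaml
# docker-compose.yml (fragment) — network names match the commands above
networks:
  backdocker:
    external: true
  dockernet:
    external: true

services:
  whoami:                            # hypothetical example service
    image: traefik/whoami
    networks:
      dockernet:
        ipv4_address: 192.168.0.10   # must fall inside the subnet created above
```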
dnsmasq makes it easier to have services and projects running under an optional domain name on my local machine. It's pretty much like having an automated /etc/hosts file. Once I have a domain set up, I don't need to worry about adding subdomains. A setup guide is available here.
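The key directive is a wildcard address rule. A minimal /etc/dnsmasq.conf sketch for the local.test domain used below, assuming dnsmasq answers on the loopback interface:

```
# /etc/dnsmasq.conf (sketch)
listen-address=127.0.0.1        # only answer queries from this machine
address=/local.test/127.0.0.1   # resolve local.test and every subdomain to localhost
```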
The domain I use for my local development is usually local.test.
After completing the configurations, I run the following commands:
dnsmasq --test                 # confirm the syntax of the config file
sudo systemctl enable dnsmasq  # enable the dnsmasq service
sudo systemctl start dnsmasq   # start the dnsmasq service (or restart if it was running before)
I had to make changes to my resolvconf.conf so that I could still browse external websites.
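With openresolv, the change amounts to putting the local dnsmasq instance first in the resolver list; a sketch of the relevant line (the rest of the file is assumed unchanged):

```
# /etc/resolvconf.conf (sketch)
# Query the local dnsmasq instance first; dnsmasq forwards
# everything outside local.test to the upstream nameservers.
name_servers=127.0.0.1
```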
After making said changes, I ran the following command and restarted the dnsmasq service:
sudo resolvconf -u # updates resolv subdirectories
And that's it: as long as I have a service running at port 80, the domain local.test will resolve to it without needing to touch my hosts file. To see how I handled services running at other ports, keep reading! 😉
mkcert is an awesome tool I use for local SSL development. According to the developers of the tool:

mkcert is a simple tool for making locally-trusted development certificates. It requires no configuration.

Installation and setup instructions can be found at this GitHub repository.
$ mkcert -install
Created a new local CA 💥
The local CA is now installed in the system trust store! ⚡️
The local CA is now installed in the Firefox trust store (requires browser restart)! 🦊

$ mkcert example.com "*.example.com" example.test localhost 127.0.0.1 ::1

Created a new certificate valid for the following names 📜
 - "example.com"
 - "*.example.com"
 - "example.test"
 - "localhost"
 - "127.0.0.1"
 - "::1"

The certificate is at "./example.com+5.pem" and the key at "./example.com+5-key.pem" ✅
Using certificates from real certificate authorities (CAs) for development can be dangerous or impossible (for hosts like
127.0.0.1), but self-signed certificates cause trust errors. Managing your own CA is the best solution, but usually involves arcane commands, specialized knowledge and manual steps.
mkcert automatically creates and installs a local CA in the system…
Once I installed mkcert, I created a bash script to help with using it, for the sole purpose of creating SSL certificates. It's available in my repository. The script itself is heavily commented and can be used to install the local CA and create a domain certificate all at once.
You could explore the script, although the mkcert repository documentation will solidify your knowledge of the tool. I will, however, show you how I used it below.
mkcert -install # to create a new local CA
To create the certificates that I used with my Traefik container, I ran the following command from the root of my repository.
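The exact invocation lives in the generate.sh script; a sketch of the equivalent command, assuming the certificates belong under the traefik directory (the output paths are assumptions):

```shell
mkcert -cert-file traefik/certs/local.test.pem \
       -key-file traefik/certs/local.test-key.pem \
       "local.test" "*.local.test"
```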
The reason I used a wildcard domain is how I use service names as subdomains: a wildcard allows one certificate to cover any number of subdomains.
When creating websites that should be mobile-first, I like to have the experience of entering fully qualified domain names. But for me to implement SSL, and to have the certificates trusted on my mobile device(s), I had to have the root CA installed on my device(s) as well. It is the rootCA.pem file in the folder printed by the command mkcert -CAROOT. The developers of mkcert explained it rather well in their documentation. In a nutshell:
On iOS, you can either use AirDrop, email the CA to yourself, or serve it from an HTTP server. After installing it, you must enable full trust in it.
For Android, you will have to install the CA and then enable user roots in the development build of your app.
As explained by the mkcert developers, I had to set the NODE_EXTRA_CA_CERTS environment variable. I had the following command appended to my ~/.bash_aliases file so that it runs in every terminal:
export NODE_EXTRA_CA_CERTS="$(mkcert -CAROOT)/rootCA.pem"
Now, onto the best part, the proxy server! 🤩
I use Traefik in development as well as in production. I find it easier to transition projects that way, since the only difference between environments is just a configuration file. Additionally, I only ever have to expose port 80 or port 443 to the internet for any of the services I have, whether in production or development.
- Within the traefik directory, there is a .env.example file that I copied to a .env file, similar to the overall configuration step. The only difference was that this .env was private to the Traefik container.
- Depending on the environment, either the traefik.development.yaml or the traefik.production.yaml file needed to be copied to a traefik.yaml file. Of course, since I was dealing with local development, I had to go with the former. It contained the configurations for the Traefik container.
cp .env.example .env
cp traefik.development.yaml traefik.yaml
Traefik has great documentation on their website that goes in-depth into the options I configured.
There were a few things I had to do first. Going forward, every command was run within the traefik directory:
- Create the dynamic directory where all the routes are stored and configured for Traefik to 'see'.
- In order to use SSL with Traefik, it needs to know where the SSL certificates are. Since I generated them with the generate.sh script, they were already in the right directory. I then copied the tls-certificates.yml file from the example-dynamic directory to the dynamic directory.
- I had to copy the traefik-service.yml file from the example-dynamic/services directory into the dynamic directory so that I could interact with the Traefik dashboard at the URL specified in that file.
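The tls-certificates.yml file tells Traefik's file provider where the certificates live; a sketch for the Traefik v2 dynamic-configuration format, with paths assumed to match the mkcert output above:

```yaml
# dynamic/tls-certificates.yml (sketch) — paths are assumptions
tls:
  certificates:
    - certFile: /etc/certs/local.test.pem
      keyFile: /etc/certs/local.test-key.pem
```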
For docker projects, I used
example-dynamic/services/container-service.yml as a template.
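What such a service file looks like, sketched for a hypothetical myapp container on the dockernet network (the names, IP, and subdomain are assumptions, not the template's actual contents):

```yaml
# dynamic/myapp-service.yml (sketch) — all names and addresses hypothetical
http:
  routers:
    myapp:
      rule: "Host(`myapp.local.test`)"   # service name as a subdomain
      service: myapp
      tls: {}                            # serve over HTTPS with the wildcard cert
  services:
    myapp:
      loadBalancer:
        servers:
          - url: "http://192.168.0.10"   # the container's fixed IP on dockernet
```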
For locally running services, such as the ones using a process manager like PM2, I used example-dynamic/services/http-service.yml as a template.
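Compared to a container service, the only real difference is that the backend URL points at the host instead of a container; a sketch assuming a Node app on port 3000 reachable at the dockernet gateway address (names hypothetical):

```yaml
# dynamic/node-app-service.yml (sketch) — names, IP, and port hypothetical
http:
  routers:
    node-app:
      rule: "Host(`node-app.local.test`)"
      service: node-app
      tls: {}
  services:
    node-app:
      loadBalancer:
        servers:
          - url: "http://192.168.0.1:3000"   # dockernet gateway = the host machine
```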
For middleware, I only made use of two: basic auth and https-redirection. To enable middleware, they had to be in the dynamic directory.
Then, in the middlewares array in a service router configuration, I listed the particular middleware by its name, appending @file to it, as the middleware configuration is contained in a file, e.g. https-redirection@file.
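Putting it together, a router referencing both middlewares might look like this (the router name and the basic-auth middleware name are assumptions based on the description above):

```yaml
# dynamic/myapp-service.yml (fragment, sketch) — names hypothetical
http:
  routers:
    myapp:
      rule: "Host(`myapp.local.test`)"
      service: myapp
      middlewares:
        - https-redirection@file   # defined in a file in the dynamic directory
        - basic-auth@file
```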
The basic auth middleware needed an array of users in the
user:password format that had to be created using the
htpasswd command. Any dollar signs in the resulting hash had to be doubled for escaping. That can be done with the following command:
echo $(htpasswd -nb $USERNAME $PASSWORD) | sed -e s/\\$/\\$\\$/g
# user:$$apr1$$XreceAun$$aWg8Y/AUo0CJDeFixyRuT0
Edit 10/01/2021: There's no need to escape the dollar signs, and the pipe can be dropped, so the final command is as below:
echo $(htpasswd -nb $USERNAME $PASSWORD)
# user:$apr1$XreceAun$aWg8Y/AUo0CJDeFixyRuT0
To manage the services, the following commands had to be run in the same directory as the docker-compose.yml file.
To run a service:
docker-compose up -d serviceName
serviceName is the name of a service in the docker-compose.yml file under the services key.
To stop a container service:
docker-compose stop serviceName
To destroy container services:
docker-compose down
This setup helps me develop in an environment that differs ever so slightly from a production environment in an effort to keep deploying to production as effortless as possible.