Using containers for development has become a widespread practice. The most common use case is running the services an application depends on: Redis, MongoDB, Elasticsearch, and so on. Most of the time, developers rely on docker-compose to define the whole set of services required by the application.
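For example, a minimal docker-compose.yml for an app that depends on Redis and MongoDB might look like this (service names, image tags, and ports here are illustrative, not a prescription):

```yaml
# docker-compose.yml -- illustrative service definitions
version: "3"
services:
  redis:
    image: redis:5
    ports:
      - "6379:6379"
  mongo:
    image: mongo:4
    ports:
      - "27017:27017"
```

Running docker-compose up then starts every service with one command.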
I've taken things further by creating container images with everything I need to develop in a given language. This includes (but is not limited to):
- Code editor
- Tracing tools
- Package manager
- Build tools
Even if you're using Docker for Mac, you still get a Linux-based environment to develop in. It's really good for getting familiar with Linux, which then carries over to building real production container images (among many other things).
Compared to using installer scripts, container images (Docker images in particular) are really quick to get going on a new machine, because you only need to download an image containing pre-compiled binaries. For example, I've had to compile the cquery language server from source for one of my images. With an installer script I'd have to either host the compiled binary myself or compile it on every machine, which takes quite some time (since you need to download clang and all that jazz).
With an installer script, things might also not work on a different machine because certain packages were installed in a different manner. For example, someone might've used a PPA to install a package on one machine, while on another machine it was installed from a .deb archive or the standard software sources. In contrast, once a container image is built it will just work on practically any machine.
Once the image is built, it won't break out of nowhere. This is unlike installer scripts: back when I stored my configurations and scripts in a plain git repository, setups would break depending on when the installer happened to run.
There are several things you'll want to map from the host into the container. I've written a tool called slipway to make this easier. Before I wrote it, I used a shell script to initialize the container.
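That script was essentially a wrapper around docker run with a list of bind mounts. A rough sketch (every mount below is an example; map whatever your workflow actually needs):

```sh
#!/bin/sh
# Rough sketch of a pre-slipway launcher script. The image name and all
# mounts are examples -- adjust them to your own setup.
docker run --rm -ti \
    -v "$HOME/workspace:/home/developer/workspace" \
    -v "$HOME/.gitconfig:/home/developer/.gitconfig:ro" \
    -v "$HOME/.ssh:/home/developer/.ssh:ro" \
    development-environment bash
```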
If you're not using Linux as the host operating system, things can get even more complicated.
For the uninitiated, building a docker image requires a configuration file (a Dockerfile) which defines steps to be executed in a container. After all of these steps are run, the container is "committed" as a new image.
Most images are based on another image. I recommend starting with an Ubuntu base, since it is very popular.
Add a file named Dockerfile in an empty directory:
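The first line of the Dockerfile picks the base image. I'm assuming an Ubuntu 18.04 base here; any recent Ubuntu tag works:

```dockerfile
# Base everything on Ubuntu; pin a specific tag rather than "latest"
FROM ubuntu:18.04
```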
The image only comes with a root user. We'll need to create a user with appropriate permissions to run inside of the container.
```dockerfile
# Feel free to change this to whatever you want
ENV DOCKER_USER developer

# Create a user with passwordless sudo. This is only acceptable because it
# is a private development environment not exposed to the outside world.
# Do NOT do this on your host machine or otherwise.
RUN apt-get update && \
    apt-get install -y sudo && \
    adduser --disabled-password --gecos '' "$DOCKER_USER" && \
    adduser "$DOCKER_USER" sudo && \
    echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers && \
    touch /home/$DOCKER_USER/.sudo_as_admin_successful && \
    rm -rf /var/lib/apt/lists/*

USER "$DOCKER_USER"
WORKDIR "/home/$DOCKER_USER"
```
The minimized Ubuntu base image (which is what our image is built on) does not include much, since it's built for "ready to ship" applications. Let's add some basic packages for development:
```dockerfile
RUN yes | sudo unminimize && \
    sudo apt-get update && \
    sudo apt-get install -y man-db bash-completion build-essential curl openssh-client && \
    sudo rm -rf /var/lib/apt/lists/*
```
If you aren't using a GUI editor like VSCode or Webstorm, you'll probably want a program which can take a single shell session and split it into multiple ones. This is called a terminal multiplexer. I prefer tmux.
```dockerfile
RUN sudo apt-get update && \
    sudo apt-get install -y tmux && \
    sudo rm -rf /var/lib/apt/lists/*
```
There are plenty of customization options I could discuss here, but to keep this tutorial short I'll skip them. Feel free to check out my project for riced-up development environment ideas.
Now we need something to edit source code. I can recommend Neovim, but any editor will do.
```dockerfile
RUN sudo apt-get update && \
    sudo apt-get install -y neovim && \
    sudo rm -rf /var/lib/apt/lists/*
```
As above, I am skipping customization options.
In this example we'll install Node.js, since most developers on the site use it. If you're using Python or some other language, install it at this step instead.
```dockerfile
RUN curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.11/install.sh | bash

ENV NVM_DIR /home/$DOCKER_USER/.nvm

RUN . "$NVM_DIR/nvm.sh" && \
    nvm install --lts && \
    nvm alias default stable
```
Install any additional tools for your language of choice at this step.
All that's left is to build the image before you can run it. In your terminal, run:
```sh
docker build -t development-environment .
```
The final command to run your environment will vary based on what you want to map in from the host. Here is an example:
```sh
docker run --rm -ti \
    -v "$HOME/workspace:/home/developer/workspace" \
    development-environment bash
```
For a more detailed version of this tutorial, see my repository's tutorial section.