Docker has become synonymous with containerisation. Although there are similar services out there, whenever I've heard someone talking about containers, they're talking about Docker.
This blog post will explain containerisation in a simple way, describe some Docker basics, and give three examples of why Docker is used to deliver software.
Containers are a way to build, run and share an application, along with everything that app relies on, so that it can all be shipped together. The app and its environment are isolated and packaged together - and these packages are standardised.
Containers are isolated environments, they don't depend on the setup of the outside world (at least, not much). This means when you run the container locally you can have very high confidence of it working in production.
- Chris James, Docker Crash Course
Much like shipping containers, the container itself is standardised, making it simpler to use wherever in the world those standards are followed. This in turn makes it easier to build tools to support containerised code.
Having your code exist in the controlled environment of a container means you can expect your machine will run software the same as in production. You can be confident your tests will pass having seen them pass locally in the containerised environment.
Docker is a set of tools that make creating, running and sharing applications easier, by using containerisation.
A container is a running instance of an image; an image is like the blueprint you will create your containers from.
A configuration file called a Dockerfile details how the image should be built. The Dockerfile is built in layers: you could use an existing image from the open-source registry Docker Hub as a base layer, and customise the configuration on top of it, with the uppermost layers taking precedence. There are plenty of commands for building your custom configuration, including:

- FROM (which specifies the existing image to configure on top of)
- RUN (which indicates an executable to be run)
- CMD (which specifies a default command for running a container)
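To sketch how these three commands fit together, here's a minimal, hypothetical Dockerfile (the base image and the curl example are assumptions for illustration, not from this post):

```dockerfile
# Base layer: configure on top of the official Alpine Linux image
FROM alpine:3.19

# RUN executes at build time, adding a new layer on top of the base
RUN apk add --no-cache curl

# CMD sets the default command a container built from this image will run
CMD ["curl", "--version"]
```

Each instruction adds a layer, which is why changing only the last instruction lets Docker reuse the cached layers above it.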
💡 Imagine a game of four-in-a-row. If each column in the grid is a configurable setting, and each layer has its own coloured coins, the Docker image will build from whichever colour is at the top of each column, ignoring any coloured coins below.
To use your image outside of your local machine, it will need to be saved in a registry - on Docker Hub, for example, or in your own network repo.
Docker has a walkthrough explaining how to setup, build, and run your image, and how to share it on Docker Hub.
I made a webpage for this blogpost, and put it inside an image, so it can be shared and run with Docker. To view my webpage in the container, I'll need to include the HTML and CSS files and any assets (I used one .png file) in my image. As it's a webpage, it needs a server too, so I used NGINX.
Here's how my simple four-line Dockerfile looks:

```
FROM nginx
COPY index.html /usr/share/nginx/html
COPY style.css /usr/share/nginx/html
COPY whale.png /usr/share/nginx/html
```
FROM nginx means we use the NGINX image as the base layer. COPY takes the file with the specified name, and copies it into the folder NGINX serves from, namely /usr/share/nginx/html.
My file tree looks like this:

```
_ whale
 |_ Dockerfile
 |_ index.html
 |_ style.css
 |_ whale.png
```
Since all my files are in the same folder as the Dockerfile, I don't need to specify the file path in the COPY commands; the file name is enough.
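If the files lived somewhere else, the COPY commands would need relative paths. For instance, assuming a hypothetical assets/ subfolder inside the project:

```dockerfile
FROM nginx
# Source paths are relative to the build context
# (the directory you pass to docker build)
COPY assets/index.html /usr/share/nginx/html
COPY assets/style.css /usr/share/nginx/html
```

The destination path stays the same; only the source side of each COPY changes.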
With Docker running, I used the docker image build command in the command line interface (CLI) to build my image:
```
$ docker image build -t whale:1.0 .
> Sending build context to Docker daemon 46.08kB
> Step 1/4 : FROM nginx
>  ---> a1523e859360
> Step 2/4 : COPY index.html /usr/share/nginx/html
>  ---> 272650a1e45b
> Step 3/4 : COPY style.css /usr/share/nginx/html
>  ---> 2d6e75b87f9a
> Step 4/4 : COPY whale.png /usr/share/nginx/html
>  ---> e77091a86817
> Successfully built e77091a86817
> Successfully tagged whale:1.0
```
Then I ran a container to check my app worked:
```
$ docker run --name whale -d -p 8080:80 ruthmoog/whale:1.0
> XXXXX9e868b93e5f916241efac920ae19f884f92a84e0901a57e3d33742d5edf
```
In my web browser I navigated to localhost:8080/index.html to check my page was displaying as expected (localhost:8080 on its own shows the NGINX splash page). Then I stopped and removed the container, using the first few digits of the container ID:
```
$ docker container rm --force XXXXX
> XXXXX
```
You can run docker ps to confirm there are no container processes running, or use docker ps -a to see the status of all container processes (including any that have exited or stopped).
After creating an account and a repo, I had to rename my image tag so that it included my repo name, and then push my image to Docker Hub:
```
$ docker image tag whale:2.0 ruthmoog/whale:2.0
$ docker image push ruthmoog/whale:2.0
> The push refers to repository [docker.io/ruthmoog/whale]
> 82a9e16279d0: Pushed
> 53fedf50da89: Pushed
> 372393671d63: Pushed
> 318be7aea8fc: Layer already exists
> fe08d5d042ab: Layer already exists
> f2cb0ecef392: Layer already exists
> 2.0: digest: sha123:cc7da9e...4e91c2d75 size: 1571
```
If you want to run my image yourself (it has a spinning whale, 10/10 would recommend), you'll need Docker installed (https://www.docker.com/products/docker-desktop). You can find my whale repo at https://hub.docker.com/r/ruthmoog/whale.
With Docker running, use the following commands in the CLI to pull the image from the repo to your computer, then run it (use the 'Tags' tab to find the latest image):

```
$ docker pull ruthmoog/whale:2.2
$ docker run -d -p 8080:80 ruthmoog/whale:2.2
```

- the -d flag daemonises the container, so it will run in the background until you tell it to stop
- the -p flag specifies which port to publish to: before the colon (8080) is the Docker host port, and after the colon is the port in the container (80), which will be mapped to the host port
- ruthmoog/whale is the built image you want to run, from my repo (ruthmoog) in the Docker Hub registry

Then in your browser, navigate to localhost:8080/index.html and bask in the joy of a spinny whale.
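Those same run settings can also be written down as configuration instead of typed as flags. As a sketch, a hypothetical docker-compose.yml equivalent of the run command above might look like this ("whale" is just an assumed service name):

```yaml
# Hypothetical docker-compose.yml for the whale image
services:
  whale:
    image: ruthmoog/whale:2.2
    ports:
      - "8080:80"   # host port 8080 maps to container port 80
```

Running docker compose up -d in the same folder would then start the container in the background, much like the -d flag does.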
Compared to containers, starting a VM is slow - minutes versus seconds - and requires more memory and storage. Containers virtualise and share the host OS, making them faster and more efficient, with the feel of a lightweight VM - although they don't have such strong isolation.
Imagine that you need a new room for your house. Docker containers would add a new room to your existing house, sharing the house’s electricity, plumbing and heating. Virtual Machines, on the other hand, would build a new house every time a new room is required.
- Alan Johnson, Building a MarkLogic Docker Container
Using VMs is a bit like building a small house inside your existing house.
Imagine that every time your database software releases a new version, you need to update the version number all over your projects: in all the tools that connect to the database. What a bummer! If you're using containers, you just update the version your image points to in the configuration, and life becomes a little easier. Containers also allow you to keep some code on a different database version if you need to.
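As a sketch of that one-line update, assuming a hypothetical project that runs the official postgres image via compose:

```yaml
# Hypothetical docker-compose.yml fragment: upgrading the database
# means changing a single image tag
services:
  db:
    image: postgres:15   # bump to postgres:16 here when you're ready
```

Any service that needs to stay on the old version can simply keep pointing at the old tag in its own configuration.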
The standardisation of images is like a contract for deployment. Whatever electronics you want to run in your home, you expect to be able to plug them into the socket and work. Whatever the device does is contained within the device - and whatever the code does inside the container is up to you. This means you could rewrite your entire codebase in a new programming language, and without changing the infrastructure, just create another image to be run.
- Alan Johnson, Building a MarkLogic Docker Container | marklogic.com/blog/building-a-marklogic-docker-container/
- Chris James, Docker Crash Course | github.com/quii/docker-crash-course
- Doug Chamberlain, Containers vs. Virtual Machines (VMs): What’s the Difference? | netapp.com/blogs/containers-vs-vms/
- Docker | docker.com
- Docker Hub - NGINX | hub.docker.com/_/nginx
- Webopedia, Containerization | webopedia.com/TERM/C/containerization.html