Introduction
In this lab, we build on our knowledge from lab 1, where we used Docker commands to run containers. We will create a custom Docker image built from a Dockerfile. Once we build the image, we will push it to a central registry where it can be pulled and deployed in other environments. We will also briefly describe image layers and how Docker uses "copy-on-write" and the union file system to store images and run containers efficiently.
We will be using a few Docker commands in this lab. For full documentation on available commands, check out the official documentation.
Create a Python App (without Using Docker)
Run the following command to create a file named app.py with a simple Python program. (copy-paste the entire code block)
cd ~/project
echo 'from flask import Flask
app = Flask(__name__)
@app.route("/")
def hello():
return "hello world!"
if __name__ == "__main__":
app.run(host="0.0.0.0")' > app.py
This is a simple Python app that uses Flask to expose an HTTP web server on port 5000 (5000 is the default port for Flask). Don't worry if you are not too familiar with Python or Flask; these concepts can be applied to an application written in any language.
Optional: If you have python and pip installed, you can run this app locally. If not, move on to the next step.
$ python3 --version
$ pip3 --version
$ pip3 install flask
$ python3 app.py
* Serving Flask app "app" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
Open the app in a new browser tab using http://0.0.0.0:5000/.
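Alternatively, from a second terminal you can hit the app with curl (this only works while the optional local run above is still active):
$ curl http://localhost:5000/
hello world!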
Create and Build the Docker Image
Now, what if you don't have Python installed locally? Don't worry, because you don't need it. One of the advantages of using containers is that you can include Python inside your container image without having Python installed on your host machine.
Create a Dockerfile by running the following command. (copy-paste the entire code block)
echo 'FROM python:3.8-alpine
RUN pip install flask
CMD ["python","app.py"]
COPY app.py /app.py' > Dockerfile
A Dockerfile lists the instructions needed to build a docker image. Let's go through the above file line by line.
FROM python:3.8-alpine
This is the starting point for your Dockerfile. Every Dockerfile must start with a FROM line that names the starting image to build your layers on top of.
In this case, we are selecting the python:3.8-alpine base layer (see the Dockerfile for python3.8/alpine3.12) since it already has the versions of Python and pip that we need to run our application.
The alpine version means that it uses the Alpine Linux distribution, which is significantly smaller than many alternative flavors of Linux: the image is around 8 MB in size, while a minimal installation to disk is around 130 MB. A smaller image downloads (deploys) much faster, and it also has advantages for security because it has a smaller attack surface. Alpine Linux is a Linux distribution based on musl and BusyBox.
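If you want to see the size difference for yourself, you can pull a couple of tags and compare them with docker image ls (a quick illustration; the exact sizes vary by tag and change over time):
docker pull python:3.8-alpine
docker pull python:3.8-slim
docker image ls python
The alpine-based tag is typically a few tens of MB, while Debian-based tags run into the hundreds of MB.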
Here we are using the "3.8-alpine" tag for the python image. Take a look at the available tags for the official python image on Docker Hub. It is best practice to use a specific tag when inheriting a parent image so that changes to the parent dependency are controlled. If no tag is specified, the "latest" tag takes effect, which acts as a dynamic pointer to the latest version of an image.
For security reasons, it is very important to understand the layers that you build your docker image on top of. For that reason, it is highly recommended to only use "official" images found in the docker hub, or non-community images found in the docker-store. These images are vetted to meet certain security requirements, and also have very good documentation for users to follow. You can find more information about this python base image, as well as all other images that you can use, on the docker hub.
For a more complex application you may find the need to use a FROM image that is higher up the chain. For example, the parent Dockerfile for our Python app starts with FROM alpine, then specifies a series of CMD and RUN commands for the image. If you needed more fine-grained control, you could start with FROM alpine (or a different distribution) and run those steps yourself, as sketched below. To start off, though, I recommend using an official image that closely matches your needs.
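As a rough illustration (this is not the official python image's actual Dockerfile, and it assumes Alpine's package repositories provide a Python version that suits you), doing it yourself might look something like this:
FROM alpine:3.12
# install Python and pip from Alpine's package repositories
RUN apk add --no-cache python3 py3-pip
RUN pip3 install flask
COPY app.py /app.py
CMD ["python3","app.py"]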
RUN pip install flask
The RUN command executes commands needed to set up your image for your application, such as installing packages, editing files, or changing file permissions. In this case we are installing Flask. The RUN commands are executed at build time and are added to the layers of your image.
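For example, just as we pin the base image tag, you could pin the Flask version in the RUN instruction to make the build more reproducible (the version number here is only an example):
RUN pip install flask==2.0.3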
CMD ["python","app.py"]
CMD is the command that is executed when you start a container. Here we are using CMD to run our Python app.
There can be only one CMD per Dockerfile. If you specify more than one CMD, only the last CMD will take effect. The parent python:3.8-alpine image also specifies a CMD (CMD python3). You can find the Dockerfile for the official python:alpine image here.
You can use the official python image directly to run python scripts without installing python on your host. But today, we are creating a custom image to include our source, so that we can build an image with our application and ship it around to other environments.
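For example, because any command you pass after the image name overrides the image's CMD, you can run an ad-hoc Python one-liner straight from the official image without installing Python on your host:
docker run --rm python:3.8-alpine python3 -c "print('hello from a container')"
The --rm flag removes the container again once the command exits.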
COPY app.py /app.py
This copies app.py from the local directory (where you will run docker image build) into a new layer of the image. This instruction is the last line in the Dockerfile. Layers that change frequently, such as copying source code into the image, should be placed near the bottom of the file to take full advantage of the Docker layer cache. This allows us to avoid rebuilding layers that could otherwise be cached. For instance, if there were a change in the FROM instruction, it would invalidate the cache for all subsequent layers of this image. We will demonstrate this a little later in this lab.
It may seem counter-intuitive to put this line after the CMD ["python","app.py"] line. Remember, the CMD line is executed only when the container is started, so we won't get a file not found error here.
And there you have it: a very simple Dockerfile. A full list of commands you can put into a Dockerfile can be found here. Now that we defined our Dockerfile, let's use it to build our custom docker image.
Build the Docker image. Pass in -t to name your image python-hello-world.
docker image build -t python-hello-world .
Verify that your image shows up in your image list.
docker image ls
Note that your base image python:3.8-alpine is also in your list.
You can run the history command to show the history of an image and its layers:
docker history python-hello-world
docker history python:3.8-alpine
Run the Docker Image
Now that you have built the image, you can run it to see that it works.
Run the Docker image
docker run -p 5001:5000 -d python-hello-world
The -p flag maps a port running inside the container to your host. In this case, we are mapping the Python app running on port 5000 inside the container to port 5001 on your host. Note that if port 5001 is already in use by another application on your host, you may have to replace 5001 with another value, such as 5002.
Navigate to the PORTS tab in the terminal window and click on the link to open the app in a new browser tab.
In a terminal, run curl localhost:5001, which returns hello world!
Check the log output of the container.
If you want to see logs from your application, you can use the docker container logs command. By default, docker container logs prints out what is sent to standard out by your application. Use docker container ls to find the ID of your running container.
labex:project/ $ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
52df977e5541 python-hello-world "python app.py" 2 minutes ago Up 2 minutes 0.0.0.0:5001->5000/tcp, :::5001->5000/tcp heuristic_lamport
labex:project/ $ docker container logs 52df977e5541
* Serving Flask app 'app'
* Debug mode: off
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
* Running on all addresses (0.0.0.0)
* Running on http://127.0.0.1:5000
* Running on http://172.17.0.2:5000
Press CTRL+C to quit
172.17.0.1 - - [23/Jan/2024 02:43:10] "GET / HTTP/1.1" 200 -
172.17.0.1 - - [23/Jan/2024 02:43:10] "GET /favicon.ico HTTP/1.1" 404 -
The Dockerfile is how you create reproducible builds for your application. A common workflow is to have your CI/CD automation run docker image build as part of its build process. Once images are built, they are sent to a central registry, where they can be accessed by all environments (such as a test environment) that need to run instances of that application. In the next step, we will push our custom image to the public Docker registry, Docker Hub, where it can be consumed by other developers and operators.
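As a rough sketch of that workflow (GIT_COMMIT is just a placeholder for whatever identifier your CI system provides; this is illustrative and not part of this lab), a CI job might build and push an image tagged with the commit it was built from:
docker image build -t $DOCKERHUB_USERNAME/python-hello-world:$GIT_COMMIT .
docker push $DOCKERHUB_USERNAME/python-hello-world:$GIT_COMMIT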
Push to a Central Registry
For this lab we will be using Docker Hub as our central registry. Docker Hub is a free service to store publicly available images, or you can pay to store private images. Navigate to the Docker Hub website and create a free account if you haven't already. (Alternatively, you can use another registry such as https://quay.io.)
Most organizations that use docker heavily will set up their own registry internally. To simplify things, we will be using the Docker Hub, but the following concepts apply to any registry.
Login
You can log in to your image registry account by typing docker login in your terminal (or, if you are using podman, podman login).
labex:project/ $ export DOCKERHUB_USERNAME=<your_docker_username>
labex:project/ $ docker login docker.io -u $DOCKERHUB_USERNAME
Password:
WARNING! Your password will be stored unencrypted in /home/labex/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
Tag your image with your username
The Docker Hub naming convention is to tag your image as [dockerhub username]/[image name]. To do this, we are going to tag our previously created image python-hello-world to fit that format.
docker tag python-hello-world $DOCKERHUB_USERNAME/python-hello-world
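Note that because we didn't specify a tag after the image name, Docker uses the default latest tag. If you also want an explicit version tag, you can add one (the 1.0 here is just an example); each such tag is then pushed by name:
docker tag python-hello-world $DOCKERHUB_USERNAME/python-hello-world:1.0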
Push your image to the registry
Once we have a properly tagged image, we can use the docker push command to push our image to the Docker Hub registry.
docker push $DOCKERHUB_USERNAME/python-hello-world
Check out your image on docker hub in your browser
Navigate to Docker Hub and go to your profile to see your newly uploaded image at https://hub.docker.com/repository/docker/<dockerhub-username>/python-hello-world.
Now that your image is on Docker Hub, other developers and operators can use the docker pull command to deploy your image to other environments.
Note: A Docker image contains all the dependencies it needs to run an application. This is useful because we no longer have to deal with environment drift (version differences) when we rely on dependencies that are installed on every environment we deploy to. We also don't have to go through additional steps to provision these environments. Just one step: install Docker, and you are good to go.
Deploying a Change
The "hello world!" application is overrated, let's update the app so that it says "Hello Beautiful World!" instead.
Update app.py
Replace the string "hello world!" with "hello beautiful world!" in app.py. You can update the file with the following command. (copy-paste the entire code block)
echo 'from flask import Flask
app = Flask(__name__)
@app.route("/")
def hello():
return "hello beautiful world!"
if __name__ == "__main__":
app.run(host="0.0.0.0")' > app.py
Rebuild and Push Your Image
Now that your app is updated, you need to repeat the steps above to rebuild your app and push it to the Docker Hub registry.
First, rebuild the image, this time using your Docker Hub username in the build command:
docker image build -t $DOCKERHUB_USERNAME/python-hello-world .
Notice the "Using cache" for steps 1-3. These layers of the Docker Image have already been built and docker image build
will use these layers from the cache instead of rebuilding them.
docker push $DOCKERHUB_USERNAME/python-hello-world
There is a caching mechanism in place for pushing layers too. Docker Hub already has all but one of the layers from an earlier push, so it only pushes the one layer that has changed.
When you change a layer, every layer built on top of it has to be rebuilt. Each line in a Dockerfile builds a new layer on top of the layer created from the lines before it. This is why the order of the lines in our Dockerfile is important. We optimized our Dockerfile so that the layer most likely to change (COPY app.py /app.py) is the last line of the Dockerfile. Generally, your application code changes at the most frequent rate. This optimization is particularly important for CI/CD processes, where you want your automation to run as fast as possible.
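For a larger application, the same idea is usually applied to dependencies as well: copy only the dependency list first, install it, and copy the source code last, so that editing your code does not invalidate the cached dependency layer. A rough sketch (it assumes your dependencies are listed in a requirements.txt file, which our one-file app doesn't actually have):
FROM python:3.8-alpine
# the dependency list changes rarely, so this layer is usually served from the cache
COPY requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
# the frequently changing source code goes last
COPY app.py /app.py
CMD ["python","app.py"]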
Understanding Image Layers
One of the major design properties of Docker is its use of the union file system.
Consider the Dockerfile that we created before:
FROM python:3.8-alpine
RUN pip install flask
CMD ["python","app.py"]
COPY app.py /app.py
Each of these lines is a layer. Each layer contains only the delta, or changes, from the layers before it. To put these layers together into a single running container, Docker makes use of the union file system to overlay the layers transparently into a single view.
Each layer of the image is read-only, except for the very top layer, which is created for the running container. The read/write container layer implements "copy-on-write", which means that files stored in lower image layers are pulled up to the read/write container layer only when edits are being made to those files. Those changes are then stored in the running container layer. The "copy-on-write" operation is very fast and, in almost all cases, has no noticeable effect on performance. You can inspect which files have been pulled up to the container layer with the docker diff command. More information about how to use docker diff can be found here.
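For example, if you create a file inside your running container and then run docker diff against it, you should see the new file reported in the container layer (replace <container_id> with the ID from docker container ls; the file path is just an example):
docker exec <container_id> touch /tmp/example.txt
docker diff <container_id>
Lines prefixed with A are added files, C are changed files, and D are deleted files.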
Since image layers are read-only, they can be shared by images and by running containers. For instance, a new Python app with its own Dockerfile built from similar base layers would share all the layers it has in common with the first Python app.
FROM python:3.8-alpine
RUN pip install flask
CMD ["python","app2.py"]
COPY app2.py /app2.py
You can also experience the sharing of layers when you start multiple containers from the same image. Since the containers use the same read-only layers, you can imagine that starting up containers is very fast and has a very low footprint on the host.
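You can try this yourself: start a couple more containers from the same image and notice how quickly they come up, since no image layers need to be copied (the host ports here are arbitrary examples; adjust them if they are already in use):
docker run -d -p 5002:5000 python-hello-world
docker run -d -p 5003:5000 python-hello-world
docker container ls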
You may notice that there are duplicate lines in this Dockerfile and the Dockerfile you created earlier in this lab. Although this is a very trivial example, you can pull the common lines of both Dockerfiles into a "base" Dockerfile, which each of your child Dockerfiles can then point to with the FROM command, as sketched below.
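As a rough sketch of that idea (the names my-flask-base and Dockerfile.base are just examples), the shared base Dockerfile would contain only the common layers:
FROM python:3.8-alpine
RUN pip install flask
You would build it once, for example with docker image build -t my-flask-base -f Dockerfile.base ., and then each app's Dockerfile only needs its own source and start command:
FROM my-flask-base
COPY app2.py /app2.py
CMD ["python","app2.py"]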
Image layering enables the Docker caching mechanism for builds and pushes. For example, the output of your last docker push shows that some of the layers of your image already exist on Docker Hub.
$ docker push $DOCKERHUB_USERNAME/python-hello-world
To look more closely at the layers, you can use the docker image history command on the Python image we created.
$ docker image history python-hello-world
Each line represents a layer of the image. You'll notice that the top lines match the Dockerfile that you created, and the lines below come from the parent python image. Don't worry about the "<missing>" tags. These are still normal layers; they just have not been given an ID by the Docker system.
Clean up
Completing this lab results in a bunch of running containers on your host. Let's clean these up.
Run docker container stop [container id] for each container that is running.
First, get a list of the running containers using docker container ls.
$ docker container ls
Then run the command for each container in the list.
$ docker container stop <container_id>
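If you have several containers running, you can also stop them all at once by feeding the list of IDs to docker container stop (note that this stops every running container on the host, not just the ones from this lab):
docker container stop $(docker container ls -q)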
Remove the stopped containers
docker system prune is a really handy command to clean up your system. It will remove any stopped containers, unused volumes and networks, and dangling images.
$ docker system prune
WARNING! This will remove:
- all stopped containers
- all volumes not used by at least one container
- all networks not used by at least one container
- all dangling images
Are you sure you want to continue? [y/N] y
Deleted Containers:
0b2ba61df37fb4038d9ae5d145740c63c2c211ae2729fc27dc01b82b5aaafa26
Total reclaimed space: 300.3kB
Summary
In this lab, you started adding value by creating your own custom Docker images.
Key Takeaways:
- The Dockerfile is how you create reproducible builds for your application and how you integrate Docker builds into your CI/CD pipeline
- Docker images can be made available to all of your environments through a central registry. The Docker Hub is one example of a registry, but you can deploy your own registry on servers you control.
- A Docker image contains all the dependencies needed to run an application. This is useful because we no longer have to deal with environment drift (version differences) when we rely on dependencies that are installed on every environment we deploy to.
- Docker makes use of the union file system and "copy on write" to reuse layers of images. This lowers the footprint of storing images and significantly increases the performance of starting containers.
- Image layers are cached by the Docker build and push system. No need to rebuild or repush image layers that are already present on the desired system.
- Each line in a Dockerfile creates a new layer, and because of the layer cache, the lines that change more frequently (e.g. adding source code to an image) should be listed near the bottom of the file.
Practice Now: Adding Value with Custom Docker Images
Want to Learn More?
- Learn the latest Docker Skill Trees
- Read More Docker Tutorials
- Join our Discord or tweet us @WeAreLabEx