Containers, containers everywhere!
Modern software development is going crazy about containers.
They have definitely changed the way we ship software, and as an up-to-date developer you will inevitably hear about, read about, or work with container technology.
When I played around with containers for the first time, an immediate thought came to my mind:
Containers would be perfect to use on my server!
Running each component on my server in its own isolated environment, including configs and data, would keep the actual server clean and its components portable. Build your configured Docker image once and it’s ready to run on any server (running Docker).
After my brother and a friend of mine convinced me that I totally needed to run my own XMPP server, I decided that this would be the perfect thing to put in a container.
Containerizing my XMPP server taught me a lot about the practical use of Docker, so if you’re interested, keep on reading!
Starting out
Following the KISS principle, I decided to start my container FROM an Alpine Linux base.
FROM alpine:latest
Using Alpine Linux as a base image would allow me to build my image with very little overhead. On the other hand, I’d have to install most of the dependencies myself.
The FROM statement specifies the base image of a new image, so every Dockerfile has to start with it.
Collecting dependencies
Digging through the documentation of my XMPP server of choice (Prosody), I ended up with a list of packages to install.
RUN apk --no-cache add --update bash \
ca-certificates \
lua-expat \
lua-filesystem \
lua-sec \
lua-socket \
mercurial \
python \
prosody
Running this command creates a new layer in our Docker image, and so does every command afterwards. A Docker image is assembled in layers, just like a lasagne.
Every layer holds the differences to the layer below, so our new layer only contains the new files installed via package management. When building a Docker image we’re basically just stacking up read-only layers, and Docker later adds a writable layer on top to store runtime data.
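By the way, you can inspect these layers yourself: docker image history lists them for a built image, one line per Dockerfile instruction that produced a layer (using the image name we’ll build further down):

# Show the layers of the finished image, newest on top
docker image history prosody_xmpp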
Adding external files
Prosody requires a config file, and since I don’t want to do the configuration every time the container is started, I’ll provide it as a static file at build time. Docker provides two ways to add external files to an image: ADD and COPY.
Now ADD and COPY basically do the exact same thing: Copy files and / or directories from a given source to a destination inside the container.
In contrast to COPY, ADD provides some additional functionality (just a very short summary to highlight the difference):
- $source is a local archive: The archive will be decompressed and its content will be copied.
- $source is a URL: The file will be downloaded and copied.
The COPY syntax looks like the following:
COPY /path/to/config/on/host /path/to/config/in/container
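For Prosody, that could look like the line below; /etc/prosody/prosody.cfg.lua is Prosody’s default config location, while the local file name is just an assumption about how the build context is laid out:

# Copy the prepared config from the build context into the image
COPY prosody.cfg.lua /etc/prosody/prosody.cfg.lua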
Building the image
After setting up the actual server configuration, which in case of Prosody consists of a single file, it’s time to build an image.
docker image build -t prosody_xmpp .
This will build a Docker image named prosody_xmpp based on the Dockerfile inside the current directory.
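If you want to keep several revisions around, the -t flag also accepts a tag after the image name (the 1.0 below is just an example):

docker image build -t prosody_xmpp:1.0 .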
Running
docker image list
will print a list of available images.
Running an image
New containers are started by running them:
docker run $image
Images are specified by either a name or their generated image ID, and it’s not necessary to provide the full ID. Docker will run the image that is uniquely identified by the first few characters of an ID.
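For the image built above, a run could look like this; -d detaches the container into the background and --name gives it a handle (xmpp is an arbitrary choice) so you don’t have to deal with generated IDs:

# Start the container in the background with a friendly name
docker run -d --name xmpp prosody_xmpp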
Mounting external files
In order to enable TLS, Prosody requires certificates.
But even though it’s a simple solution, I’d consider it bad practice to ADD any private certificate data to a Docker image. Instead, sensitive data like private keys can be passed to a container at runtime by bind mounting files from the host system.
docker run --mount type=bind,source=/source/on/host,target=/destination/in/container,readonly $image
By default, files and folders are mounted read-write; adding the readonly flag makes the mount read-only.
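For the certificates, a run could look like the following; the host path assumes Let’s Encrypt-style certificates, and /etc/prosody/certs is where Prosody looks for certificates by default:

# Bind mount the host's certificates read-only into the container
docker run --mount type=bind,source=/etc/letsencrypt/live/example.org,target=/etc/prosody/certs,readonly prosody_xmpp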
Persistent storage
As mentioned earlier, Docker adds an additional, writable layer on top of an image to hold runtime data.
When a container is removed, this RW layer is discarded, so any runtime data is lost. In case of my XMPP server this means that data like chat history, received files etc. is lost as soon as the container is removed.
With volumes, Docker provides a way to persist data over several runs of a container.
Creating a new volume is easy:
docker volume create $volume_name
Similar to images, volumes are listed by executing:
docker volume ls
Volumes are specified when running a container, similar to bind mounts.
docker run --mount source=$volume_name,target=/location/in/container $image
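For Prosody, its data directory (/var/lib/prosody on most packaged installs) is the natural thing to back with a volume; the volume name is arbitrary:

# Create a named volume and mount it over Prosody's data directory
docker volume create prosody_data
docker run --mount source=prosody_data,target=/var/lib/prosody prosody_xmpp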
Running the server on startup
The CMD instruction is used in Dockerfiles to provide a default command to run on startup. Docker allows only one CMD instruction per Dockerfile; in case of multiple instructions, only the last one takes effect.
One way to start the XMPP server using CMD could be
CMD prosodyctl start
While starting the server using CMD is totally fine, there’s an alternative way of running programs. The way the XMPP image has been configured so far, the container essentially acts as an executable: by running the container, our actual intention is to start a program, which happens to run in a container.
Docker’s way of running containers as executables is the ENTRYPOINT instruction.
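In the Dockerfile that boils down to two lines like these; entrypoint.sh is just a placeholder name for a script shipped alongside the Dockerfile:

# Copy the startup script into the image and make it the entrypoint
# (the script needs to be executable, e.g. chmod +x before building)
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]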
Environment variables
The first iteration of my XMPP image did not support customization and persistent storage. So every time I started the server I had to attach to the container and do the initial configuration via command line within the running container.
With persistent storage using volumes in place, one way of customizing the server on startup is by providing a script as entrypoint.
I don’t want to open up my server for everyone, so the intention of the entrypoint script is to optionally register a new user and afterwards disable registration via config.
Registration requires three things:
- Username
- Domain
- Password
Docker allows passing these values to the container as environment variables on startup. These values are then read by the startup script and, when present, a new user is registered.
docker run -e VAR1=value1 -e VAR2=value2 -e BOOLEAN $image
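To make this concrete, here is a minimal sketch of such an entrypoint script; the variable names XMPP_USER, XMPP_DOMAIN and XMPP_PASSWORD, as well as the foreground start, are assumptions for illustration:

#!/bin/sh
# Sketch of an entrypoint script: optionally register a user, then start the server
set -e

# Only register if all three variables were passed via -e
# (disabling registration in the config afterwards is left out of this sketch)
if [ -n "$XMPP_USER" ] && [ -n "$XMPP_DOMAIN" ] && [ -n "$XMPP_PASSWORD" ]; then
    prosodyctl register "$XMPP_USER" "$XMPP_DOMAIN" "$XMPP_PASSWORD"
fi

# Keep Prosody in the foreground so the container stays alive
# (assumes daemonize = false in the config)
exec prosody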
Finishing up
The last requirement for the XMPP container to run is ports.
Docker containers provide an isolated network environment, so by default, servers running inside a container are not reachable from outside the container and vice versa.
In order to make the server reachable, its ports have to be published to the host.
docker run -p 5222:5222 $image
This will bind port 5222 of the container to port 5222 on the host machine. Publishing all the ports used by Prosody finishes up the containerized server setup.
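For a standard Prosody setup that means at least the client port 5222 and the server-to-server port 5269 (plus 5280/5281 if HTTP services like BOSH are enabled):

# 5222: client connections (c2s), 5269: server-to-server federation (s2s)
docker run -p 5222:5222 -p 5269:5269 prosody_xmpp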
Conclusion
Working with Docker on a server seems quite useful to me. I learned a lot and enjoyed the process of building a working setup, so I’ll hopefully be able to put all my running services into containers.
I hope you enjoyed my little story and may consider containers for yourself.
So long
Simon
P.S. You can now reach out to me via xmpp@simon-hofmann.org ;)