Jesse Portnoy

Docker is not a packaging tool — intro

This series of posts will take you all the way from chrooted ENVs to Docker containers and attempt to explain why, while Docker is a great tool, proper packaging of software remains as relevant as ever.

If you’re reading this, you’ve probably already heard of Docker and likely used it too; if not for your own projects, then to deploy others’. So you may think there’s nothing I could tell you about it that would surprise you. Let’s find out, shall we? Give it a go; I’ll try to make it amusing, too :)

If you go to https://en.wikipedia.org/wiki/Docker_(software), the first paragraph you’ll encounter is: “Docker is a set of platform as a service (PaaS) products that use OS-level virtualization to deliver software in packages called containers.”

Packages called containers?! Eh, let’s take a step back from tech terms and discuss English; Container, as defined by Oxford: “an object for holding or transporting something.”

Yeah, matches my understanding of the word, wouldn’t you agree?

Okay, now the same for Package: “an object or group of objects wrapped in paper or packed in a box.”

Again, fair enough, right?

I hate metaphors but I do like analogies, and this one is pretty good, so try it on for size: there’s a container, filled with packages, sitting on the docks in the harbour. At some point, the crew will go in and unpack these packages and, after some processing, they’ll eventually arrive at their different destinations.

Now, is a container the same thing as a package? That’s right, it isn’t. Definitely not in the physical world but as I’ll attempt to explain — not in software, either.

Before we do that, though, let’s give Wikipedia another chance (it deserves it) and see if we can find some interesting paragraphs about Docker that I don’t dispute…

“Containers are isolated from one another and bundle their own software, libraries and configuration files; they can communicate with each other through well-defined channels. Because all of the containers share the services of a single operating system kernel, they use fewer resources than virtual machines.”

See? Told you they deserved a chance. This is a good, succinct description of some of the benefits of containers. They do indeed have a smaller footprint than VMs, and that’s a big plus, but I’d like to focus on the isolation bit for a moment. Isolation provides two main advantages:

  • Security
  • [Relative] Decoupling

Before we go on to explain how Docker helps us with the above, I feel it would be nice to honour what I personally consider its first predecessor — The Chroot.

Docker’s development began around 2011 and it started gaining traction after its public release in 2013 but, long before that, a simpler, widely used UNIX tool gave us the benefits of isolated (often referred to as jailed) ENVs.

So, what’s this chroot thing, then?

Let’s give Wikipedia a well-deserved rest and go to another beloved resource — the man pages.

$ man -k chroot
chroot (2) - change root directory
chroot (8) - run command or interactive shell with special root directory

In case you’re not very familiar with man pages: they are divided into sections and, since I think this is something that’s useful to know, I’ll list the different sections below:

    MANUAL SECTIONS
    The standard sections of the manual include:

    1 User Commands
    2 System Calls
    3 C Library Functions
    4 Devices and Special Files
    5 File Formats and Conventions
    6 Games et al.
    7 Miscellanea
    8 System Administration tools and Daemons

Okay, so from the above, we can already gather that, like many other UNIX concepts, chroot is both a syscall (section 2) and a command/tool (section 8).

Let’s see what man 2 chroot has to tell us:

chroot() changes the root directory of the calling process to that specified in path. This directory will be used for pathnames beginning with /.

The root directory is inherited by all children of the calling process.

Only a privileged process (Linux: one with the CAP_SYS_CHROOT capability in its user namespace) may call chroot().

Interesting, right? Now man 8 chroot:

chroot — run command or interactive shell with special root directory

Right, so, unsurprisingly, the command chroot calls the syscall chroot...
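
To make that relationship concrete, here’s a tiny sketch of what the chroot command effectively does under the hood: call chroot(2), chdir into the new root, then exec a shell. This isn’t from the actual chroot sources, just an illustration; the /srv/jail path is purely hypothetical and, per the man page above, you’d need root (or CAP_SYS_CHROOT) to run it.

    /* mini-chroot.c: a minimal sketch of what chroot(8) does.
     * Assumes the target directory contains enough of a filesystem
     * to run /bin/sh (e.g. a hypothetical /srv/jail).
     * Build: cc -o mini-chroot mini-chroot.c
     * Run:   sudo ./mini-chroot /srv/jail
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <new-root>\n", argv[0]);
            return EXIT_FAILURE;
        }

        /* chroot(2): make argv[1] the root ("/") for this process
         * and everything it spawns from here on. */
        if (chroot(argv[1]) != 0) {
            perror("chroot");
            return EXIT_FAILURE;
        }

        /* Without this, the working directory would still point
         * outside the new root, a classic escape hatch. */
        if (chdir("/") != 0) {
            perror("chdir");
            return EXIT_FAILURE;
        }

        /* Hand over to an interactive shell inside the jail. */
        execl("/bin/sh", "sh", (char *)NULL);
        perror("execl");
        return EXIT_FAILURE;
    }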

Let’s go back to Wikipedia for a brief history lesson — trust me, these titbits make for great conversation starters, be it at job interviews, work lunches or dates (well, okay, maybe less so on dates, but it really does depend on whom you date, doesn’t it?):

The chroot system call was introduced during development of Version 7 Unix in 1979. One source suggests that Bill Joy added it on 18 March 1982 – 17 months before 4.2BSD was released – in order to test its installation and build system.

So... this was 1982. A good vintage, one hopes; I was born nearly three months later and Docker came along roughly three decades after that, but I’d like to take us nearer the present and regale you with some stories of how I used chrooted ENVs back in 2006–2010.

At the time (I was much younger and looked like a young Brad Pitt — well, no, I absolutely didn’t, but I bet you laughed at that one) I worked for a company the PHP veterans amongst you will undoubtedly have heard of, called Zend, and held the best title I’ve ever had by far — Build Master.

Remember, this is 2006. SaaS was already a concept but it was not as widespread as it is today and far more companies delivered software that was meant to be deployed on the customer’s machines (in 20 years’ time, some youngster will find this a novel idea, I’m sure).

Zend was one such company and the software it delivered (amongst other product lines) was a PHP application server (think JBoss, but for PHP, basically). First it was called Zend Platform, then it had a multitude of internal codenames that drove me bonkers and, ultimately, it was rebranded as Zend Server.

So, what did the Build Master do? Well, the product was to be deployed on customer machines, Zend initially wanted to support anything that had uname on it, and the product consisted of many different components written in C/C++, which in turn depended on third-party FOSS components (also written in C/C++), as well as a PHP admin UI [pause for breath], so someone had to build and package all these things. I, along with two excellent mates of mine, was one of those people.

In the next instalment of this series, I’ll tell you about the challenge of delivering software to multiple UNIX and Linux distributions (same same but different) and how chroot was leveraged towards this objective.

We’ll then get back to Docker and why it is the next evolutionary step and, finally, explain why Docker is only part of the dream deployment process and even note some cases where it’s not that helpful.

Stay tuned and, if you like this sort of content (but only if you do — don’t feel obligated), please give the clapper a go and follow me.

Happy building.
