Alex Hanson
Lessons Learned with Docker

I've been working with Docker on a semi-regular basis for a couple of months now. I'm the technical lead for a product that has not only the usual dev and test environments but multiple user-facing deployments. These deployments may have 5 or 50 users (or more), and may be accessed over the internet by external users or available only on a company intranet with no outside connectivity. Docker seemed like a good way to streamline the deployments - anywhere Docker can be installed, our software stack can be installed as well.

Some lessons I've learned along the way:

Always Map Volumes

This was a hard lesson. I had been spinning a MySQL container up and down for local testing, and it so happened that on the last one I stood up, I forgot to map the data directory back to a directory on the host. I wasn't paying attention, blew away the container, and lost a good bit of local data. Because it was my local environment I could get it back, but it cost me an hour or two of work to get back to where I started.

There's also the benefit of being able to share volumes across containers, but the bigger point is the reminder that containers are intended to be ephemeral.
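For reference, here's roughly the kind of run command that would have saved me; the host path and password are placeholders, not my actual setup:

```bash
# Map the MySQL data directory to the host so the data survives the container.
# Host path and password are placeholders.
docker run -d \
  --name local-mysql \
  -e MYSQL_ROOT_PASSWORD=changeme \
  -v ~/docker/mysql-data:/var/lib/mysql \
  -p 3306:3306 \
  mysql:5.7
```

With that mapping in place, blowing away the container leaves the data sitting on the host, ready for the next one.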

Script Everything

My first couple of containers were pretty straightforward - one portion of the application is deployed on Apache Tomcat, so I started with that image as a base and built a simple Dockerfile that basically drops in the WAR and moves on. I then made the mistake of committing changes to my container instead of updating the Dockerfile. I had in my head what needed to be done to get a working container, but as we transitioned the team to using Docker for their local environments, that quickly became unsustainable. The lesson: always keep the Dockerfile current, and rebuild from that Dockerfile rather than committing the container. It makes building a container a repeatable process, and since Dockerfiles are plain text they can live in version control, so it's easy to see how the project has evolved over time.
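A minimal sketch of that kind of Dockerfile - the Tomcat tag and WAR name are placeholders, not our actual build:

```dockerfile
# Start from the official Tomcat image and drop in the application WAR.
# Image tag and WAR filename are placeholders.
FROM tomcat:8.5

# Clear the default webapps and deploy ours as the ROOT application;
# the base image's default CMD already starts Tomcat.
RUN rm -rf /usr/local/tomcat/webapps/*
COPY target/myapp.war /usr/local/tomcat/webapps/ROOT.war

EXPOSE 8080
```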

The same goes for utility containers such as MySQL or Memcached, which might have a change or two but largely stay the same as what was pulled down from the Docker registry. Even those minor updates should go in a Dockerfile somewhere in a Git repo so the changes are tracked.
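Even a trivial one is worth committing - something like this, where the config filename is just a made-up example:

```dockerfile
# Tiny Dockerfile for a utility container: stock MySQL plus one config tweak.
# The override filename is a hypothetical example.
FROM mysql:5.7
COPY my-overrides.cnf /etc/mysql/conf.d/
```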

We recently started using Nexus as an internal registry. It seems to work well and saves the step of having to build the images locally; its Docker registry feature is relatively new, though, so it will be interesting to see how it evolves over the next couple of releases. So far, the biggest benefit is that the registry always holds the current dev and test builds from Jenkins, so developers can easily keep their local environments current, and some bash scripts make it easy to grab the latest images and pick which environment gets deployed.
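Those scripts are nothing fancy; here's a stripped-down sketch of the idea (the registry host and image names are made up):

```bash
#!/usr/bin/env bash
# Pull the latest images for a given environment from the internal registry
# and retag them for local use. Registry host and image names are placeholders.
set -euo pipefail

REGISTRY="nexus.example.com:8082"
ENVIRONMENT="${1:-dev}"   # dev or test

for image in web api worker; do
  docker pull "${REGISTRY}/myapp/${image}:${ENVIRONMENT}"
  docker tag  "${REGISTRY}/myapp/${image}:${ENVIRONMENT}" "myapp/${image}:latest"
done
```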

On a related note - docker-compose is a lifesaver. We have 8 containers for our web application, and leveraging compose is essentially required. Again, the docker-compose file is version-controlled, and with code reviews on all pull requests there are no surprises with deployment updates.
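For a sense of what that looks like, here's a trimmed-down, hypothetical excerpt in the same spirit (service names and images are placeholders, not our real stack):

```yaml
# Hypothetical, trimmed-down compose file; names and images are placeholders.
version: "2"
services:
  web:
    image: myapp/web:latest
    ports:
      - "8080:8080"
    depends_on:
      - db
      - cache
  db:
    image: mysql:5.7
    volumes:
      - ./data/mysql:/var/lib/mysql
  cache:
    image: memcached:1.4
```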

That leads me to what I found to be a large headache with Docker - shared properties.

Shared Properties

In many cases, when we deploy, the system talks to an external database, and all of the containers share the same configuration; well, at least they share the host and port (you don't share usernames and passwords for different components even on the same database, do you?). To make our lives easier, we added a Ruby script to wrap docker-compose. It builds a new docker-compose.yml that takes the shared properties and copies them to each container's property list. We have a base set of properties and some container-specific properties that extend or overwrite them as needed, and they're combined with the base docker-compose file to generate the compose file we actually run.
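A minimal sketch of the idea behind the wrapper - file names, property layout, and the merge logic here are simplified assumptions, not our actual script:

```ruby
#!/usr/bin/env ruby
# Minimal sketch of the compose wrapper idea. File names, property layout,
# and the merge logic are simplified assumptions, not the actual script.
require 'yaml'

base_props = YAML.load_file('properties/base.yml')       # shared host, port, etc.
compose    = YAML.load_file('docker-compose.base.yml')   # base compose definition

compose['services'].each do |name, service|
  overrides = "properties/#{name}.yml"
  props = base_props.dup
  props.merge!(YAML.load_file(overrides)) if File.exist?(overrides)
  # Assumes each service's environment is written as a map in the base file.
  service['environment'] = (service['environment'] || {}).merge(props)
end

File.write('docker-compose.generated.yml', compose.to_yaml)

# Hand the remaining arguments (up, down, logs, ...) straight to docker-compose.
exec('docker-compose', '-f', 'docker-compose.generated.yml', *ARGV)
```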

While it's a little weird to run the wrapper script instead of docker-compose directly, the benefits of not having to repeat configuration in multiple places make the extra step worth it.

Docker is great; I've had a lot of fun simplifying deployments by containerizing, and I look forward to exploring how we can use Swarm to make our deployments more robust and fault-tolerant.

Top comments (7)

Andrew Tanner 🇪🇺

I'm a Vagrant user and have toyed with the idea of Docker, but it very much feels like it might be a sledgehammer to crack a nut for personal projects. Vagrant is quite heavy, and I did cause myself a headache by not using an LTS Ubuntu distro, so I had to rebuild my box.

Question is, is there much benefit to using Docker for very small-scale, personal projects? It seems better suited to scale. I'd be happy to learn it just so I can learn something new, but that might be as far as it goes.

#JASON

There is definitely a benefit to using Docker over Vagrant. Vagrant is a resource hog and takes a long time to deploy since it's nearly a full distro.

Docker is great for personal projects, small dev projects, and large-scale projects.

I'm not sure what coding language you're using, but I have a robust Docker container repo for review here: github.com/htmlgraphic/Apache

In just a few commands you can have a duplicate setup with a very small resource footprint compared to Vagrant.

Patrick Schönfeld

On top of that, I have to say that docker-compose further increases the benefit of Docker over Vagrant, because it makes it very easy to share a whole environment that depends on other services (e.g. MySQL, LDAP, or whatever).

Ross Hettel

Instead of a Ruby script wrapped around docker-compose, wouldn't a docker-compose environment file work the same? docs.docker.com/compose/environmen...
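Something along these lines, based on the docs (file and service names here are just examples):

```yaml
# Sketch of the env_file approach; file names are placeholders.
version: "2"
services:
  web:
    image: myapp/web:latest
    env_file:
      - ./common.env      # shared config, e.g. DB host and port
      - ./web.env         # service-specific values
  api:
    image: myapp/api:latest
    env_file:
      - ./common.env
      - ./api.env
```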

Alex Hanson

I can't remember specifically why, but I thought there was a reason environment variables alone weren't enough for our setup. We do have separate environment files, and our developers can have a "local.yml" file with local-specific configuration that doesn't get checked in; that "local.yml" gets combined with the others to create the new compose file.

But I think you're right - leveraging environment variables would probably accomplish the same goal. The separate script does have the nice property that you just run "./application up" (for the name of the script) and the application starts, which makes it easy for people outside the development team (who are often not familiar with Docker) to know what's going on.

Kris Richardson

Docker compose also provides a way to extend a base compose file: docs.docker.com/compose/extends/
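Roughly like this (file and service names here are hypothetical):

```yaml
# docker-compose.yml extending a shared definition; names are hypothetical.
version: "2"
services:
  web:
    extends:
      file: common-services.yml
      service: app        # base definition carrying the shared environment
    ports:
      - "8080:8080"
```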

Arun4D

You can try fabric8. It can save a lot of time. I have just tried it for a pre-dev environment.