How to actually make your life easier with Docker

Joe Sweeney ・ 3 min read

When Docker came out in 2013, the benefits being touted were pretty clear: "full isolation from the host machine and other apps", "perfectly-reproducible environments", and the "works on my machine" chant finally a thing of the past. Considering that most of my troubleshooting time was spent getting code that worked perfectly in development to run properly in production, Docker seemed like a great antidote.

However, what held me back from adopting it for another five years or so was that it was just too difficult to understand. I don't like working with tech stacks I don't understand, because that's just another thing I'd have to troubleshoot down the road when things don't go totally perfectly. Despite having to do a little more work to configure the server where my code would ultimately live and run, and sometimes having to deal with painful config issues, at least I had encountered many of those issues before and knew how to fix them.

Today, Docker is so pervasive that it's actually become quite difficult to avoid. It seems like just about every project on GitHub has a Dockerfile and instructions in the readme on how to use it. Fortunately, it has, in my opinion, started to live up to its promise of actually making our lives as devs easier. I still haven't quite gone "all-in" with Docker, but there are areas where it has genuinely improved my happiness and productivity.

My sweet spot with Docker

There are certain stacks that I find a huge pain to set up again and again, when switching computers for example. These tend to be things like relational databases (MySQL and Postgres) and any app with a really involved setup procedure (like self-hosted GitLab).

How many times have you run into issues setting up MySQL on your computer? Especially on Linux? I always seem to struggle with that one: adding the right package repository in Ubuntu, getting through the CLI installation unscathed, and booting it up. All of the different configuration possibilities leave a lot of room for things to go wrong on that initial install and boot.

Or, I can just do this, and have a fully-working MySQL server ready to go!

docker run --name mysql-5 -p 3306:3306 -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:5

From there, I just use my favorite MySQL editor (Sequel Pro, HeidiSQL, etc.), and any service on my machine that needs MySQL, Docker or not, can connect to it as needed. And I can boot up as many instances as I want (or as many as my hardware will allow), just changing the exposed port on each new one (3307, 3308, etc.) so there isn't any conflict.
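Concretely, a second instance on port 3307 might look like this (the container name and password are placeholders of my choosing):

```shell
# Second MySQL 5 container: host port 3307 maps to the container's 3306
docker run --name mysql-5-alt -p 3307:3306 -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:5

# Connect from the host with any client; just point at the new port
mysql -h 127.0.0.1 -P 3307 -u root -p
```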

Also, I like to test emails on my local machine without relying on an external service to handle it. Instead, I can just throw in a quick SMTP-trapping service called MailHog that lets me actually see the output of my locally-tested email!

docker run --name my-mailhog -p 1025:1025 -p 8025:8025 -d mailhog/mailhog

This command is even simpler. Just provide a name of your choosing to identify the container, the ports you want exposed (in this case, 1025 for the SMTP port and 8025 for the web portal where you see the trapped emails), and the name of the image on Docker Hub (mailhog/mailhog). This will boot up the server and let you see your trapped emails. It's incredibly easy.
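If you want to sanity-check the setup from the command line, curl can speak SMTP. Here's a quick sketch with made-up addresses; the message should then show up in the web portal at http://localhost:8025:

```shell
# Push a throwaway message into MailHog's SMTP port via curl
curl smtp://localhost:1025 \
  --mail-from test@example.com \
  --mail-rcpt dev@example.com \
  --upload-file - <<'EOF'
From: test@example.com
To: dev@example.com
Subject: MailHog smoke test

Hello from curl!
EOF
```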

As for my main app codebase, I just continue to run it with Node.js directly, without even worrying about Dockerizing it. Also, these examples aren't necessarily helpful for deploying to production (which Docker can certainly help with, given the right knowledge); they're just meant to demonstrate setting up a nice development environment with less fuss.

Final thoughts

If you're finding it difficult to get into and understand Docker, or feeling left behind by the hype train, don't worry. You can make Docker work for you without feeling like you need to ditch your VMs and everything not totally containerized. I have found a middle ground that is working very well for me at the moment. As time goes on, I'm sure I'll adopt more and more pieces as it makes sense. I hope you can do the same.


Kevin Woblick

Docker, or containerization, is one of those things that no one really cared about ten years ago. But now... damn, I couldn't survive one dev day without it. All the hassles of working with different projects, simply gone. Not to mention setting up different PHP versions correctly, which is a huge pain without Docker containers.
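For instance, something like this lets two versions coexist without touching the host (the tags here are just examples):

```shell
# Run two PHP versions side by side, no host install needed
docker run --rm php:7.4-cli php -v
docker run --rm php:8.2-cli php -v
```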

Thorsten Hirsch

Nobody cared about containerisation ten years ago, because it didn't exist ten years ago. :-)

ossm1db

LXC was released in 2008.

Thorsten Hirsch

Yes, but in 2008 you couldn't do anything with it; only kernel developers were hacking on it. It was in 2014, when LXC 1.0 was finished and cgroups had matured in the kernel, that the way was paved for Docker.

Kevin Woblick

"10 years" was not meant to specify the exact date when Docker was released, but in a more broader manner for container-solutions.

Thorsten Hirsch

There were no notable container solutions before Docker. All we had was VMs. Well, there were jails in FreeBSD and zones in Solaris, and I guess z/OS also had a very enterprisey solution, but none of them were useful for developers.

Zafri Zulkipli

One of my favorite things about Docker is that there are so many images available. Let's say I wanna spin up Jenkins for quick CI/CD: the image is already there. GitLab? It's there too, and without much configuration as well!
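For example, the official Jenkins image boots with a single command (the ports here follow its own setup instructions):

```shell
# Jenkins in one command: 8080 for the web UI, 50000 for build agents
docker run -d -p 8080:8080 -p 50000:50000 jenkins/jenkins:lts
```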

Joey Hernández

"Full isolation from host machine and other apps"

I found this isn't actually true. I had to develop on some software of awful quality, which also communicated with many systems. It had never had a decent dev setup: endpoints all over the place, database and config values hard-coded, etc.

This was also high stakes. This was used to configure systems for communications governments use to coordinate military operations.

At that time (I don't know about now), Docker would let anything connect to anything. It wasn't safe to run locally without the network pulled.

I ended up building a system around Docker and Docker Compose to run the processes while listening to network events and fully managing iptables to apply output rules.

It's dangerous in some circumstances to say "fully isolated" or "contained", because it's not true.

Technically, if you say "just run with no network", then sure, I'm technically wrong, but the standard modes are more messed up. Inbound is isolated (you have to map or enable it), but outbound isn't. And when you want outbound isolated except for a few allowed things, it wouldn't let you.

When you want a two-way network, one direction is allow/deny; the other is just "allow it, bruv".

I don't know how it is now, but this is an important consideration for legacy software. You want it truly isolated, but able to log when it tries to connect out, and then to punch holes if need be. If you get it set up safely like that, it's a lifesaver for software you can't trust.

Joe Sweeney (Author)

Yeah, full isolation might be a stretch, especially with outbound connections. Thanks for the input.
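For anyone reading along, the closest built-in options I know of are the `none` network and "internal" networks; a sketch, not a full firewall:

```shell
# No network stack at all: any outbound attempt fails immediately
docker run --rm --network none alpine wget -T 2 -q http://example.com  # expected to fail

# An "internal" network permits container-to-container traffic but blocks egress
docker network create --internal no-egress
docker run --rm --network no-egress alpine sh -c 'echo reachable by siblings only'
```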

Peter Witham

I have to agree. I'm finding more and more that Docker offers solutions to problems faster than I could set everything up manually.

The next problem is to stop spending so long trying out so many interesting images :)

Mike CK

I realised the power of Docker when I started using Hasura. Whenever I needed to add any functionality to the server, all I needed was to add the relevant configuration details to the docker-compose.yml.

Docker allowed me to use a single DigitalOcean droplet to run Hasura and a Node.js authentication app side by side.
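The gist was a compose file roughly like this (a hypothetical sketch; the image tag, ports, and `./auth` path are illustrative, not my actual config):

```yaml
# docker-compose.yml (illustrative)
version: "3"
services:
  graphql-engine:
    image: hasura/graphql-engine:v2.0.10
    ports:
      - "8080:8080"
    environment:
      HASURA_GRAPHQL_DATABASE_URL: postgres://user:pass@db:5432/app  # placeholder

  auth:
    build: ./auth        # the Node.js authentication app
    ports:
      - "3000:3000"
```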

Andy Zhu

Thanks for sharing!