Derick Bailey

Originally published at derickbailey.com

10 Myths About Docker That Stop Developers Cold

I was discussing the growth of Docker and I kept hearing bits of information that didn't quite seem right in my mind.

“Docker is just inherently more enterprise”

“it's only tentatively working on OS X, barely on Windows”

“I'm not confident I can get it running locally without a bunch of hassle”

… and more

There are tiny bits of truth in these statements (see #3 and #5, below, for example), but tiny bits of truth often make it easy to overlook what isn't true, or is no longer true.

And with articles that do nothing more than toss around jargon, require inordinate numbers of frameworks, and discuss how to manage ten-thousand-billion requests per second with only 30,000 containers, automating 5,000 microservices hosted in 600 cloud-based server instances…

Well, it's easy to see why Docker has a grand mythology surrounding it.

It's unfortunate that the myths and misinformation persist, though. They rarely do more than stop developers from trying Docker.

So, let's look at the most common myths – some that I've seen, and some I've previously believed – and try to find the truth in them, as well as solutions if there are any to be found.

Myth #10: I can't develop with Docker…

because I can't edit the Dockerfile

As a developer, I have specific needs for tools and environment configuration when working. I've also been told (rightfully so) that I can't edit the production Dockerfile to add the things I need.

The production Docker image should be configured for production purposes, only.

So, how do I handle my development needs, with Docker? If I can't edit the Dockerfile to add my tools and configuration, how am I supposed to develop apps in Docker, at all?

I could copy & paste the production Dockerfile into my own, and then modify that file for my needs. But, we all know that duplication is the root of all evil. And, we all know that duplication is the root of all evil. Because duplication is the root of all evil.

The Solution

Rather than duplicating the Dockerfile and potentially causing more problems, a better solution is to use the Docker model of building images from images.

I'm already building my production application image from a base like “node:6”. So, why not create a “dev.dockerfile” and have it build from my application's production image as its base?

Dockerfile

FROM node:6

# ... production configuration

Production Build

$ docker build -t myapp .

dev.dockerfile

FROM myapp

# ... development configuration

Development Build

$ docker build -t myapp:dev -f dev.dockerfile .

Now I can modify the dev.dockerfile to suit my development needs, knowing that it will use the exact configuration from the production image.
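
As a sketch of what that might look like – the specific additions here are placeholders for whatever my workflow actually needs:

FROM myapp

# hypothetical dev-only configuration
ENV NODE_ENV=development

# a file watcher for automatic restarts, for example
RUN npm install -g nodemon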

Want to See a Dev Image in Action?

Check out the WatchMeCode episode on Creating a Development Container – part of the Guide to Building Node.js Apps in Docker.

Myth #9: I can't see anything in this container

because I can't see into my container, at all!

Docker is application virtualization (containerization), not a full virtual machine to be used for general computing purposes.

But a developer often needs to treat a container as if it were a virtual machine.

I need to get logs (beyond the simple console output of my app), examine debug output, and ensure all of my needs are being met by the file and system changes I've put into the container.

If a container isn't a virtual machine, though, how do I know what's going on? How do I see the files, the environment variables, and the other bits that I need, inside the container?

The Solution

While a Docker container may not technically be a full virtual machine, it does run a Linux distribution under the hood.

Yes, this distribution may be a slimmed down, minimal distribution such as Alpine Linux, but it will still have basic shell access among other things. And having a Linux distribution as the base of a container gives me options for diving into the container.

There are two basic methods of doing this, depending on the circumstances.

Method 1: Shell Into A Running Container

If I have a container up and running already, I can use the “docker exec” command to enter that container, with full shell access.

$ docker exec -it mycontainer /bin/sh

Once I've done this, I'll be inside the container as if I were shelled into any Linux distribution.

Method 2: Run A Shell as the Container's Command

If I don't have a container up and running – and can't get one running – I can run a new container from an image, with the Linux shell as the command to start.

$ docker run -it myapp /bin/sh

Now I have a new container that runs with a shell, allowing me to look around, easily.

Want to See a Shell in Action?

Check out these two episodes from WatchMeCode's Guide to Learning Docker and Guide to Building Node.js Apps in Docker.

Myth #8: I have to code inside the Docker container?

and I can't use my favorite editor?!

When I first looked at a Docker container, running my Node.js code, I was excited about the possibilities.

But that excitement quickly diminished as I wondered how I was supposed to move edited code into the container, after building an image.

Was I supposed to re-build the image every time? That would be painfully slow… and not really an option.

Ok, should I shell into the container to edit the code with vim?

That works.

But, if I wanted to use a better IDE / editor, I wouldn't be able to. I'd have to use something like vim all the time (and not my preferred version of vim).

If I only have command-line / shell access to my container, how can I use my favorite editor?

The Solution

Docker allows me to mount a folder from my host system into a target container, using the “volume mount” options.

$ docker run -v /dev/my-app:/var/app myapp

With this, the container's “/var/app” folder will point to the local “/dev/my-app” folder. Editing code in “/dev/my-app” – with my favorite editor, of course – will change the code that the container sees and uses.

Want to See Editing in a Mounted Volume, in Action?

Check out the WatchMeCode episode on editing code in a container – part of the Guide to Building Node.js Apps in Docker.

Myth #7: I have to use a command-line debugger…

and I vastly prefer my IDE's debugger

With the ability to edit code and have it reflected in a container, plus the ability to shell into a container, debugging code is only a step away.

I only need to run the debugger in the container, after editing the code in question, right?

While this is certainly true – I can use the command-line debugger of my programming language from inside a Docker container – it is not the only option.

How is it possible, then, to use the debugger from my favorite IDE / editor, with code in a container?

The Solution

The short answer is “remote debugging”.

The long answer, however, is very dependent on which language and runtime is used for development.

With Node.js, for example, I can do remote debugging over a TCP/IP port (5858). To debug through a Docker container, then, I only need to expose that port from my Docker image (the “dev.dockerfile” image, of course).

# ...

EXPOSE 5858

# ...

With this port exposed, I can shell into the container and use any of the typical methods of starting the Node.js debugging service before attaching my favorite debugger.
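
Note that EXPOSE only documents the port; to reach the debugger from the host, the container must be run with the port published. A minimal sketch – the entry point “app.js” is a placeholder, and this uses the legacy Node.js debug flag from the Node 6 era:

$ docker run -p 5858:5858 myapp:dev node --debug=5858 app.js

From there, an IDE's remote debugger can attach to localhost:5858.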

Want to See Visual Studio Code Debug a Node.js Container?

Check out the WatchMeCode episode on debugging in a container with Visual Studio Code – part of the Guide to Building Node.js apps in Docker.

Myth #6: I have to “docker run” every time

and I can't remember all those “docker run” options…

There is no question that Docker has an enormous number of command-line options. Looking through the Docker help pages can be like reading an ancient tome of mythology from an extinct civilization.

When it comes time to “run” a container, then, it's no surprise that I'm often confused or downright frustrated, never getting the options right the first time.

What's more, every call to “docker run” creates a new container instance from an image.

If I need a new container, this is great.

If, however, I want to run a container that I had previously created, I'm not going to like the result of “docker run”… which is yet another new container instance.

The Solution

I don't need to “docker run” a new container every time I need one.

Instead, I can “stop” and “start” the container in question.

Stopping and starting also persists the state of the container between runs, meaning I can restart a container right where it left off. If I've modified any files in the container, those changes will be intact when the container is started again.
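
A minimal sketch – the container name “myapp-dev” is just a placeholder:

$ docker run -it --name myapp-dev myapp:dev
$ docker stop myapp-dev
$ docker start -ai myapp-dev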

Want to See Start and Stop in Action?

There are many episodes of WatchMeCode's Guide to Learning Docker and Guide to Building Node.js Apps in Docker that use this technique.

If you're new to the idea, however, I recommend watching the episode on basic image and container management, which covers stopping and re-starting a single container instance.

Myth #5: Docker hardly works on macOS and Windows

and I use a Mac / Windows

Until a few months ago, this was largely true.

In the past, Docker on Mac and Windows required the use of a full virtual machine with a “docker-machine” utility and a layer of additional software proxying the work into and out of the VM.

It worked… but it introduced a tremendous amount of overhead while limiting (or excluding) certain features.

The Solution

Fortunately, Docker understands the need to support more than just Linux for a host operating system.

In the second half of 2016, Docker released the official Docker for Mac and Docker for Windows software packages.

This made it incredibly simple to install and use Docker on both of these operating systems. With regular updates, the features and functionality are nearly at parity with the Linux variant, as well. There's hardly a difference anymore, and I can't remember the last time I needed an option or feature that was not available in these versions.
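
Once installed, two standard commands make for a quick sanity check that everything is wired up:

$ docker version
$ docker run hello-world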

Want to Install Docker for Mac or Windows?

WatchMeCode has free installation episodes for both (as well as Ubuntu Linux!)

Myth #4: Docker is command-line only

and I am significantly more efficient with visual tools

With its birthplace in Linux, it's no surprise that Docker prefers command-line tooling.

The abundance of commands and options, however, can be overwhelming. And for a developer who does not regularly spend time in a console / terminal window, this can be a source of frustration and lost productivity.

The Solution

As the community around Docker grows, there are more and more tools that fit the preferences of more and more developers – including visual tools.

Docker for Mac and Windows include basic integration with Kitematic, for example – a GUI for managing Docker images and containers on my machine.

With Kitematic, it's easy to search for images in Docker repositories, create containers, and manage the various options of my installed and running containers.

Want to See Kitematic in Action?

Check out the Kitematic episode in WatchMeCode's Guide to Learning Docker.

Myth #3: I can't run my database in a container.

It won't scale properly… and I'll lose my data!

Containers are meant to be ephemeral – they should be destroyed and re-created as needed, without a moment's hesitation. But if I'm storing data from a database in my container, deleting the container will delete my data.

Furthermore, database systems have very specific methods in which they can scale – both up (larger server) and out (more servers).

Docker, it seems, specializes in scaling out – creating more instances of things when more processing power is required. Most database systems, on the other hand, require specific and specialized configuration and maintenance to scale out.

So… yes… it's true. It's not a good idea to run a production database in a Docker container.

However, my first real success with Docker was with a database.

Oracle, to be specific.

I had tried and failed to install Oracle into a virtual machine, for my development needs. I spent nearly 2 weeks (off and on) working on it, and never even came close.

Within 30 minutes of learning that there is an Oracle XE image for Docker, however, I had Oracle up and running and working.

In my development environment.

The Solution

Docker may not be great for running a database in a production environment, but it works wonders for development.

I've been running MongoDB, MySQL, Oracle, Redis and other data / persistence systems for quite some time now, and I couldn't be happier about it.

And, when it comes to the “ephemeral” nature of a Docker container? Volume mounts.

Like the code editing myth, a volume mount provides a convenient way of storing data on my local system and using it in a container.

Now I can destroy a container and re-create it, as needed, knowing I'll pick up right where I left off.
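
As a sketch with MongoDB, for example – /data/db is the mongo image's data directory, while the host path and the container name “devdb” are placeholders:

$ docker run -d --name devdb -v ~/data/mongo:/data/db mongo
$ docker rm -f devdb
$ docker run -d --name devdb -v ~/data/mongo:/data/db mongo

The second run picks up the data the first one wrote, because the data lives on the host, not in the container.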

Myth #2: I can't use Docker on my project

because Docker is all-or-nothing

When I first looked at Docker, I thought this was true – you either develop, debug, deploy and “devops” everything with Docker (and two hundred extra tools and frameworks, to make it all work automagically), or you don't Docker at all.

My experience with installing and running a database, as my first success with Docker, showed me otherwise.

Any tool or technology that demands all-or-nothing should be re-evaluated with an extreme microscope. It's rare (beyond rare) that this is true. And when it is, it may not be something into which time and money should be invested.

The Solution

Docker, like most development tools, can be added piece by piece.

Start small.

Run a development database in a container.

Then build a single library inside a Docker container and learn how it works.

Build the next microservice – the one that only needs a few lines of code – in a container, after that.

Move on to a larger project with multiple team members actively developing within it, from there.

There is no need to go all-or-nothing.

Myth #1: I won't benefit from Docker… At all…

because Docker is “enterprise”, and “devops”

This was the single largest mental hurdle I had to remove, when I first looked at Docker.

Docker, in my mind, was this grand thing that only the most advanced teams – teams with scalability concerns I would never see – had to deal with.

It's no surprise that I thought this way, either.

When I look around at all the buzz and hype in the blog world and conference talks, I see nothing but “How Big-Name-Company Automated 10,000,000 Microservices with Docker, Kubernetes, and Shiny-New-Netflix-Scale-Toolset”.

Docker may excel at “enterprise” and “devops”, but the average, everyday developer – like you and me – can take advantage of what Docker has to offer.

The Solution

Give Docker a try.

Again, start small.

I run a single virtual machine with 12GB of RAM to host 3 web projects for a single client. It's a meager server, to say the least. But I'm looking at Docker – just plain old Docker, by itself – as a way to more effectively use that server.

I have a second client – with a total of 5 part-time developers (covering less than 1 full-time person's worth of hours every week) – that is already using Docker to automate their build and deployment process.

I build most of my open source libraries for Node.js apps with Docker, at this point.

I am finding new and better ways to manage the software and services that I need to install on my laptop, using Docker, every day.

And remember …

Don't Buy The Hype or Believe The Myths

The mythology around Docker exists for good reason.

It has, historically, been difficult to play with outside of Linux. And it is, to this day and moving forward, a tremendous benefit to enterprise and devops work.

But the mythology, unfortunately, does little to help the developer that could benefit the most: You.

If you find yourself looking at this list of myths, truths and solutions, still saying, “Yeah, but …”, I ask you to take some time and re-evaluate what you think about Docker, and why.

If you still have questions or concerns about how a development environment can take advantage of Docker, get in touch. I'd love to hear your questions and see if there's anything I can do to help.

And if you want to learn the basics of Docker or how to develop apps within it, but don't know where to start, check out WatchMeCode's Guide to Learning Docker (from the ground up) and the Guide to Building Node.js Apps in Docker.

Top comments (11)

Eric B

I would like to call out some problems that I currently have with Docker for Windows, which I hope will be fixed in the future, because they truly make it unusable for a lot of situations at the current time.

Docker can mess up Windows networking. Something about the Virtual Switch that it installs in Hyper-V can completely trash your networking implementation on the machine that is running it, preventing any services that run on the bare metal from working – it will automatically forward all incoming connections to the Virtual Machine that runs all the Docker containers. This means that neither the host machine, nor any of the docker containers, can work correctly.

So far, the only solution that I've found to this is to load up the Hyper-V manager, after all my containers are up and running, and set the Virtual Switch to run on the External network. This gets all the networking running correctly. However, doing so also prevents me from starting up any new containers that have volumes mounted, because it needs drive sharing to work… and the drive sharing doesn't work on localhost with the virtual switch set to External network. So, if I need to restart or run any new containers, I have to switch the Hyper-V switch back to Internal network, restart or run the new containers, and then once they are started switch it back to External network.

Docker for Windows can be a serious pain in the ass, because of that. This does not happen on all my machines, but it does happen on the two machines that I actually need to use Docker on personally. This is a really nasty problem, and so far, I've been unable to find any specific solutions.

I can configure a new Docker machine, and use that as the default, which uses an External switch network by default. However, that still doesn't solve the problem that the Internal switch is re-created every time Docker starts, which breaks all the networking on the device.

Jonathan Boudreau

Personally, I found that Docker for Mac is completely unusable if you're trying to mount (using volumes) your code into the container (git status was taking ~4s). I instead opted to use Vagrant with an NFS file share where I run Docker from. At home I only use Linux, though; I only ran into issues at work.

I currently develop entirely from containers and use it to share my Neovim and tmux setup across all my machines.

David Virtser

Nice article!

You can add Portainer as another cross-platform alternative for a Docker management UI.

Also, using docker-compose when you have more than one service makes environment-specific adjustments easier, without creating a specific Dockerfile for each environment.

Brian Ganninger

Myth 5 corollary: while Docker (core, UI) is supported on all host systems, it's currently only able to run Linux and Windows containers – Mac is still off the table 😢 It may end up that Linux Swift against the standard library is the only viable option for building macOS code in Docker.

wingliu

Myth #11: I want to use Docker, but Docker Hub is expensive…
Any free alternative to Docker Hub?

Nico

I assume you want to host your private code, because Docker Hub is actually free (for public images, of course).
If you have a VM somewhere (say, AWS), you can run your own Registry (Docker Hub is essentially a Registry): docs.docker.com/registry/deploying/
If you are attempting to use it for production, I'd suggest implementing SSL with Let's Encrypt from the get-go. Also, for management, I still have to try out Portus: port.us.org/ (ACL, namespaces, storage management, etc.). It seems pretty cool, for a free alternative to Docker's enterprise offering.

Codejanovic

How about GitLab or Amazon ECR?

Anand

Thank you! Best Docker resource I've come across!

kar358

I'm starting Docker for Windows in August of 2020 and it's STILL a pain in the ass. Even when copy-pasting from up-to-date tutorials, every other command gives me some special new error to google.

Краљ Шуме

Daily blue screen of death (DRIVER_CORRUPTED_EXPOOL) with Docker on Win10. I don't even care; it's just crap I have to use.

Eric B

... that sounds like you actually have some hardware that has a crappy driver.