DEV Community

Discussion on: Docker Demystified

Stephen Brown

Thank you, a very clear response.

I recently had the opposite problem (not quite symmetrical to this one): I needed container services to connect to a service listening on the host, namely postfix, so that I could send out mail from within a container. The Docker-recommended solution was to manually specify the subnet and gateway (representing the Docker "host") of a bridged network that the container services would join. This gave me a subnet of predictable IPs for the container services, so postfix could be configured to allow addresses from that network in its client restriction rules. (Postfix had originally been set to listen on loopback only, and I had to make it listen on all interfaces so that container services on this subnet could reach it on the gateway interface, hence the need to tighten security around which client IPs could connect.)
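For reference, that setup can be sketched roughly like this. The network name, subnet, image name, and port are placeholders, not the exact values from my setup:

```shell
# Create a bridge network with a fixed subnet and gateway so that
# container IPs are predictable (hypothetical addresses):
docker network create \
  --driver bridge \
  --subnet 172.25.0.0/24 \
  --gateway 172.25.0.1 \
  mail-clients

# Attach an app container; it can now reach postfix on the host
# via the gateway address, e.g. 172.25.0.1:25.
docker run -d --network mail-clients my-app

# On the host, in /etc/postfix/main.cf, listen on all interfaces
# and trust only that subnet as SMTP clients:
#   inet_interfaces = all
#   mynetworks = 127.0.0.0/8 172.25.0.0/24
```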

Stephen Brown

Then, to add to the confusion, there's Docker swarm. I'm not even sure how networking works there under the hood. For instance, multiple instances of the same container shouldn't even expose their ports, much less publish them, yet HTTP proxy services like Traefik have a way of connecting to them through some kind of virtual port scheme? I have no idea...

Frank Rosner

Yeah, some cluster managers have their own networking, Kubernetes for example. Anyway, if you only have one machine, there's no need for a cluster manager.

In your case, couldn't you run postfix in another container if you need to access it, and configure a user-defined network so that the two containers can communicate?
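A minimal sketch of what I mean, with hypothetical network, container, and image names: on a user-defined bridge network, Docker's embedded DNS lets containers reach each other by container name, so no fixed IPs are needed.

```shell
# Create a user-defined bridge network (name is a placeholder):
docker network create mailnet

# Run postfix and the app on that network:
docker run -d --name postfix --network mailnet some-postfix-image
docker run -d --name app --network mailnet my-app

# Inside "app", the mail server resolves by container name, so
# the app can be configured with something like:
#   smtp_host=postfix
#   smtp_port=25
```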

Stephen Brown

That is the better solution in general... indeed, I considered it after I had set things up this way. Postfix could sit in an external "mail" Docker network for the other containers, and also publish its port for services on the host proper. But I decided the mail server was so important (for stability, availability, etc.) that I'd rather have it running as a plain service on the host.

Frank Rosner

Just playing devil's advocate here: why exactly are availability and stability increased when it's running on the host?

Stephen Brown

I guess it boils down to how stable the version of Docker you are running is, or how well it fits your host environment. If Docker falls over (and I have had that happen before), it takes all the containers with it.

The reason for the Docker service going down may not even be Docker's fault. Whatever the case, deploying a critical service inside a container increases the number of factors that might make it unavailable, and I didn't want that with the mail server.

Frank Rosner

I agree. On the other hand, if Docker goes down, nobody will be using postfix anyway, because all the containers are down... :P

Btw, there is also the live-restore feature, which lets you keep containers running during a downtime of the daemon: docs.docker.com/config/containers/...
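For anyone curious, enabling it is a one-line daemon setting. Sketch below, assuming a systemd-based host; the reload step sends the daemon a config-reload signal without restarting containers:

```shell
# Enable live-restore in /etc/docker/daemon.json so containers
# keep running while dockerd is down or restarting:
#   {
#     "live-restore": true
#   }

# Reload the daemon configuration without restarting it:
sudo systemctl reload docker
```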

Stephen Brown

I also intended to allow the mail server to accept incoming mail at some point... and the thought process was that I'd rather never have mail bounce, as it's a real downer, especially with potential clients at stake. Mind you, setting up a mail server properly to handle incoming mail is a pain, so I'll probably just delegate that to another mail service provider in the end.

Anyway, thanks for mentioning the live-restore feature - I hadn't heard of it until now!