
Kevin Woblick

Posted on • Originally published at blog.kovah.de

Be careful with Docker ports!

Not too long ago I got an email from my hosting provider, actually a forwarded notice from the BSI (the German Federal Office for Information Security), stating that my Elasticsearch installation was accessible from the outside. I was stunned, as I thought I had secured the Elasticsearch Docker container behind nginx with basic authentication enabled. The fact is: Docker had bound port 9200 directly in the iptables configuration, ignoring the UFW firewall completely. Oops.

How Docker handles ports

Well, not so oops. Thankfully, the Elasticsearch setup did not contain any sensitive data, as it was just a playground for Docker-based testing of Elasticsearch.

So, what had I done wrong that exposed the app to the public?

The Docker setup as a security nightmare

I don't want to bother you with a lot of details here, but the specific part that introduced this security hole was the port configuration itself. The corresponding docker-compose file looked like this, taken directly from the documentation:

version: '2.2'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.4.0
    container_name: es01
    # ...
    ports:
      - 9200:9200 # <<<

At first glance, this did not look like an issue at all. I knew that I had blocked all incoming ports except HTTP(S) with the firewall, which is UFW in my case. The problem is that with this configuration, Docker binds port 9200 on the host machine to port 9200 in the container. Again, I thought this wouldn't be a problem, because I had blocked all other ports anyway. Docker, however, does not respect UFW (or possibly any other iptables front end), because it edits the iptables rules directly. This way, connections to the published port effectively bypass the firewall and reach the container directly. Port 9200 was now open to all incoming connections.
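
If you want to check whether your own host is affected, here is a rough sketch of how you could verify it (SERVER_IP is a placeholder for your server's public address, and I'm assuming Elasticsearch on port 9200 as in my case):

# Published ports show up as DNAT rules in the DOCKER chain of the nat table
sudo iptables -t nat -L DOCKER -n

# From a different machine: if this returns a response,
# the port is reachable despite your UFW rules
curl http://SERVER_IP:9200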

Unfortunately, this fact is somewhat buried in the Docker documentation, so I wasn't aware of this behavior. In the meantime, I have created pull requests for the documentation of both the Docker CLI and Docker Compose with a small warning for the port configuration section. Just today I also submitted a pull request to the Elasticsearch docs with another hint.

The better way to configure Docker ports

Of course, I took down Elasticsearch the moment I got the email, only to find out about the port issue much later, after searching for it on the internet. My initial intention was to run both Elasticsearch and Kibana behind a reverse proxy, nginx in my case. Nginx was supposed to route all requests from the outside to the app while enforcing basic authentication.

To do exactly this, bind port 9200 to localhost (127.0.0.1) on the host machine. It will not be exposed to the public, and only the host itself can interact with that port. It may look like this in the docker-compose file:

version: '2.2'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.4.0
    container_name: es01
    # ...
    ports:
      - 127.0.0.1:9200:9200 # <<<
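
With port 9200 only reachable from the host itself, nginx can then sit in front of it and require credentials. The following is only a minimal sketch of such a server block, not my exact setup; the domain name and the htpasswd path are placeholders, and TLS is left out for brevity:

server {
    listen 80;
    server_name elastic.example.com;   # placeholder domain

    location / {
        auth_basic           "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;   # created beforehand with the htpasswd tool

        # forward everything to the locally bound Elasticsearch port
        proxy_pass         http://127.0.0.1:9200;
        proxy_set_header   Host $host;
        proxy_set_header   X-Real-IP $remote_addr;
    }
}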

I am pretty sure that huge security issues like this could be prevented by simple warnings in the documentation. If there is no warning or notice, how are people supposed to know about these specific behaviors? I hope the pull requests will be merged, so nobody else falls into this trap and possibly exposes sensitive company data.


This article was first published on Blog.Kovah.de

Top comments (18)

Paulo Renato

I am pretty sure that huge security issues like this could be prevented by simple warnings in the documentation.

In the first instance this is the traditional problem with vendors: they ship insecure software by default, just for the sake of developer convenience, and then developers need to learn and remember to secure things themselves when they go into production.

The industry should instead adopt a secure-by-default mindset, so that the developer would have to learn how to make their installation insecure; then, when deploying to production, they wouldn't have this type of issue.

Guess what, now they will not fix this insecure default because doing so is backwards incompatible.

After so many data breaches I don't understand why this mindset still prevails in our industry!!!

Denis Rendler

If you will allow me, I would say this is a somewhat harsh, generalizing view.

First of all, I am looking just now at the Docker docs, which state the following: "otherwise, firewall rules will prevent all network traffic from reaching your container, as a default security posture". You can find it here: docs.docker.com/get-started/part2/

I admit I got bitten by the same issue with the exact same ELK stack. But that is because I hadn't read, top to bottom, how networking works with Docker.

Also, declaring that vendors "do insecure software by default, just for the sake of developer convenience" is, I think, a bit of a narrow view. I've been working, on and off, with Linux since the 90's and the default security model hasn't changed at all since. Yet the Linux ecosystem is still considered one of the most secure ones. To follow the statement above: Linux firewalls come with a default rule to ALLOW ALL traffic. Is this really for developer convenience? How would you be able to access a remote server if the default setting were to DROP ALL traffic?
Or how about if the regular user you connect to on Linux did not have, by default, the right to even write to their own home folder?
I am just saying, there are as many different points of view as there are people involved, either developers or users, in a specific project.
Stop for a second and validate first the assumption that you might have just missed something before declaring that a project is insecure by default for the sake of convenience.

Paulo Renato

First of all, thanks for replying to me. I love to exchange ideas, but sometimes I can be very passionate about them.

My statement was aimed more at vendors that build software like databases and other tooling, not specifically operating systems.

But addressing your points:

Also, declaring that vendors "do insecure software by default, just for the sake of developer convenience" is, I think, a bit of a narrow view.

Not that narrow, because security is always seen as friction for user satisfaction, and thus it is more often than not relaxed or non-existent for the sake of user satisfaction, time to market, cost, etc.

I've been working, on and off, with Linux since the 90's and the default security model hasn't changed at all since.

I see this as a problem, because the threat model has changed a lot, and security flaws keep appearing in software that was judged to be secure for decades.

Yet the Linux ecosystem is still considered one of the most secure ones.

I agree, and that is why I use it on my desktop, but that doesn't mean it could not be much better.

Linux firewalls come with a default rule to ALLOW ALL traffic. Is this really for developer convenience? How would you be able to access a remote server if the default setting were to DROP ALL traffic?

The SSH port registered with IANA is 22, so it could be open by default for users with a password, and everything else closed. As it is now, you have millions of servers deployed by developers that don't have a firewall enabled at all, or have it far too open, either because they didn't know, forgot, or were just lazy. Either way, data breaches keep piling up and our data keeps getting exposed.

Or how about if the regular user you connect to on Linux did not have, by default, the right to even write to their own home folder?

If it is their home folder, of course the system should create it with permission for them to read and write, but not with permission for the world to read it, as it is now.

I am just saying, there are as many different points of view as there are people involved, either developers or users, in a specific project.

Yes, you are correct, but security is not a first-class citizen in these projects; more often than not it's an afterthought. Once more, the current state of data breaches due to the lack of security in so many things we use daily is just alarming.

Stop for a second and validate first the assumption that you might have just missed something before declaring that a project is insecure by default for the sake of convenience.

I have stopped to think about this many times during my career, and the more time passes, the more I am convinced that projects were by nature not built with security as a first-class citizen from day zero. Hopefully the huge fines and frequent data breaches are changing this behaviour.

See, for example, MySQL, Elasticsearch and MongoDB, all of which allow themselves to be installed with insecure defaults, i.e. without authentication, and then you find them publicly accessible in production because the developer forgot, or did not know, that they needed to be secured. You can use Shodan to see some publicly exposed ones. Just change the search query to your software of choice.

Denis Rendler

First of all, thanks for replying to me. I love to exchange ideas, but sometimes I can be very passionate about them.

Don't worry. I think we all do that, when it comes to security at least. :)

Myself being a developer constantly fighting with customers to keep their apps safe, I agree with you 99%. I agree that usually security isn't thought of during development, and maybe not even after the product has been released. But there is also the fact that there are times when thinking too much about security can hinder a product from being released.

The threat model is indeed a living organism which is always changing. That is why we need more people to be aware of security issues. And the new laws and fines will help with that, for a time at least.

But I am wondering how one would go about thinking through the threat models for their product when that product is used in so many different scenarios.
Let's take your database example. How would you go about building a threat model when your product is used for storage, for analytics, or for powering the next unthought-of product that will help millions of people? A DB is used in servers, in mobile apps, or even in embedded devices. And I think we could continue indefinitely with this example alone.

How would you go about building a threat model that will cover all cases when the threat model is not the same for any two people?

Although my wife and I share the same house, same car, same hopes and dreams, our threat models are radically different. She enjoys and requires usability, while I need security and awareness above all. For example, a simple backup server for our photos took me several days to plan, build and configure before I even mentioned to her that I was working on it. And all she wanted was a place to offload her phone photos to that would be reachable whenever she wanted or needed them. Any cloud solution could have done that, and faster.

Circling back to Docker, Docker doesn't do security, it does virtualization.
The security aspect that results from the application's isolated environment is just a bonus.

I see a lot of people selling Docker as an extra layer of security, which is just wrong. It doesn't offer anything that any other virtualization environment doesn't.

They only do virtualization because that is what they are good at. That is why they offload the security part to other, more experienced and more mature systems like AppArmor, SELinux, and Linux's user management and network handling.

That is why I said Docker is not bypassing the firewall, but instead it uses it to connect the virtualized environment where the app runs to the real world.

The docs are missing this, or aren't putting this information front and center? I agree! But let's be honest, whose docs don't need improvements?

And I would love to continue, but I need to get back to my project. Who knows, maybe someday we will meet face to face and continue this discussion. I would certainly love that.

Kevin Woblick

Yes, sadly this is true. Hope that at least the notice will be merged into the documentation so other users are warned about this.

Jannik Wempe

Hey Kevin, thanks for the info :-)
I am not quite sure if I am right, but maybe you should use "expose" instead of "ports".

From the docs:
Expose ports without publishing them to the host machine - they’ll only be accessible to linked services. Only the internal port can be specified.

Nevertheless, I didn't know that Docker somehow bypasses the firewall if a port is published to the host.
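
For reference, a sketch of what the compose file from the article could look like with expose instead of ports (the port is then only reachable by other containers on the same Docker network, not from the host or the outside):

version: '2.2'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.4.0
    container_name: es01
    # ...
    expose:
      - "9200"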

Paulo Renato

When you use 9200:9200 you are in fact using 0.0.0.0:9200:9200, and this is a design flaw in Docker, because 0.0.0.0 exposes the port not just to the host but to the world.

Regarding expose, I think it is only there for backwards compatibility with the deprecated --links option, in order to allow inter-container communication inside the Docker network, not communication with the machine hosting the Docker engine.

Here the use case is to really expose to the host what is running inside the container, so it really needs to use ports, but always with the 127.0.0.1 prefix.

Denis Rendler

Docker is NOT bypassing the firewall. It creates rules inside the kernel to redirect traffic that comes to the host, from the host's specific port to the app inside the container. These rules are evaluated before your filter rules, because the routing is done before the kernel starts checking the filter table rules. So if the container responds to the packet saying "it is for me", the kernel says "handle it" and moves on to the next packet. Otherwise it goes on checking the other rules until either one matches or the default action is applied - which on most Linux OSs is ALLOW.

Jannik Wempe

Ah okay, I get it. Thanks for your explanations. Just thought the proxy would be in the same docker network.

Kevin Woblick

Absolutely. Using expose, or no port binding at all, is the safer approach. Unfortunately that does not work if you want to route traffic to the container through a proxy running on the host, which was the case for me. :(

Denis Rendler

Hey, Kevin.

I really liked your article, especially as I got bitten the same way by exactly the same ELK stack. :) But I think there are a few points that might need a bit of reconsideration.
The thing is that iptables itself "is used to set up, maintain, and inspect the tables of IP packet filter rules in the Linux kernel", and it comes by default with, I think, all Linux distros. It is just an interface between the user and the kernel. If I am not mistaken, UFW is the same: an easy interface for the user to configure rules.

Given that, Docker is indeed using iptables to do a few magic tricks with its networking, but this is explained here: docs.docker.com/network/iptables/ and a bit more here: docs.docker.com/config/containers/... and in a few other scattered places in the docs. Nobody's docs are perfect :)

An improvement that I think you could make, and which helped me, is to add a rule to the DOCKER-USER chain to log and drop all packets. That way you are safe from future mistakes.
Another trick I used was to run my proxy, HAProxy in my case, inside a container and simply create a network between the proxy and the ELK stack. That way I no longer needed to map ports to the host at all. Everything was contained inside the Docker network, and on the host only ports 80 and 443 were allowed. Whenever I need to add a new service I just attach it to the HAProxy network and voilà.
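
A rough sketch of the DOCKER-USER idea, assuming eth0 is your external interface (adjust it to your setup; a LOG rule can be inserted the same way if you want logging):

# Insert a blanket DROP for traffic entering on eth0 that is forwarded to containers
sudo iptables -I DOCKER-USER -i eth0 -j DROP

# Then insert an ACCEPT above it so replies to connections the containers
# initiated themselves still get through
sudo iptables -I DOCKER-USER -i eth0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT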

Denis Rendler

Also, this option might help you:
--ip: Default IP when binding container ports (default 0.0.0.0)

You can configure it through daemon.json, located in /etc/docker/, and it will then always bind host ports to 127.0.0.1 if you want.
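
A minimal daemon.json sketch for that (assuming the default /etc/docker/daemon.json location; the daemon needs a restart afterwards):

{
  "ip": "127.0.0.1"
}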

I hope it helps. ;)

Acidiney Dias

In the same way, I have used docker run -it -v mongodata:/data/db -p 127.0.0.1:27017:27017 --name mongodb -d mongo to avoid the same UFW security issue.

Ryan Jan Borja

If I ran a Postgres container (docker run -d --network custom_net --publish 127.0.0.1:5433:5432 postgres), wouldn't that be accessible only to the Docker network and not from the host machine?

Kevin Woblick • Edited

In your case the container wouldn't be accessible from the outside anyway, because you specified the published ports as 127.0.0.1:5433:5432. However, if you published the ports globally, like docker run -d --network custom_net --publish 5433:5432 postgres, you would still be able to access the container from the outside.
The thing is that Docker networks are public by default and connected to the host network. You would have to create your network with the --internal flag. But this would make it impossible to access Postgres even from your host, because it would then run in a completely isolated network.

So, specifying your ports with the 127.0.0.1 prefix is the most reliable and secure way.
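
For completeness, a sketch of the --internal variant (the network name, container name and password are placeholders):

# Create an isolated network with no external connectivity
docker network create --internal internal_net

# Containers attached to it can only talk to each other,
# not to the host or the outside world
docker run -d --network internal_net --name pg -e POSTGRES_PASSWORD=change_me postgres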

Ryan Jan Borja

(If I get it right:) If I only want PostgreSQL to be accessible from the host machine, I should not include a custom network, and my docker run command should look like this: docker run -d --publish 127.0.0.1:5433:5432 postgres. If I want it to be accessible within the network or even from outside, I should remove 127.0.0.1.

Kevin Woblick

Yes, that's correct.

Jadran Mestrovic

The Docker daemon is the brain of the entire operation, and it sits on the server component.