
André König

Docker: Restricting in- and outbound network traffic

Imagine a scenario in which you have a stinky module deep in your dependency graph – a dependency that wants to do something evil, a piece of malware. "But I'm isolating everything in a Docker container at runtime!", you might say. Indeed, that helps when the evil module tries to mess with your filesystem or other host-related aspects. But what if the module wants to phone home?

I thought about that problem today and want to share my approach with you. Before I headed straight into tinkering, I created the following acceptance criteria:

  1. The container should accept in- and outbound traffic from and to a known network
  2. The container should block in- and outbound traffic from and to all other networks
  3. The application within the container should run as a non-privileged user

"A privileged user is necessary for restricting network traffic." was my first thought which conflicts with the third acceptance criteria. Meh!

After implementing some test probes, I settled on the following solution:

  • Provisioning of a base image which ships with a non-privileged user and an ENTRYPOINT script
  • The ENTRYPOINT script gets executed when the container starts, defines the iptables rules and then starts the given application as the configured non-privileged user.

Let's have a look at the actual solution. The Dockerfile:

FROM node:8-alpine

LABEL maintainer="André König <andre.koenig@gmail.com>"

RUN apk add --update curl iptables sudo && \
    addgroup -S app && adduser -S -g app app

COPY entrypoint.sh /entrypoint.sh

ENTRYPOINT ["/entrypoint.sh", "--"]

As you can see, pretty straightforward: we use a base image – in this case the Node.js 8 Alpine image – add a new group and create a new user. The second part copies the entrypoint.sh script into the container and defines it as the ENTRYPOINT. Nothing special here. So let's define the actual entrypoint.sh:

#!/usr/bin/env sh

#
# iptables configuration
#
# The following allows in- and outbound traffic
# within a certain `CIDR` (default: `192.168.0.0/24`),
# but blocks all other network traffic.
#
ACCEPT_CIDR=${ALLOWED_CIDR:-192.168.0.0/24}

iptables -A INPUT -s $ACCEPT_CIDR -j ACCEPT
iptables -A INPUT -j DROP
iptables -A OUTPUT -d $ACCEPT_CIDR -j ACCEPT
iptables -A OUTPUT -j DROP

#
# After configuring `iptables` as root, execute
# the passed command as the non-privileged `app` user.
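#
# Note: the "--" passed via the ENTRYPOINT becomes "$1" here, so
# `sh -c "$@"` expands to `sh -c -- "<command>"` and the given
# command string runs unchanged.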
#
sudo -u app sh -c "$@"

Usage examples

After building the image via docker build -t node-sandbox ., let's test drive this new sandboxed environment.

For example, in order to test if there is really no outbound traffic, try:

docker run --privileged -it --rm node-sandbox "curl https://google.com"

curl should give up after some time and you will see a curl: (6) Could not resolve host: google.com error, since even the DNS lookup to a resolver outside the allowed network gets dropped.
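
To double-check that this is not just a DNS issue, you can also target an IP address outside the allowed range directly (1.1.1.1 is merely an arbitrary public address here, and --max-time keeps the test short); the request should simply time out:

docker run --privileged -it --rm node-sandbox "curl --max-time 5 http://1.1.1.1"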

On the other hand, communicating with a system in your local network should work. So if your local network is 192.168.0.0/24 and you have something running on 192.168.0.1 (maybe your Wi-Fi router), you should see a response when executing:

docker run --privileged -it --rm node-sandbox "curl http://192.168.0.1"

The attentive reader might have noticed that entrypoint.sh checks whether the environment variable ALLOWED_CIDR is set and uses that CIDR-notated network instead of the fallback:

docker run --privileged -it --rm -e "ALLOWED_CIDR=10.0.0.0/8" node-sandbox "curl http://10.0.0.1"

Conclusion

One of my clients had to transform highly sensitive user data within a Node.js-based application over and over again. Because of the high security standards within that organization, they required an isolated network environment in which no potentially evil dependency could ever reach out and send sensitive information to an external system. Even though the application doesn't run within a container orchestrator, where you have fine-grained control over the network stack, but on a user's machine, this seems to be a solution that works quite well. If you face a similar situation, this approach might assist you, and I'm happy to read your views.

Top comments (14)

Ricky Zhang

I don't think this is a safe approach, because your container by default runs as root (I mean root both on the host and in the container). Anyone who can access your container can do privilege escalation, e.g. "docker exec -it your_container /bin/bash". All they need is to be in the docker group.

Here is the proper way to do it (a rough sketch follows the list):

  1. Use a non-root account.
  2. Drop your privileges when launching the container.
  3. On your host, block outbound access and limit inbound access with iptables.
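
A minimal sketch of steps 1 and 2 might look like this (the base image is the one from the post; the user, index.js and the node-sandbox tag are only placeholders). First the Dockerfile:

FROM node:8-alpine
RUN addgroup -S app && adduser -S -g app app
# copy your application files here, then switch to the non-root user
USER app
CMD ["node", "index.js"]

and then launch it without --privileged and with all capabilities dropped:

docker run -it --rm --cap-drop=ALL node-sandbox
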
André König

Thanks for your reply.

Drop your privileges when launching the container

Exactly. That is the reason why the actual application is running as a non-privileged user within the container. The post is pretty old and there are better ways on the orchestration layer nowadays, but the key idea is to isolate network traffic within the container.

Ricky Zhang

No, your container still runs as root. Use the USER instruction in your Dockerfile.

When you launch the container, you add the --privileged option. This lets anyone in the docker group access your /dev. They can access the file system.

In addition, you should apply the iptables rules on the host (outside of the container).

André König

Yep, that is true, BUT the actual application gets executed as a non-privileged user (see the ENTRYPOINT script).

In other words: yes, the container is running as root (otherwise it wouldn't be possible to configure the iptables rules), but the application (in this case "curl") runs as a non-privileged user.

The respective line is:

sudo -u app sh -c "$@"
Ricky Zhang

Neither of us is a native English speaker, but I want to state that your idea is wrong.

First, make the iptables changes on the host. You don't have to do it inside the container, and then you don't need to be the root user in the Dockerfile.

Secondly, your container still runs as root and is launched with the --privileged option. Anyone with docker group permission can go inside your container. Then they can access /dev and read from or write to your hardware and software devices freely. This is a typical privilege escalation. Don't you agree with me?

Thirdly, you don't need to say you switch accounts. I can read. But it doesn't change the fact that your container still runs as root.

Again, if you don't get it, it is fine. I'm done with my explanation.

But I hope you will remove this blog post and stop misleading others.

This is very important to keep our Internet safe.

PS: my colleague told me to calm down, because neither of us listened to the other. So I wrote a simple PoC in my repo: github.com/rickyzhang82/demo-misco...

André König

First of all, I don't like the tone of your comments. There is no need to be harsh when arguing from a different perspective and (especially) when having a different use case in mind.

Again, if you don't get it, it is fine. I'm done with my explanation.

I doubt that you have read the blog post in much detail, which is fine, but a little bit of restraint would be appropriate IMHO. Running containers as root is bad in general; that is nothing we really have to discuss.

According to his approach, anyone with docker group permission can do some serious damage as root and bypass his firewall rule defined inside the container.

Right, this is the major misunderstanding of the described approach.

Let me put it this way: you could implement the described approach as an ordinary bash script which gets executed as root (on the host), dynamically configures iptables rules, switches to a non-privileged user and executes the application. That is all the post is about. Therefore, my approach is as safe as executing an application as a non-privileged user on a "not containerized" system. The container acts just as a portable runtime environment (for Node.js and the application dependencies), nothing more.
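
In host terms, that script would look roughly like this (app.js and the app user are placeholders, and the CIDR is the default from the post):

#!/usr/bin/env sh
# executed as root on the host: restrict traffic first ...
iptables -A INPUT -s 192.168.0.0/24 -j ACCEPT
iptables -A INPUT -j DROP
iptables -A OUTPUT -d 192.168.0.0/24 -j ACCEPT
iptables -A OUTPUT -j DROP
# ... and afterwards run the application as a non-privileged user
sudo -u app node app.js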

Regarding your PoC: Thanks for demonstrating your perspective. You should always be careful about who you add to the Docker group. After all, in my case there is only one user who is in this group and therefore has access to communicate with the Docker daemon: the operator of the actual host.

Ricky Zhang

I read your blog carefully and understood your approach completely.

You violated the principle of least privilege (en.wikipedia.org/wiki/Principle_of...). You really don't have to keep root permissions in the Dockerfile and add the --privileged option when launching the container just to impose firewall rules via iptables. It is completely unnecessary.

The better way is to add your firewall rules on the host to the DOCKER-USER chain. BTW, only root on the host can modify firewall rules, so this is secure; no one in the docker group can modify them.
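
For reference, a rough example of such a host-side rule (assuming the default docker0 bridge and the 192.168.0.0/24 network from the post; adjust both to your setup):

# as root on the host: drop any forwarded container traffic
# whose destination is outside the allowed network
iptables -I DOCKER-USER -i docker0 ! -d 192.168.0.0/24 -j DROP

# inspect the resulting chain
iptables -nvL DOCKER-USER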

If you can do it on the host, why do you want to keep root and the --privileged option in the container? Don't you think you violated PoLP?

Regarding my comment style, it might be harsh. Do I have to be politically correct and say "hey, you might have overlooked xyz in your approach"? Probably not. I think it is better this way: if it is wrong, just say it is wrong and correct it. Why do we have to sugarcoat everything we say and self-censor ourselves?

André König

[...] my colleague told me to calm down.

As your colleague told you as well: calm down and then reconsider my described use case (portable runtime environment, etc.).

The direction your discussion style is taking is toxic, and I'm not interested in being part of it.

Again, I'm open to a healthy debate, but your style of writing doesn't meet that requirement.

Have a nice day.

Ricky Zhang

It is very subjective to determine whether a debate is healthy or not, but it is objective to determine whether an approach is right or wrong. You have no grounds to dispute the fact that you violated the principle of least privilege.

Have a good weekend, too.

André König

Well, I did not violate PoLP, because the subject to isolate is the actual application, but this is the aspect you don't want to see. Anyways ...

Beni Ben zikry

Hi André, I came across the post while looking for something completely unrelated but just had to reply and say I'm really sorry you had to endure this entire thread.

As you mentioned (and as this post is indeed old) there are more expressive ways to deal with those issues today on the orchestration layer, and with many k8s options for local testing (Kind, microk8s, minikube etc.), one can easily configure and test privileges, assign granular security contexts, define network policies and control and monitor ingress/egress traffic.

In a real life scenario I would take this a step further and try to sniff outgoing requests with something like ksniff to look at what goes out to the C&C / output.

André König

Hi Beni, that is good to hear. Thanks a lot for your kind words :)

Mark Gordon

Nice technique, especially given the recent Log4j vuln. You can block outbound traffic using K8S, but for simple apps, that can be simply Too Much Bloat!
