Gustavo Silva

Securing containers with reverse proxies

Introduction

Docker is a wonderful tool. There are lots of resources out there to help you get started, but I feel they often overlook how to secure your host machine in a production environment.

This post describes one possible setup that is more restrictive about external access and works well for projects that run all of their services in a Docker environment.

The setup described here is what I've used at the place I work for a Docker-based service composed of a Redis cache, a MongoDB database, a NodeJS + Express RESTful API and a web application written in ReactJs. The motivation to write this post is that, one day, we discovered someone had breached our server and injected curl commands into some Redis keys. Minor stuff in the end: no data was leaked and no permissions were revoked. However, it had me intrigued and forced me to review how we had configured our stack.

Lesson 0: Set up authentication for all persistence layers

Redis, Mongo, MySQL, or whatever other option: they all must require authentication before anything can write to the system. We had this from the start, but I see too many resources neglecting this step. You can go (and you probably should go) with more complex options, like token-based or key-based authentication. The bare minimum is something as simple as a username and a password. Oh, and please do not choose 123456 as your password.
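
As an illustration, here is a minimal docker-compose sketch, with placeholder service names and environment variables (not our actual stack), that enables authentication on both Redis and MongoDB:

    services:
        redis:
            image: redis:7
            # Redis rejects every command until the client runs AUTH with this password.
            command: ["redis-server", "--requirepass", "${REDIS_PASSWORD}"]
        mongo:
            image: mongo:6
            environment:
                # The official image creates this root user on first start,
                # so unauthenticated clients cannot read or write.
                MONGO_INITDB_ROOT_USERNAME: ${MONGO_USER}
                MONGO_INITDB_ROOT_PASSWORD: ${MONGO_PASSWORD}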

Lesson 1: Close all ports you don't need

In our initial approach, we were simply inexperienced. We had all container ports open on the host's network, so anyone could access our services if they used the default ports. If you publish your containers' ports like that, someone with bad intentions can easily discover your services and attempt to inject or brute-force anything. We discussed it and found out we only needed ports 80, 443 and 22 (HTTP, HTTPS and SSH, respectively) open to the outside. A quick way to check what is currently published is shown below.
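
For example, two commands I find handy to see what is actually reachable (assuming a Unix host with Docker installed; ss ships with iproute2):

    # Which ports each container publishes on the host.
    docker ps --format "table {{.Names}}\t{{.Ports}}"

    # Every socket the host itself is listening on.
    sudo ss -tlnp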

Lesson 2: Setting the containers' ports properly

After figuring out we had been hacked, we started spending more time monitoring our services, only to find out someone was attempting to run queries against our MongoDB. This was strange, because we had allegedly closed our ports. Yes, the service required authentication, but it was annoying, and nothing was happening to the attackers when something should have!

Here are some steps on how to set docker ports properly on your host machine.

Note: remember that every case has its own specifics and this is not the only way to set things up. This is just a suggestion that I felt more secure using.

Docker-compose port configuration

Most tutorials and guides overlook Docker networking and don't fully explain what is happening in the background on your host machine. A generic compose file looks like this:

    services:
        webapp:
            ...
            ports:
                - "2019:8080"
        database:
            ...
            ports:
                - "3306:3306"

This configuration publishes both services on all of your host's network interfaces (0.0.0.0), which allows remote connections. For instance, anyone can connect to the database using vps-ip:3306. Here's how we figured out we should configure the services instead:

    services:
        webapp:
            ...
            ports:
                - "172.17.0.1:2019:8080"
        database:
            ...
            ports:
                - "172.17.0.1:3306:3306"

When you install Docker on your machine, it automatically adds a network interface, which you can check by running ip a on a Unix machine. Notice there is one entry whose id is docker0. Its IP is typically the one written above, and binding published ports to it does not expose them on the host's public network.
In summary, the configuration above ensures your services are reachable by your host but never from the outside. You can test it by attempting to connect to the service from another machine with vps-ip:3306, as before; it will not connect.
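
A quick sanity check, using the example IP and port from above (adjust them to your own setup):

    # On the VPS: the database should be bound to the docker0 bridge IP only.
    sudo ss -tlnp | grep 3306

    # From any other machine: the connection should now be refused or time out.
    nc -zv vps-ip 3306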

Lesson 3: Configure your firewall and fail2ban aggressively

As mentioned above, one of the most annoying things was checking the logs and seeing people attempting to run things. Not only does this motivate them to continue, it also leaves you restless knowing someone keeps trying to hack you. So, my solution was to set up the firewall and fail2ban aggressively.

I will not expand much on iptables because I found this article from Digital Ocean that contains enough information about it.

In summary, these are the most important commands:

  • sudo ufw default deny incoming - Blocks all incoming external traffic by default.
  • sudo ufw allow ssh - Allows SSH connections, so you don't lock yourself out.
  • sudo ufw app list - Confirms the Nginx application profile is available, followed by sudo ufw allow 'Nginx Full' - Allows HTTP and HTTPS connections from the outside. The full sequence is sketched below.
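
Putting it all together, a sketch of the whole sequence (assuming nginx was installed from a package that registers the 'Nginx Full' ufw profile):

    sudo ufw default deny incoming   # drop all incoming traffic by default
    sudo ufw default allow outgoing  # still let the server reach the outside
    sudo ufw allow ssh               # keep port 22 open so you don't lock yourself out
    sudo ufw allow 'Nginx Full'      # open ports 80 and 443 for nginx
    sudo ufw enable                  # activate the firewall
    sudo ufw status verbose          # double-check the resulting rules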

As for fail2ban, I decided to ban for one week after 5 retries. This is what I added to a new file at /etc/fail2ban/jail.d/ssh.local:

    [sshd]
    enabled = true
    # Ban for one week (604800 seconds) after 5 failed attempts.
    maxretry = 5
    bantime = 604800

The next step was to restart the fail2ban service to apply the rules, using fail2ban-client restart. Beware that many more settings can be added to fail2ban, some of which are worth looking into.
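
To confirm the jail is active afterwards and see which IPs are currently banned, you can check the jail status (this assumes the default sshd jail name):

    sudo fail2ban-client status sshd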

Finally, you might be wondering how the services communicate with each other, which leads us to the next topic:

Lesson 4: Reverse proxy all the things!

I am admittedly a fan of this solution. There are other ways to configure your HTTP server, in this case NGiNX (referred to as nginx from here on). Here's how you could tell nginx to listen for your application:

    server {
        listen 80;
        # The ssl listener below only works once certificates are configured
        # (see the certbot-managed version further down).
        listen 443 ssl;

        access_log /var/log/nginx/web.access.log;
        error_log /var/log/nginx/web.error.log;

        server_name web.my_domain.app;
        include web.my_domain.app.conf;
    }


I always found it easier to maintain nginx configurations this way. There is no particular reason other than the fact that it decouples server-specific configuration from nginx-specific configuration: if I get bad gateways, nginx is not properly configured; conversely, if I get CORS errors, I know the site configuration is at fault.

When we talk about web security nowadays, we always have to include SSL certificates. I always recommend using certbot and Let's Encrypt. certbot's website really has all the instructions, and the tool does a lot of things for you; I wouldn't bother doing any of that manually, though I understand some projects may require manual work. Normally, I follow certbot's documentation for the Nginx HTTP server. Finally, do not forget to add a cronjob to automatically renew the certificates (a sketch follows the configuration below). There is some discussion as to how often you should run it, but just pick something that suits your needs.

In the end, this is how your /etc/nginx/sites-available/default should look if you opt to forward HTTP traffic to HTTPS (your end users will definitely appreciate it):

    server {
        listen 443 ssl;
        server_name web.my_domain.app;

        access_log /var/log/nginx/web.access.log;
        error_log /var/log/nginx/web.error.log;

        ssl_certificate /etc/letsencrypt/live/web.my_domain.app/fullchain.pem; # managed by certbot
        ssl_certificate_key /etc/letsencrypt/live/web.my_domain.app/privkey.pem; # managed by certbot
        include ssl.conf; # managed by certbot

        include web.my_domain.app.conf;
    }

    server {
        if ($host = web.my_domain.app) {
            return 301 https://$host$request_uri;
        } # managed by certbot

        listen 80;
        server_name web.my_domain.app;
        return 404; # managed by certbot
    }
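
As for the renewal cronjob mentioned above, a minimal sketch (the file path and schedule are just an example; some certbot packages already install a systemd timer or cronjob that does this for you):

    # /etc/cron.d/certbot-renew: run twice a day; certbot only renews
    # certificates that are close to expiring.
    0 3,15 * * * root certbot renew --quiet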


And this is how your site configuration should look, under /etc/nginx/web.my_domain.app.conf:

    resolver 8.8.8.8;

    # CORS headers, returned on every response (including errors) thanks to "always".
    add_header 'Access-Control-Allow-Origin'   $http_origin                   always;
    add_header 'Access-Control-Allow-Methods'  'GET, PUT, POST, OPTIONS'      always;
    add_header 'Access-Control-Expose-Headers' 'authorization, Authorization' always;
    add_header 'Access-Control-Allow-Headers'  'Accept, Content-Type'         always;

    # Answer CORS preflight requests directly, without hitting the upstream.
    if ($request_method = 'OPTIONS') {
        return 204;
    }

    location / {
        # Tell the upstream who the real client is and how the request arrived.
        proxy_set_header        X-Real-IP          $remote_addr;
        proxy_set_header        X-Forwarded-For    $proxy_add_x_forwarded_for;
        proxy_set_header        X-Forwarded-Host   $host;
        proxy_set_header        X-Forwarded-Server $host;
        proxy_set_header        X-Forwarded-Port   443;
        proxy_set_header        X-Forwarded-Proto  https;
        proxy_intercept_errors  off;
        proxy_pass_request_headers on;

        # Forward to the container published on the docker0 bridge (Lesson 2).
        proxy_pass http://172.17.0.1:2019;
        proxy_http_version 1.1;

        # WebSocket support.
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

This is the configuration itself: it sets up the reverse proxy and configures the allowed headers and origins (to avoid CORS issues).
It ensures your services communicate only through the host's network and that the outside world can never reach them directly.

Last-minute checks

In case you want to confirm you only have the specified ports open, you can run netstat -plnt (or ss -tlnp) and check the output. You should only see ports 80, 443 and 22 listening on public interfaces.

Summary

In this post, we have seen how to set up Linux containers (through Docker) and microservices on a server. This setup hardens your services by denying external access to them. Internally, though, the containers still communicate with each other, as long as they are all on the same network (a minimal sketch follows below).
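
For reference, a minimal docker-compose sketch of a user-defined network (the network name is a placeholder, not from the original setup); containers attached to it can reach each other by service name without publishing any extra ports:

    services:
        webapp:
            ...
            networks:
                - backend
        database:
            ...
            networks:
                - backend

    networks:
        backend: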
You may be wondering how you can then reach your databases from applications like MySQL Workbench or Robo 3T, if all external traffic is blocked. You can do it using SSH tunnels, either with a username/password combo or, preferably, with properly set up SSH keys. Username/password combos are better than nothing, but they are fairly insecure. A tunnel sketch follows below.
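
For instance, a minimal sketch of such a tunnel to a MongoDB container, assuming it is published on the docker0 bridge like the services above (the username and ports are placeholders):

    # Forward local port 27017 to the container bound to 172.17.0.1 on the VPS,
    # then point Robo 3T (or any other client) at localhost:27017.
    ssh -N -L 27017:172.17.0.1:27017 deploy@vps-ip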

Thanks for reading and let me know in the comments what you would do differently.

Fun fact: we initially had 95 IPs banned. After the breach, and after configuring our services like this, we consistently see about 300 banned IPs, and the SSH service has seen a significant decrease in access attempts.
