
Chris Cornutt
Securing Credentials for PHP with Docker

*Previously posted on my site, Websec.io*

In a previous post I covered one method you can use to secure the credentials in your PHP application. In that article, I provided an example specific to the use of Apache and its envvars handling to read in values and pass them along to the waiting PHP process as $_ENV variables. This in combination with the psecio/secure_dotenv library allowed you to pass along an encryption key that could be used to decrypt values from the application's .env file.

While this works for a flat Apache and PHP environment, the world has moved beyond that basic setup and has moved to using another popular environment building tool: Docker. Docker makes it possible to create environments out of interconnected containers that are specialized for specific purposes. This makes it easier to replace technologies in your stack with new versions and makes the containers more reusable. A simple configuration file is all it takes to set up an environment and can be used to rebuild it at any time.

So, if we move forward with current technology, we need a way to secure our credentials in a Docker-based environment that makes use of PHP-FPM and Nginx. Fortunately, there's a relatively simple way to handle this with just a few configuration changes. To make the setup even more robust, I'm also going to show you how to integrate Vault into the flow for secrets storage.

Vault is a project from Hashicorp that is specifically designed to protect secret values. To be able to access the values, you "unlock" the service and can then fetch your secret values via an API. Once complete, you can "lock" the service back up, preventing anyone without lock/unlock access from reaching the secret values.
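That lock/unlock flow looks roughly like this when driven from the Vault CLI (a sketch only; the unseal key, secret path, and values are placeholders, and the article itself will use the HTTP API instead):

```shell
# "unlock" the Vault so secrets become readable (key value is a placeholder)
vault operator unseal "$VAULT_KEY"

# write and read a secret through the KV backend
vault kv put secret/my-app db_password=example
vault kv get secret/my-app

# "lock" it back up when finished
vault operator seal
```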

Environment Overview

Before we dive into the configurations and files required to make this all happen, I wanted to take a brief look at the pieces involved here and how they'll fit together. I've mentioned some of them above in passing but here's the list all in one place:

  1. Docker to build and manage the environment
  2. Nginx to handle the web requests and responses
  3. PHP-FPM to parse and execute the PHP for the request
  4. Vault to store and manage the secrets

I'll also be making use of a simple Vault client, psecio/vaultlib, to make the requests to Vault for the secrets. With a combination of these technologies and a bit of configuration, building a working system isn't too difficult.

Protecting Credentials

There are several ways to get secrets into a Docker-based environment, some being more secure than others. Here's a list of some of these options and their pros and cons:

Passing them in as command-line options

One option that Docker allows is the passing in of values on the command-line when you're bringing up the container. For example, if you wanted to execute a command inside of a container and pass in values that become environment variables, you could use the -e option:

docker run -e "test=foo" whoami

In this command, we're executing the whoami command and passing in an environment variable of test with a value of foo. While this is useful, it's limited to only being used in single commands and not in the environment as a whole when it starts up. Additionally, when you run a command on the command-line, the command and all of its arguments could show up in the process list. This would expose the plain-text version of the variable to anyone with access to the server.
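You can see this exposure for yourself on any Linux machine. This sketch starts a process with a fake secret embedded in its arguments and then reads the world-readable command line out of /proc (the secret value is made up for the demonstration):

```shell
#!/bin/sh
# start a long-lived process with a (fake) secret embedded in its arguments
sh -c 'SECRET=my-secret-token; sleep 5; echo done' &
PID=$!
sleep 1

# any local user can read the full command line, secret included
tr '\0' ' ' < "/proc/$PID/cmdline"
echo

kill "$PID" 2>/dev/null
```

The output includes the `my-secret-token` string, which is exactly what an attacker browsing the process list would see.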

Using Docker "secrets"

Another option that's one of the most secure in the list is the use of Docker's own "secrets" handling. This functionality allows you to store secret values inside of an encrypted storage location but still allows them to be accessed from inside of the Docker containers. You use the docker secret command to set the value and grant access to the services that should have access. Their documentation has several examples of setting it up and how to use it in more real-world situations (such as a WordPress blog).

While this storage option is one of the better ones, it also comes with a caveat: it can only be used in a Docker Swarm situation. Docker Swarm is functionality built into Docker that makes it easier to manage a cluster of Docker instances rather than just one. If you're not using Swarm mode, you're out of luck on using this "secrets" storage method.
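For reference, the Swarm-based flow looks something like this (a sketch; the secret and service names are made up, and this requires a Swarm-enabled Docker host):

```shell
# enable Swarm mode (required for docker secret)
docker swarm init

# store the secret value in Docker's encrypted storage
printf 'my-db-password' | docker secret create db_password -

# grant a service access; the value appears inside the container
# at /run/secrets/db_password
docker service create --name app --secret db_password nginx:latest
```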

Hard-coding them in the docker-compose configuration

There's another option with Docker Compose to get values pushed into the environment as variables: settings in the docker-compose.yml configuration file. The downside is that the values sit in plain text in a file that's typically committed to version control, exposing them to anyone with access to the repository.
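For example, the values could be written directly into the service definition (don't do this with real credentials; the token value here is a placeholder):

```yaml
services:
  php:
    image: php:7-fpm
    environment:
      - VAULT_TOKEN=hard-coded-token-value
```

Anyone who can read this file sees the token.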

The Setup

Before I get too far along in the setup, I want to outline the file and directory structure of what we'll be working with. There are several configuration files involved and I wanted to call them out so they're all in place.

For the examples, we'll be working in a project1/ directory which will contain the following files:

  • docker-compose.yml
  • www.conf
  • site.conf
  • .env

Starting with Docker

To start, we need to build out the environment our application is going to live in. This is a job for Docker or, more specifically, Docker Compose. For those not familiar with Docker Compose, you can think of it as a layer that sits on top of Docker and makes building out environments simpler than having a bunch of Dockerfile configuration files lying around. It joins the different containers together as "services" and provides a configuration structure that abstracts away many of the manual commands that using the docker command-line tool directly would require.

In a Compose configuration file, you define the "services" that you want to create and various settings about them. For example, if we just wanted to create a simple server with Nginx running on port 8080, we could create a docker-compose.yml configuration like this:

version: '2'

services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"

Easy, right? You can create the same kind of thing with just Dockerfile configurations but Compose makes it a bit simpler.

The docker-compose.yml configuration

Using this structure we're going to create our environment that includes:

  • A container running Nginx that mounts our code/ directory to its document root
  • A container running PHP-FPM (PHP 7) to handle the incoming PHP requests (linked to the Nginx container)
  • The Vault container that runs the Vault service (linked to the PHP container)

Here's what that looks like:

version: '2'

services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
    volumes:
      - ./code:/code
      - ./site.conf:/etc/nginx/conf.d/site.conf
    links:
      - php

  php:
    image: php:7-fpm
    volumes:
      - ./www.conf:/usr/local/etc/php-fpm.d/www.conf
      - ./code:/code
    environment:
      - VAULT_KEY=${VAULT_KEY}
      - VAULT_TOKEN=${VAULT_TOKEN}
      - ENC_KEY=${ENC_KEY}

  vault:
    image: vault:latest
    links:
      - php
    environment:
      - VAULT_ADDR=http://127.0.0.1:8200

Let's walk through this so you can understand each part. First we create the web service - this is our Nginx container that installs from the nginx:latest image. It then defines the ports to use, mapping port 8080 on the host machine to port 80 inside the container (the default port for HTTP). The volumes section defines two things to mount from the local system into the container: our code/ directory and the site.conf that's copied over to the Nginx configuration path of /etc/nginx/conf.d/site.conf. Finally, in the links section, we tell Docker that we want to link the web and php containers so they're aware of each other. This link makes it possible for the Nginx configuration to call PHP-FPM as a handler for *.php requests. The contents of the site.conf file are explained in a section later in this article.

Next is the php service. This service installs from the php:7-fpm image, loading in the latest version of PHP-FPM that uses a 7.x version. Again we have a volumes section that copies over the code/ to the container but this time we're moving in a different configuration file: the www.conf configuration. This is the configuration PHP-FPM uses when processing PHP requests. More on this configuration will be shared later too.

What about the environment settings in the php service, you might be asking. Don't worry, I'll get to those later; they're key to how values get pushed from Docker into the service containers for later use.

Finally, we get to the vault service. This service uses the vault:latest image to pull in the latest version of the Vault container and runs the setup process. There's also a link over to the php service so that Vault and PHP can talk. The last part there, the environment setting, is just a Vault-specific setting so that we know a predictable address and port to access the Vault service from PHP.
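With the docker-compose.yml in place, bringing the whole environment up and checking that it responds is just a couple of commands (a sketch; run these from the project1/ directory):

```shell
# build and start all three services in the background
docker-compose up -d

# confirm the containers are running and the web port is mapped
docker-compose ps
curl -I http://localhost:8080/
```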

The site.conf configuration (Nginx)

I mentioned this configuration before when walking through the docker-compose.yml configuration, but let's get into a bit more detail. First, here's the contents of our site.conf:

server {
    index index.php index.html;
    server_name php-docker.local;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /code;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}

If you've ever worked with PHP-FPM and Nginx before, this configuration probably looks pretty similar. It sets up the server configuration (with the hostname of php-docker.local; point this at 127.0.0.1 in /etc/hosts) to hand off any requests for .php scripts to PHP-FPM via FastCGI. Our index setting lets us use either an index.php or index.html file for the base without having to specify it in the URL. Pretty simple, right?

When we fire up Docker Compose this configuration will be copied into the container at the /etc/nginx/conf.d/site.conf path. With that settled, we'll move on to the next file: the PHP-FPM configuration.

The www.conf configuration (PHP-FPM)

This configuration sets up how the PHP-FPM process behaves when Nginx passes the incoming request over to it. I've reduced down the contents of the file (removing extra comments) to help make it clearer here. Here are the contents of the file:

[www]
user = www-data
group = www-data

; listen on all interfaces so the linked nginx container can reach php:9000
listen = 9000

pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3

clear_env = no

env[VAULT_KEY] = $VAULT_KEY
env[VAULT_TOKEN] = $VAULT_TOKEN
env[ENC_KEY] = $ENC_KEY

While most of this configuration is default settings, there are a few things to note here, starting with the clear_env line. PHP-FPM, by default, will not import any environment variables that were set when the process started up. Setting clear_env to no tells it to keep those values and make them accessible to the PHP processes it manages. In the next few lines, a few values are manually defined with the env[] directive. These are variables that come from the environment and are then passed along to the PHP process as $_ENV values.

If you're paying attention, you might notice how things are starting to line up between the configuration files and how environment variables are being passed around.
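The same inheritance can be demonstrated outside of Docker: a child process sees whatever environment its parent exports, which is exactly how the values hop from Compose to PHP-FPM to the PHP worker. A minimal sketch with a placeholder token:

```shell
#!/bin/sh
# the "parent" (the shell here; the container runtime in Docker) exports the value
export VAULT_TOKEN="dummy-token"

# the "child" (the PHP-FPM worker in Docker) inherits it automatically;
# single quotes ensure the variable expands in the child, not here
sh -c 'echo "child sees: $VAULT_TOKEN"'
```

This prints `child sees: dummy-token`.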

This configuration will be copied into place by Compose to the /usr/local/etc/php-fpm.d/www.conf path. With this file in place, we get to the last piece of the puzzle: the .env file.

The .env configuration (Environment variables)

One of the handy things about Docker Compose is its ability to read from a default .env file when the build/up commands are run and automatically import them. In this case we have a few settings that we don't want to hard-code in the docker-compose.yml configuration and don't want to hard-code in our actual PHP code:

  • the key to seal/unseal the Vault
  • the token used to access the Vault API
  • the key used for the encryption of configuration values

We can define these in our .env file in the base project1/ directory:

VAULT_KEY=[ key to use for locking/unlocking ]
VAULT_TOKEN=[ token to use for API requests ]
ENC_KEY=[ key to use for encrypting configuration values ]

Obviously, you'll want to replace the [...] strings with your values when creating the files.
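Under the hood, Compose reads simple KEY=value lines from that file. You can mimic the behavior in plain shell to see what gets exported (a sketch with placeholder values; this is an approximation of what Compose does, not its actual parser):

```shell
#!/bin/sh
# create a sample .env with placeholder values
cat > .env <<'EOF'
VAULT_KEY=example-unseal-key
VAULT_TOKEN=example-api-token
EOF

# export every assignment in the file, roughly what Compose does internally
set -a
. ./.env
set +a

echo "$VAULT_KEY"
```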

NOTE: DO NOT use the root token and key in a production environment. Using it here is only for example purposes, to avoid getting into further setup and configuration of other credentials on the Vault instance. For more information about authentication method options in Vault, check out this page in their manual.

One of the tricky things to note here is that, when you (re)build the Vault container, it starts from scratch and will drop any users you've created (and even reset the root key/token). The key here is to grab these values once the environment is built, put them into the project1/.env and then rebuild the php service to pull the new environment values in:

docker-compose up -d --build php
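If you're running the stock vault:latest image in its dev mode, it prints the generated unseal key and root token to its log on startup, so one way to grab them is to filter the service log (a sketch; the exact log wording can vary between Vault versions):

```shell
# pull the generated credentials out of the Vault container's startup log
docker-compose logs vault | grep -E 'Unseal Key|Root Token'
```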

It's all about the code

Alright, now that we've worked through the four configuration files needed to set up the environment we need to talk about code. In this case, it's the PHP code that lives in project1/code/. Since we're going to keep this super simple, the example will only have one file: index.php. The basic idea behind the code is to be able to extract secrets values from the Vault server that we'll need in our application. Since we're going to use the psecio/vaultlib library, we need to install it via Composer:

composer require psecio/vaultlib

If you run that on your local system in the project1/code/ directory, it will set up the vendor/ directory with everything you need. Since the code/ directory is mounted as a volume on the php service, it will pull it from the local version when you make the web request.

With this installed, we can then initialize our Vault connection and set our first value:

<?php
require_once __DIR__.'/vendor/autoload.php';

$accessToken = $_ENV['VAULT_TOKEN'];
$baseUrl = 'http://vault:8200';

$client = new \Psecio\Vaultlib\Client($accessToken, $baseUrl);

// If the vault is sealed, unseal it
if ($client->isSealed() == true) {
    $client->unseal($_ENV['VAULT_KEY']);
}

// Now set our secret value for "my-secret"
$result = $client->setSecret('my-secret', ['testing1' => 'foo']);
echo 'Result: '.var_export($result, true);


Now if you make a request to the local instance on port 8080 and all goes well, you should see the message "Result: true". If you see exceptions, there might be something wrong with the container build. You can use docker-compose down to destroy all of the current instances and then docker-compose build; docker-compose up to bring them all back up. If you do this, be sure to swap out the Vault token and key and rebuild the php service.

In the code above we create an instance of the Psecio\Vaultlib\Client and pass in our token pulled from an environment variable. This variable exists because of a few special lines in our configuration file. Here's the flow:

  1. The values are set in the .env file for Docker to pull in.
  2. Those values are pushed into the php container using the environment section in the docker-compose.yml configuration.
  3. The PHP-FPM configuration then imports the environment variables and makes them available for use in the $_ENV superglobal.

These secrets exist in-memory in the containers and don't have to be written to a file inside of the container itself where they could potentially be compromised at rest. Once the Docker containers have started up, the .env file can be removed without impacting the values inside of the containers.

The tricky part here is that, if you remove the .env file once the containers are up and running, you'll need to put it back if there's ever a need to run the build command again.

But why is this good?

I started this article off by giving examples of a few methods you could use for secret storage when Docker is in use but they all had rather large downsides. With this method, there's a huge plus that you won't get with the other methods: the secrets defined in the .env file will only live in-memory but are still accessible to the PHP processes. This provides a pretty significant layer of protection for them and makes it more difficult for an attacker to access them directly.

I will say one thing, however. Much like the fact that nothing is 100% secure, this method isn't either. It does protect the secrets by not requiring them to be sitting at rest somewhere but it doesn't prevent the $_ENV values from being accessed directly. If an attacker were able to perform a remote code execution attack - tricking your application to run their code - they would be able to access these values.

Unfortunately, because of the way that PHP works there's not a very good built-in method for protecting values. That's why Vault is included in this environment. It's designed specifically to store secret values and protect them at rest. By only passing in the token and key to access it, we're reducing the risk level of the system overall. Vault also includes controls to let you fine-tune the access levels of your setup. This would allow you to do something like creating a read-only user your application can use. Even if there was a compromise, at least your secret values would be protected from change.
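As a sketch of that fine-tuning, a Vault policy granting read-only access to the secret used above might look like this (the path assumes the version 1 KV backend mounted at secret/; a v2 mount would use secret/data/my-secret instead):

```hcl
# read-only access to the application's secret; no update or delete
path "secret/my-secret" {
  capabilities = ["read"]
}
```

The policy would be loaded with vault policy write and attached to a token the application uses in place of the root token.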

Hopefully, with the code, configuration and explanation I've provided here, you have managed to get an environment up and running and can use it to test out your own applications and secrets management.
