How would you Dockerize a PHP app?

Stepan Vrany on May 05, 2019

This weekend I was trying to Dockerize a simple (stateless) PHP application. Or more specifically, it's mostly HTML with a few PHP files which proce…

I have Apache and PHP in one container, and my MySQL database in another container. Here's my Dockerfile for reference (it's for a Laravel app):

# Use an official PHP runtime as a parent image
FROM php:7.3-apache-stretch

ARG DEBIAN_FRONTEND=noninteractive

# Install any needed packages
RUN apt-get update -yqq && \
  apt-get install -yqq --no-install-recommends \
    apt-utils \
    openssl \
    libzip-dev zip unzip \
    git \
    mariadb-client
RUN docker-php-ext-configure zip --with-libzip
RUN docker-php-ext-install pdo_mysql zip
RUN docker-php-ext-configure gd
RUN docker-php-ext-install gd
RUN php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
RUN php -r "if (hash_file('sha384', 'composer-setup.php') === '48e3236262b34d30969dca3c37281b3b4bbe3221bda826ac6a9a62d6444cdb0dcd0615698a5cbe587c3f0fe57a54d8f5') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;"
RUN php composer-setup.php
RUN php -r "unlink('composer-setup.php');"
RUN mv composer.phar /usr/local/bin/composer

ENV APACHE_DOCUMENT_ROOT /var/www/html/public

RUN sed -ri -e 's!/var/www/html!${APACHE_DOCUMENT_ROOT}!g' /etc/apache2/sites-available/*.conf
RUN sed -ri -e 's!/var/www/!${APACHE_DOCUMENT_ROOT}!g' /etc/apache2/apache2.conf /etc/apache2/conf-available/*.conf
RUN a2enmod rewrite

USER root

# Clean up
RUN apt-get clean && \
    rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* && \
    rm /var/log/lastlog /var/log/faillog

RUN usermod -u 1000 www-data

And in case it's useful as well, here's the accompanying docker-compose.yml file:

version: '3'

services:
  my-api:
    build: .
    restart: always
    ports:
      - "80:80"
    container_name: my-api
    depends_on:
      - my-db
    volumes:
      - ./:/var/www/html:delegated

  my-db:
    image: mysql:5.7
    restart: always
    ports:
      - "3306:3306"
    container_name: my-db
    volumes:
      - ../db-data:/var/lib/mysql:cached

If you haven't already, take a look at Laradock. I didn't end up using it, but looking over their setup was very useful.

I'm no docker expert, but let me know if you have questions about the above, and if there's anything you see which can be improved.


If you don't require Apache you can further split your services by running nginx and php-fpm in different containers with necessary configurations.
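As a rough sketch of that split (the service names, image tags, and config path here are my own assumptions, not taken from the thread), a docker-compose file might look like this:

```yaml
# Hypothetical compose file: nginx serves static files and forwards
# PHP requests to a separate php-fpm container.
version: '3'

services:
  web:
    image: nginx:1.15
    ports:
      - "80:80"
    volumes:
      - ./:/var/www/html
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - php

  php:
    image: php:7.3-fpm
    volumes:
      - ./:/var/www/html
```

The nginx config would then use something like `fastcgi_pass php:9000;` to hand `.php` requests to the FPM container over the compose network.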

Also, your RUN statements can be combined into a single command by chaining them with `&&` (and `\` line continuations); that decreases the number of build layers and the final image size.

USER root is probably redundant since Docker images are built as the root user unless instructed otherwise.


You're right, it seems I didn't need USER root :)

How would I combine my RUN statements? Are you saying I can combine every single RUN statement in my Dockerfile, or just reduce the total number of RUN statements?

You can combine most of them simply by joining commands with a logical operator such as `&&`, using `\` for line continuations.

For example:

RUN apt-get update && apt-get install -y \
    aufs-tools \
    automake \
    build-essential \
    curl \
    dpkg-sig \
    libcap-dev \
    libsqlite3-dev \
    mercurial \
    reprepro \
    ruby1.9.1 \
    ruby1.9.1-dev \
    s3cmd=1.1.* \
 && rm -rf /var/lib/apt/lists/*

This makes the file more readable and reduces the number of layers. While you're here, check the official best practices for writing Dockerfiles.


Is it a good idea to run a database inside a Docker container? I have read in several places that Docker containers should be stateless.


Docker has the ability to latch on to persistent volumes.

At work we use cloudstor to reference a persistent EFS volume for assets. The DB is just an RDS connection.

As to whether or not it's good practice though I'm not sure.


Running a database in a container is fine, as long as it doesn't have heavy load (e.g., tens of thousands of concurrent users). There are issues with networking and resource contention at high usage levels.

For most use cases, though, it's fine to run a database in a container, as long as you have the data stored in a volume mount.
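For example, a named volume (a sketch; the volume name `db-data` is my own) keeps MySQL's data directory outside the container's writable layer, so the data survives the container being recreated:

```yaml
# Hypothetical snippet: the mysql service writes its data to a
# Docker-managed named volume instead of the container filesystem.
version: '3'

services:
  db:
    image: mysql:5.7
    volumes:
      - db-data:/var/lib/mysql

volumes:
  db-data:
```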


I use a combination of nginx and devilbox/php-fpm. Although these 2 services are tightly dependent on each other, keeping them decoupled gives me the flexibility of swapping out the web server or the PHP version, and I'm not dependent on a single project releasing their latest version of web server + php combo.

For development, I mount my local folder onto the containers as a volume.

For production, I ship my containers with the source code, so this is a combination of COPY <my-folder> <target-folder-in-container> and VOLUME ["/var/www/my-container"] in my PHP-FPM Dockerfile, which exposes that volume to be shared with my nginx container. This way there is only one copy of the source code rather than a duplicate in each container (halving the disk usage).
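A minimal sketch of that pattern (the paths are illustrative, not the commenter's actual layout):

```dockerfile
# Hypothetical php-fpm Dockerfile: bake the source into the image,
# then declare it as a volume so an nginx container can mount it
# via --volumes-from (or a shared compose volume).
FROM php:7.3-fpm

COPY . /var/www/my-container
VOLUME ["/var/www/my-container"]
```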


Hi @mstrsobserver, I've got a little experience in dockerizing a PHP app. I've put a distributed PHP app into production on OpenShift (based on k8s), and we chose to split each "microservice" into 2 containers to respect the Docker philosophy (one process per container):

  • A web container with Apache and static files
  • A PHP-FPM container
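In that layout, the web container's Apache vhost would typically hand `.php` requests over FastCGI to the PHP-FPM container, with something like this (a sketch; the `php-fpm:9000` host and port are assumptions):

```apache
# Hypothetical vhost fragment: mod_proxy_fcgi forwards PHP requests
# to the php-fpm container, assumed reachable as "php-fpm" on 9000.
<FilesMatch "\.php$">
    SetHandler "proxy:fcgi://php-fpm:9000"
</FilesMatch>
```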

Even though it works, I have the feeling it's not optimal, as the TCP connection between Apache and PHP-FPM is not kept alive.
Moreover, if you need session affinity, be aware that it's not possible on k8s if you have a depth greater than 1 in your container architecture.

In the end, I'd suggest putting the web server (Apache or Nginx) and the PHP server in the same container.

As an alternative, you can also get rid of the web server entirely by using a standalone server library like Swoole.
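For instance (a sketch with hypothetical paths; `server.php` would be your own script bootstrapping a Swoole HTTP server), the image can then run the PHP process directly:

```dockerfile
# Hypothetical Dockerfile: no Apache/nginx; with the Swoole
# extension the PHP process itself listens on port 80.
FROM php:7.3-cli

RUN pecl install swoole \
 && docker-php-ext-enable swoole

COPY . /app
EXPOSE 80
CMD ["php", "/app/server.php"]
```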


I'm very much a beginner with Docker so take my input with that in mind. I think the benefits of using option 1 would be that you're isolating your PHP configurations and such into a single container instead of coupling multiple containers/other services like in option 3.


We are using instead of NGINX+FPM. As a result we have single Docker container with single application bound to standard 80 port.


Hmm ... perhaps this is the answer for the Kubernetes-specific scenario. What do you think? I like the fact you don't need to build two separate images with the same code.
