
Development environment for Elixir + Phoenix with Docker and Docker-compose

Aleksi Holappa ・ Updated ・ 6 min read

I've been following Elixir, Phoenix and Docker for two years now. I started with small demos and experiments. Back then, configuring Elixir and Phoenix for a Docker environment was a pain, especially the runtime configuration. There were at least 20 different tutorials on how to configure Elixir and Phoenix for Docker and Docker-compose. Each of them worked, but some were outdated or overly complicated.

Today I had some spare time to dive back into this problem and configure a proper hot-reloaded local development environment for Elixir + Phoenix using Docker and Docker-compose.

Prerequisites

In this article I'm not going to go through installing these tools, but below is a list of what is needed.

  • Docker
  • Docker-compose
  • Elixir (I used latest 1.10.4)
  • Phoenix framework (I used latest 1.5.4)
  • Hex

Creating Phoenix application

Once you have everything installed, you're ready to generate a new Phoenix project (the rest of this article assumes the name myapp):

$ mix phx.new myapp

There are a few flags you can pass to this command, but the defaults cover everything we need (Ecto, webpack, etc.). More on the phx.new documentation.
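For reference, here are a few of the flags I find useful. Flag names are as of Phoenix 1.5; run mix help phx.new to see the full, authoritative list.

```shell
# Generate a project without Ecto (no database layer)
mix phx.new myapp --no-ecto

# Generate an umbrella project (separate apps for core and web)
mix phx.new myapp --umbrella

# Pick the database adapter explicitly (postgres is the default)
mix phx.new myapp --database postgres
```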

Deps and node_modules

During the project bootstrap, mix executes deps.get and deps.compile to fetch and compile the needed dependencies, and runs npm install to fetch the needed JavaScript packages.

This is problematic, since we want to run our program/web server in a container, not on the bare host system. An Elixir application must be compiled on the same system architecture that the release binary will run on. For now, it's safe to remove the deps and node_modules folders from the project structure (the node_modules folder lives inside the assets folder).
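From the project root, this one-liner removes the host-compiled artifacts so everything gets rebuilt inside the container (I also include _build, which holds compiled BEAM files):

```shell
# Remove artifacts compiled on the host; they will be recreated
# inside the container on the first `docker-compose up`
rm -rf deps assets/node_modules _build
```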

Now let's dive into setting up the Docker configuration for our newly created application!

Docker and Docker-compose

In this section we will create the following files:

  • Dockerfile
  • docker-compose.yml
  • .env

Dockerfile - used to build the Docker image which holds our application, its dependencies and the needed tools.

docker-compose.yml - a YAML file defining our services, volumes, etc.

.env - holds the environment variables for the application.

Dockerfile

Below you can see the contents of our Dockerfile.

FROM bitwalker/alpine-elixir-phoenix:latest

WORKDIR /app

COPY mix.exs .
COPY mix.lock .

RUN mkdir assets

COPY assets/package.json assets
COPY assets/package-lock.json assets

CMD mix deps.get && cd assets && npm install && cd .. && mix phx.server

On the first line we define the base image that our image is built on. As you can see, we're using bitwalker/alpine-elixir-phoenix, which ships with Elixir, Phoenix and the other tools the application needs.

On the next few lines we define the working directory, copy files and create one new directory.

The CMD line is executed when we fire up the container. It chains a few commands for fetching deps, installing npm packages and starting the server. Those commands could live in a .sh script, but I left them inline to clarify what is executed and in which order.
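If you prefer moving those commands into a script, a minimal sketch could look like this (the name entrypoint.sh is my choice, nothing the base image expects):

```shell
#!/bin/sh
# Fail fast if any step errors out
set -e

# Fetch Elixir deps, install JS packages, then start the dev server
mix deps.get
cd assets && npm install && cd ..
exec mix phx.server
```

You would then COPY the script into the image and point CMD at it instead of the inline command chain.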

docker-compose.yml

version: '3.6'
services:
  db:
    environment:
      PGDATA: /var/lib/postgresql/data/pgdata
      POSTGRES_PASSWORD: postgres
      POSTGRES_USER: postgres
      POSTGRES_HOST_AUTH_METHOD: trust
    image: 'postgres:11-alpine'
    restart: always
    volumes:
      - 'pgdata:/var/lib/postgresql/data'
  web:
    build: .
    depends_on:
      - db
    environment:
      MIX_ENV: dev
    env_file:
      - .env
    ports:
      - '4000:4000'
    volumes:
      - .:/app
volumes:
  pgdata:

Here we define the docker-compose file version, the needed services and the volumes to be created.

As you can see, two services (containers) are created: db and web. db is the database container used by the web container. The image property defines the image, which is postgres:11-alpine. You could use a newer version, for example PostgreSQL 12; it really depends on which version you are going to use in the production environment. I recommend using the same versions across environments to minimize infrastructure-related problems.

More explanation of the properties can be found in the docker-compose documentation.
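One caveat worth knowing: depends_on only orders container startup, it does not wait until PostgreSQL actually accepts connections. Ecto's connection pool retries on its own, so this rarely bites in practice, but if you want to declare readiness explicitly, a sketch of a healthcheck on the db service could look like this (pg_isready ships with the postgres image):

```yaml
  db:
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5
```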

.env

DATABASE_URL=postgres://postgres:postgres@db:5432/myapp_dev

For now there is only the environment variable for the database URL. Later, as your application grows, you will store more application environment variables in this file.

Fire it up

First we need to build our Docker image:

$ docker-compose build

If you did everything correctly, the image should build relatively quickly. Now the image is ready, but before starting we need to configure our application to read the database URL from an environment variable, so let's do that.

Open config/dev.exs and change the content from this:

use Mix.Config

# Configure your database
config :myapp, Myapp.Repo,
  username: "postgres",
  password: "postgres",
  database: "myapp_dev",
  hostname: "localhost",
  show_sensitive_data_on_connection_error: true,
  pool_size: 10

# For development, we disable any cache and enable
# debugging and code reloading.
#
# The watchers configuration can be used to run external
# watchers to your application. For example, we use it
# with webpack to recompile .js and .css sources.
config :myapp, MyappWeb.Endpoint,
  http: [port: 4000],
  debug_errors: true,
  code_reloader: true,
  check_origin: false,
  watchers: [
    node: [
      "node_modules/webpack/bin/webpack.js",
      "--mode",
      "development",
      "--watch-stdin",
      cd: Path.expand("../assets", __DIR__)
    ]
  ]

# ## SSL Support
#
# In order to use HTTPS in development, a self-signed
# certificate can be generated by running the following
# Mix task:
#
#     mix phx.gen.cert
#
# Note that this task requires Erlang/OTP 20 or later.
# Run `mix help phx.gen.cert` for more information.
#
# The `http:` config above can be replaced with:
#
#     https: [
#       port: 4001,
#       cipher_suite: :strong,
#       keyfile: "priv/cert/selfsigned_key.pem",
#       certfile: "priv/cert/selfsigned.pem"
#     ],
#
# If desired, both `http:` and `https:` keys can be
# configured to run both http and https servers on
# different ports.

# Watch static and templates for browser reloading.
config :myapp, MyappWeb.Endpoint,
  live_reload: [
    patterns: [
      ~r"priv/static/.*(js|css|png|jpeg|jpg|gif|svg)$",
      ~r"priv/gettext/.*(po)$",
      ~r"lib/myapp_web/(live|views)/.*(ex)$",
      ~r"lib/myapp_web/templates/.*(eex)$"
    ]
  ]

# Do not include metadata nor timestamps in development logs
config :logger, :console, format: "[$level] $message\n"

# Set a higher stacktrace during development. Avoid configuring such
# in production as building large stacktraces may be expensive.
config :phoenix, :stacktrace_depth, 20

# Initialize plugs at runtime for faster development compilation
config :phoenix, :plug_init_mode, :runtime

To this:

use Mix.Config

database_url = System.get_env("DATABASE_URL")

# Configure your database
config :myapp, Myapp.Repo,
  url: database_url,
  show_sensitive_data_on_connection_error: true,
  pool_size: 10

# For development, we disable any cache and enable
# debugging and code reloading.
#
# The watchers configuration can be used to run external
# watchers to your application. For example, we use it
# with webpack to recompile .js and .css sources.
config :myapp, MyappWeb.Endpoint,
  http: [port: 4000],
  debug_errors: true,
  code_reloader: true,
  check_origin: false,
  watchers: [
    node: [
      "node_modules/webpack/bin/webpack.js",
      "--mode",
      "development",
      "--watch-stdin",
      cd: Path.expand("../assets", __DIR__)
    ]
  ]

# Watch static and templates for browser reloading.
config :myapp, MyappWeb.Endpoint,
  live_reload: [
    patterns: [
      ~r"priv/static/.*(js|css|png|jpeg|jpg|gif|svg)$",
      ~r"priv/gettext/.*(po)$",
      ~r"lib/myapp_web/(live|views)/.*(ex)$",
      ~r"lib/myapp_web/templates/.*(eex)$"
    ]
  ]

# Do not include metadata nor timestamps in development logs
config :logger, :console, format: "[$level] $message\n"

# Set a higher stacktrace during development. Avoid configuring such
# in production as building large stacktraces may be expensive.
config :phoenix, :stacktrace_depth, 20

# Initialize plugs at runtime for faster development compilation
config :phoenix, :plug_init_mode, :runtime

As you can see, we deleted the SSL section, since we're not going to use an SSL-secured connection in local development. The actual change to look at is at the top of the file: we fetch the database URL for the database connection from the container environment.
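One small hardening I'd consider (my addition, not part of the generated config): System.get_env/1 returns nil when the variable is unset, which produces a confusing error at repo startup. A fallback keeps the app bootable outside Docker too:

```elixir
# Fall back to a host-local URL when DATABASE_URL is not set,
# e.g. when running mix phx.server directly on the host
database_url =
  System.get_env("DATABASE_URL") ||
    "postgres://postgres:postgres@localhost:5432/myapp_dev"
```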

Now we should be ready to start the application:

$ docker-compose up

It checks the deps, installs the npm packages and compiles the deps and source files, but wait a second...

[error] Postgrex.Protocol (#PID<0.3932.0>) failed to connect: ** (Postgrex.Error) FATAL 3D000 (invalid_catalog_name) database "myapp_dev" does not exist

We don't have a database to connect to! Let's fix that:

$ docker-compose run web mix ecto.create

After running this, you should be greeted with a pleasant one-line message telling you that the database for Myapp.Repo has been created. Wonderful!

Note that you can execute any mix command inside the container, since the mix tooling is available there. You can run migrations and seeds, or drop the database and set it up again for a clean start.
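For example (standard mix/Ecto tasks; ecto.reset is an alias that phx.new generates in mix.exs):

```shell
# Run pending migrations
docker-compose run web mix ecto.migrate

# Run the seed script
docker-compose run web mix run priv/repo/seeds.exs

# Drop, recreate, migrate and seed in one go
docker-compose run web mix ecto.reset
```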

Now execute:

$ docker-compose up

The application should start up with a working database connection, ready for development! Navigate to localhost:4000. If you now make changes to the source files, the running server picks them up, so there is no need to manually restart the containers!

Conclusion

This is a pretty simple setup with a minimal number of files to configure.

I'll write a follow-up post where we deploy the Elixir application to AWS. It will include configuring the application for a production environment, Terraform infrastructure and a little bit of CI magic.

Thanks for reading, hope you liked it! 🙂

Discussion

Matt Van Horn

Thanks for this - first post I was able to get through start to finish without having to change or debug something. I omitted all the node & asset stuff, because I'm making an API for use with an Elm front end, but it was easy and it worked. Looking forward to more about deploying.
(Still not so sure how Docker is supposed to make my life that much better, though.)

Aleksi Holappa Author

Glad this helped you out!

The main point of running your application locally in Docker is the container environment it offers: you're running the application in the same container environment locally and, perhaps, on a cloud provider's infrastructure.

It also makes the database setup easy: you don't have to worry about starting a local database on the host system, configuring it, updating it, etc. A third point: let's say you're integrating your application with some other app and a development Docker image is available for it. You can just spin up that image locally alongside your application. This makes debugging much easier if you run into problems, which is nice especially when some sort of microservice development is going on.

Matt Van Horn

I have a question about your workflow (or the canonical docker-based workflow) - are you supposed to run tests in a different docker container? Or just locally w/ no container? Or is there a way to change the env to TEST for this container? I like to do red-green-refactor TDD, so quick & easy tests are a big thing for me. I got my tests running locally, but I assume that's not optimal because my local postgres and erlang versions (and db credentials) are different than what is in the container.

Aleksi Holappa Author

Good question! I'm running my tests inside the local development container with the command docker-compose run -e MIX_ENV=test web mix test. That command overrides the MIX_ENV environment variable with the value "test", so when mix test is executed, the mix environment is test, not dev.

About the database URL: the smartest way to use an environment-defined database URL is to put a wildcard character in the database name, for example postgres://postgres:postgres@db:5432/myapp_?. In the config files we can then read the URL from the container environment and replace the ? with the actual environment name (dev, test, even prod). By doing this, you always have separate local databases for development and testing, and the development data does not affect the test cases.

Matt Van Horn

Is that wildcard substitution something I do manually w/ String.replace or interpolation, or is there something built into the config function that I am unaware of?
So far the only thing I got to work was:
# .env
DATABASE_URL=postgres://postgres:postgres@db:5432/my_app_

# dev.exs
database_url = "#{System.get_env("DATABASE_URL")}#{Mix.env()}"

Aleksi Holappa Author

That Mix.env() is one way to achieve it. Personally I would still use String.replace/3, since Elixir 1.9 introduced a new way to configure your application without a Mix dependency in your config files.

I would do it this way:

test.exs

database_url = System.get_env("DATABASE_URL")

config :myapp, Myapp.Repo,
  url: String.replace(database_url, "?", "test"),
  pool: Ecto.Adapters.SQL.Sandbox

.env

DATABASE_URL=postgres://postgres:postgres@db:5432/myapp_?
Robert

You should use either two containers, one for the frontend and one for the backend, or make use of multi-stage builds.
The problem is that nobody wants to deploy the front-end code with its whole node_modules folder. You have to build the front-end assets for the prod environment and make sure the containers are as tiny as possible.
Have a look at multi-stage builds in the Docker docs. You can also use one Dockerfile and reference the stages in docker-compose, e.g. a development stage for local use and a production stage for prod.

Aleksi Holappa Author

What you said is valid, but for deployments to cloud infrastructure (dev, staging, production). In the upcoming article I'll be doing a multi-stage build with Alpine images to keep the end result tiny, including only the tooling required by the BEAM and the application binary itself.

I just want to point out that this article is meant to help you set up a local development environment, not to deploy a small-footprint image to the cloud.

Robert

First of all, following Docker best practices, an image should not contain both frontend and backend dependencies. It doesn't matter whether it's for a local development environment or not.
Docker was also built to solve one big problem, the one called "it works on my machine". By defining different containers for local and non-local environments, you misuse the power of Docker.
The solution is to use two containers, one for the front-end dependencies and one for the backend, and to make use of multi-stage builds.
It doesn't matter whether it's local or not.
If you want, I could make a pull request on your GitHub repo to show you how it works.
What do you think?

Greetings

Aleksi Holappa Author

Well, if I use an Alpine-based image both for local development and for cloud deployments, how does that not solve the "it works on my machine" issue? I'm just curious.

I have one question: how am I going to do rapid local development if I build the images locally with a multi-stage Dockerfile? What happens to hot code reloading? Also, Elixir with the Phoenix framework is server-side rendered, so there is no separate frontend and backend as in, e.g., Node and React. You can of course separate the static assets and other browser-rendered parts from the backend into their own containers in cloud deployments, but in a local environment I don't see the real benefit.

I opened the repo, so you should be able to make a merge/pull request to it. You can find the link below; don't mind the naming of the repo. I'm waiting for the MR!

gitlab.com/hlappa/url-shortener/