You can also watch this article as a video, though the original video link is now outdated.
Edit 27.11.2021: Since the Phoenix 1.6.0 release removed Webpack, npm, etc. in favor of esbuild, the article has been updated accordingly.
Edit 25.12.2021: Changed the endpoint IP to one that works in a Docker environment.
I've been following Elixir, Phoenix and Docker for two years now. I started with small demos and experiments. Back then, configuring Elixir and Phoenix for a Docker environment was a pain, especially the runtime configuration. There were at least 20 different ways and tutorials for configuring Elixir and Phoenix with Docker and Docker-compose. Each of those tutorials worked, but some were outdated or too complicated.
Today I had some spare time to dive deep into this problem again and configure a proper hot-reloaded local development environment for Elixir + Phoenix using Docker and Docker-compose.
Prerequisites
In this article I'm not going to go through installing these tools, but below is a list of what is needed.
- Docker
- Docker-compose
- Elixir (I used latest 1.10.4)
- Phoenix framework (I used latest 1.5.4)
Creating Phoenix application
Once you have everything installed, you're ready to generate a new Phoenix project by executing:
$ mix phx.new app_name
There are a few flags you can pass along with this command, but the plain command covers everything we need (Ecto, webpack, etc.). More in the phx.new documentation.
Deps
During the project bootstrap, mix executes deps.get and deps.compile to fetch and compile the needed dependencies.
This is problematic since we want to run our program/web server in a container, not on the bare host system. An Elixir application needs to be compiled on the same system architecture the release binary will run on. For now, it's safe to remove the deps folder from the project structure.
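Concretely, that means deleting the fetched dependencies (and, while we're at it, any compiled artifacts) from the project root. A minimal sketch:

```shell
# From the project root: remove host-compiled dependencies and build
# artifacts; they will be fetched and compiled again inside the container.
rm -rf deps _build
```

They will reappear the first time the container runs mix deps.get.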
Now let's dive into setting up the Docker configuration for our newly created application!
Docker and Docker-compose
In this section we will be creating the following files:
- Dockerfile: used to build the Docker image which holds our application, dependencies and needed tools.
- docker-compose.yml: a YAML file to define our services, volumes etc.
- .env: holds our environment variables for the application.
Dockerfile
Below you can see the contents of our Dockerfile.
FROM bitwalker/alpine-elixir-phoenix:latest
WORKDIR /app
COPY mix.exs .
COPY mix.lock .
CMD mix deps.get && mix phx.server
On the first line we define the base image our own image is built on. As you can see, we're using bitwalker/alpine-elixir-phoenix, which holds Elixir, Phoenix and the other tools the application needs.
On the next lines we define our working directory and copy the mix files into it.
The CMD line is executed when we fire up the container. It chains two commands: one fetches the deps and the other starts the server.
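Since deps are only fetched at container start, the first start can take a while. A common variation (a sketch, not part of this article's setup) is to also fetch them at build time, so Docker's layer cache keeps them warm until mix.exs or mix.lock change:

```dockerfile
FROM bitwalker/alpine-elixir-phoenix:latest
WORKDIR /app
# Copy only the mix files first so this layer stays cached until they change
COPY mix.exs mix.lock ./
RUN mix deps.get
# Still re-check deps at startup, then run the dev server
CMD mix deps.get && mix phx.server
```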
docker-compose.yml
version: '3.6'
services:
  db:
    environment:
      PGDATA: /var/lib/postgresql/data/pgdata
      POSTGRES_PASSWORD: postgres
      POSTGRES_USER: postgres
      POSTGRES_HOST_AUTH_METHOD: trust
    image: 'postgres:11-alpine'
    restart: always
    volumes:
      - 'pgdata:/var/lib/postgresql/data'
  web:
    build: .
    depends_on:
      - db
    environment:
      MIX_ENV: dev
    env_file:
      - .env
    ports:
      - '4000:4000'
    volumes:
      - .:/app
volumes:
  pgdata:
Here we define the docker-compose file version and the services and volumes to be created.
As you can see, there are two services (containers) being created: db and web. db is our database container, which is used by the web container. On the image property we define our image, postgres:11-alpine. You could use a newer version, for example PostgreSQL 12; it really depends on which version you are going to use in the production environment. I recommend using the same versions across environments to minimize infrastructure-related problems.
More explanation of the properties can be found in the docker-compose documentation.
.env
DATABASE_URL=postgres://postgres:postgres@db:5432/myapp_dev
For now it holds only the environment variable for the database URL. Later on, as your application grows, you will store more application environment variables in this file.
Fire it up
First we need to build our Docker image:
$ docker-compose build
If you did everything correctly, the image should build relatively quickly. Now we have our image ready, but we still need to configure our application to read the database URL from that environment variable, so let's do that.
Navigate to config/dev.exs and open it up. Change your database configuration from this:
# Configure your database
config :myapp, Myapp.Repo,
  username: "postgres",
  password: "postgres",
  database: "myapp_dev",
  hostname: "localhost",
  show_sensitive_data_on_connection_error: true,
  pool_size: 10
To this:
# Configure your database
config :myapp, Myapp.Repo,
  url: System.get_env("DATABASE_URL"),
  show_sensitive_data_on_connection_error: true,
  pool_size: 10
NOTE: in the same file, change the endpoint IP from 127, 0, 0, 1 to 0, 0, 0, 0 so the server accepts connections from outside the container!
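For reference, the endpoint block in config/dev.exs looks roughly like this after the change (a sketch; the module name depends on your app name):

```elixir
# config/dev.exs
config :myapp, MyappWeb.Endpoint,
  # 0.0.0.0 binds to all interfaces, so the port published by
  # docker-compose is reachable from the host browser
  http: [ip: {0, 0, 0, 0}, port: 4000]
```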
As you can see, we now fetch the database URL for the database connection from the container environment.
Now we should be ready for our application start-up:
$ docker-compose up
It checks the deps and compiles the deps and source files, but wait a second...
[error] Postgrex.Protocol (#PID<0.3932.0>) failed to connect: ** (Postgrex.Error) FATAL 3D000 (invalid_catalog_name) database "myapp_dev" does not exist
We don't have a database to connect to! Let's fix that:
$ docker-compose run web mix ecto.create
After running this you should be greeted with a pleasant one-line message telling you that the database for YourApp.Repo has been created! Wonderful!
Note that you can execute any mix command inside the container, since it has the mix tooling available. You can run migrations and seeds, or drop the database and set it up again for a clean start, etc.
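For example (command sketches, assuming the default generated Ecto tasks and seed file):

```shell
$ docker-compose run web mix ecto.migrate             # run pending migrations
$ docker-compose run web mix run priv/repo/seeds.exs  # insert seed data
$ docker-compose run web mix ecto.reset               # drop, recreate, migrate, seed
```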
Now execute:
$ docker-compose up
The application should start up with a database connection, ready for development! Navigate to localhost:4000. Now, if you make changes to the source files, the running server picks them up automatically, so there is no need to restart the containers manually!
Conclusion
This is a pretty simple setup with a minimal number of files to support and configure.
I'll do a follow-up post where we deploy an Elixir application to AWS. It will include configuring the application for a production environment, Terraform infrastructure and a little bit of CI magic.
Thanks for reading, hope you liked it! 🙂
Top comments (19)
Thanks for this - first post I was able to get through start to finish without having to change or debug something. I omitted all the node & asset stuff, because I'm making an API for use with an Elm front end, but it was easy and it worked. Looking forward to more about deploying.
(Still not so sure how Docker is supposed to make my life that much better, though.)
Glad this helped you out!
The main point of running your application locally in Docker is the container environment it offers: you run the application in the same container environment locally and, perhaps, on a cloud service provider's infrastructure.
Also, it makes the database setup easy: you don't have to worry about starting up a local database on the host system, configuring it, updating it, etc. Third, let's say you're integrating your application with some other app and they have a development Docker image available; you can just spin that image up locally alongside your application. This makes debugging even easier if you run into problems, especially when some sort of microservice development is going on.
I have a question about your workflow (or the canonical docker-based workflow) - are you supposed to run tests in a different docker container? Or just locally w/ no container? Or is there a way to change the env to TEST for this container? I like to do red-green-refactor TDD, so quick & easy tests are a big thing for me. I got my tests running locally, but I assume that's not optimal because my local postgres and erlang versions (and db credentials) are different than what is in the container.
Good question! I'm running my tests inside the local development container with the command docker-compose run -e MIX_ENV=test web mix test. That command overrides the MIX_ENV environment variable with the value "test", so when mix test is executed, the Mix environment is set to test, not dev.
About the database URL: the smartest way to use an environment-defined database URL is to put a wildcard character in the database name, for example postgres://postgres:postgres@db:5432/myapp_?. This way, in the config files we can read that URL from the container environment and replace the ? with the actual environment (dev, test, even prod). By doing this, you will always have separate local databases for development and testing, and the development data does not affect the test cases.
Is that wildcard substitution something I do manually w/ String.replace or interpolation, or is there something built-in to the config function that I am unaware of?
So far the only thing I got to work was:
# .env
DATABASE_URL=postgres://postgres:postgres@db:5432/my_app_
# dev.exs
database_url = "#{System.get_env("DATABASE_URL")}#{Mix.env()}"
That Mix.env() is one way to achieve it. Personally I would still use String.replace/3, since Elixir 1.9 introduced a new way to configure your application without a Mix dependency in your config files. I would do it this way:
test.exs
.env
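The original snippets were not preserved in this copy of the thread; a sketch of what the String.replace/3 approach could look like (app and module names are assumptions):

```elixir
# config/test.exs
# Read the wildcard URL from the environment and fill in the env name
db_url =
  System.get_env("DATABASE_URL", "")
  |> String.replace("?", "test")

config :myapp, Myapp.Repo,
  url: db_url,
  pool: Ecto.Adapters.SQL.Sandbox
```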
You should use either two containers, one for the frontend and one for the backend, or make use of multi-stage builds.
The problem is that nobody wants to deploy the front-end code with its whole node_modules folder. You have to build the front-end assets for the prod environment and keep the containers as tiny as possible.
Have a look at "staged" builds in the Docker docs. You can also use one Dockerfile and reference the stages in docker-compose, e.g. the development stage locally and the production stage for prod.
What you said is valid, but for deployments to cloud infrastructure (dev, staging, production). In the upcoming article I'll be doing a multi-stage build with alpine images to keep the end result tiny, including only the tooling required by the BEAM and the application binary itself.
I just want to point out that this article is meant to help you set up a local development environment, not to deploy a small-footprint image to the cloud.
First of all, following Docker best practices, an image should not contain both frontend and backend dependencies. It does not matter whether it's for a local development environment or for non-local environments.
Also, Docker was built to solve one big problem, the one called "it works on my machine". By defining different containers for local and non-local use, you misuse the power of Docker.
The solution is to use two containers, one for the front-end dependencies and one for the backend, and make use of multi-stage builds.
It doesn't matter if it's local or non-local.
If you want, I could make a pull request on your GitHub repo to show you how it works.
What do you think?
Greetings
Well, if I use an alpine-based image both for local development and for cloud deployments, how does that not solve the "it works on my machine" issue? I'm just curious.
I have one question: how am I going to do rapid local development if I build the images locally with a multi-stage Dockerfile? What happens to hot code reloading? Also, Elixir with the Phoenix framework is server-side rendered, so there is no separate frontend and backend as in, e.g., Node and React. Nevertheless, you can separate the static assets and other browser-rendered stuff from the backend into their own containers in cloud deployments, but in the local environment I don't see the real benefit of it.
I opened the repo, so you should be able to make a merge/pull request to it. You can find the link below; don't mind the naming of the repo. I'm waiting for the MR!
gitlab.com/hlappa/url-shortener/
I ended up finding this here: evolvingdev.io/phoenix-local-devel...
The trick is adding this to your webpack.config.js:
Great article, but in my case I need a specific setup, Ubuntu bionic, Erlang/OTP 22 and Elixir 1.10.2, to be on par with the production server. From which Docker image should I start building mine? I've found hexpm/elixir:1.10.2-erlang-22.2-ubuntu-bionic-20200219, but it seems to lack many dev utils which are good to have in a development container.
Is it a plain Elixir application, or is Phoenix also involved?
If it's just a plain Elixir application, I would go with a basic Ubuntu image and build the development container from it; in your case that would be the bionic release. If Phoenix is involved, I would use Bitwalker's image, which is used in this article. For production deployments you can use a different image and package the application into an Ubuntu-based one. This slightly breaks the principle of having the same runtime in every environment (local, development, staging, production).
You can also run a Phoenix application in an Ubuntu-based image, but then you need to install all the Phoenix-related dependencies yourself.
But even in the second case, if you decide to use Bitwalker's image locally and Ubuntu for deployments, you would still have the same runtime in development, staging and production, and environment-related bugs could be spotted early when testing in development or staging.
As for the dev utils: if I were you, I would just install the needed tools into the Docker image. It will take a little longer to build the local development image and run it in a container.
Hey Aleksi, thanks for the reply! It's a Phoenix project which runs on a VPS (Ubuntu bionic, Erlang/OTP 22, Elixir 1.10.2), and I want a development environment as close to production as I can get, at least matching the OS, Erlang version and Elixir version.
It seems that I'll need to build my Docker container from an Ubuntu image...
In that case, yes. It is a bit heavy compared to alpine-based images, but totally doable!
Is this working? I have been trying for days and cannot access the localhost by any means. The weird things are:
Change the 127, 0, 0, 1 to 0, 0, 0, 0 in the config/dev.exs file.
You sir are a real hero. Thanks.
Seems that you already solved the problem. I updated the article accordingly. :)