Dockerizing a Simple Rails Application for Deployment

New to docker and docker compose? It can be a lot to figure out, and while there are a lot of great resources out there (at least 3 different articles on how to dockerize an existing Rails application!), many of them are fairly terse or assume a very high level of comfort.

For me, I knew the high-level why behind docker, and even conceptually how it worked, but when I got into the weeds, I quickly got lost. So, dear reader, I thought I'd take a stab at explaining how to dockerize a Rails app, trying to fill in the gaps from the resources I was able to take in, plus a lot of helpful guidance from various programming communities (shoutout to Dave Kimura on the Rails Link slack!).

What is docker?

Docker is a way to containerize applications so that they are portable and easy to run or deploy. The basic idea is that you can build an image based on some application code, which is then the basis for the container that later runs the code.
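To make that split concrete, here's roughly what the two steps look like on the command line (the rails-app image tag is just a made-up name for this sketch):

# Build an image from the Dockerfile in the current directory,
# tagging it "rails-app" so we can refer to it later.
docker build -t rails-app .

# Run a container based on that image, mapping port 3000 on the
# host to port 3000 inside the container.
docker run -p 3000:3000 rails-app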

The Dockerfile is the first thing you'll likely come across when trying to containerize an application. Docker has an excellent documentation site that goes over the details of the many options available, but what made the Dockerfile intelligible to me was to think of it as the compile step. The Dockerfile, in essence, is the recipe for how to assemble and set up your image, which is then run inside of a container. It's where you define things like environment variables, what other software should be available in the container when it's running, and so on. All of that is defined in your Dockerfile.

One of the things I had trouble with when trying to figure out docker was how I should set up the container after it's spun up. Some of the tasks I wanted to accomplish (like creating and migrating the database for a Rails app, for example) seemed to belong to the "setup" stage, and so I initially tried to put them in the Dockerfile. This was not the right way to go about it, though I was somewhat close. You see, the trouble is that when you are building an image, the image under construction doesn't (and shouldn't!) have access to any other container. That means you don't really have access to the database instance, as it isn't running! Remember, this is because the Dockerfile is the recipe for building the image, and can be thought of like compile time. It's not really compiling in the classical sense, but it's somewhat analogous. Something like setting up the database, then, is a run time concern: it requires that the database is available before we attempt to use it.

Well, Docker does provide a handy ENTRYPOINT option in the Dockerfile, and that is one of the ways you can solve this "it's a setup task, but it has to happen at run time" kind of issue. There are better tutorials out there than what I can offer, but all you need to do is include an entry point script inside the image, make sure it has execution privileges, and put your setup commands in there. We'll revisit ENTRYPOINT in depth later in this article.

So, that's the theory. Let's talk about how that applies to an actual Rails app.

Show me the Rails!

Let's talk a little bit about what sort of app we're going to be setting up. The application I have in mind is a Rails 7 monolith that uses a PostgreSQL database to store data. The application that serves as the sample for this post won't include Redis. As I said, it's a simple Rails app.

So, with that said, let's say we have a Rails app that fits the bill. You can run rails s and puma dutifully fires up in development mode, and you can visit the app at localhost:3000 and it talks to a PostgreSQL database and everything works as you expect - great! Let's see if we can dockerize it!

So, how do we go about containerizing this application? The first step is to set up a Dockerfile which describes the recipe to make the entree, as it were.

Dockerfile

Here is a sample file from my simple application, which we will talk through step by step:

# Specify base image
FROM ruby:3.0.2

# Add application code
ADD . /rails-app
WORKDIR /rails-app

# Install utilities
RUN apt-get update
RUN apt-get -y install nano

# Install dependencies
RUN bundle install

# Precompile assets - only required for non-API apps
RUN rake assets:precompile

# Set up env
ENV RAILS_ENV production
ENV RAILS_SERVE_STATIC_FILES true
ENV RAILS_LOG_TO_STDOUT true

# Set up entrypoint script to run setup tasks at container start
COPY ./bin/entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]

# Expose port
EXPOSE 3000

# Run server when container starts
CMD ["rails", "server", "-b", "0.0.0.0"]

Whew, that was a lot. Okay, so what does this file mean? Let's break it down.

FROM ruby:3.0.2

The first line is simple: it specifies the base image from which we build our final image. Here, because the application runs on Ruby 3.0.2, we specify that Ruby version as the base image. These base images are maintained by folks much smarter than I am, and you can check out all of the details on Docker Hub, which hosts these images.

ADD . /rails-app
WORKDIR /rails-app

These two lines work in concert to set the image up. The first line, ADD, copies files from your local file system (notice the ., which means "this directory") to the image's file system at /rails-app. After we've copied our code over, we set the working directory to the folder we just created: WORKDIR /rails-app. Right, simple so far!

The next two lines are optional, but I find value in having nano inside the container so I can edit stuff if required.

RUN apt-get update
RUN apt-get -y install nano

Basically, we run these two shell commands to install the nano command line editor inside the container.
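As an aside, here's how you'd actually get into a running container to use nano (the container name is a placeholder; docker ps will show yours):

# Open an interactive bash shell inside a running container.
# Replace <container> with the name or ID shown by `docker ps`.
docker exec -it <container> bash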

The next two lines deal with our Rails app specifically, but we just run the two commands as if we were in the working directory of the application inside the container:

RUN bundle install
RUN rake assets:precompile

The first command installs the gems we've specified in our Gemfile, which was copied over from our host directory, and the second precompiles assets. Note that the precompile step is only required if you have assets in your Rails app (CSS, JavaScript, etc). For API-only applications, this line should be commented out.

The next set of lines sets up a few environment variables we want to use and is pretty straightforward.

ENV RAILS_ENV production
ENV RAILS_SERVE_STATIC_FILES true
ENV RAILS_LOG_TO_STDOUT true

One note is that it is typically a good idea when using docker to log to a container's stdout, because that enables us to use log aggregation drivers. It's a bit more of an advanced usage, but docker offers us ways to easily access a container's stdout, and having logs go there means we can use built-in docker methods to handle our logging in concert with the other containers.
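For instance, once logs go to stdout, tailing them is just a built-in docker command (the container name is a placeholder):

# Stream a container's stdout/stderr; -f follows the log output live.
docker logs -f <container>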

The next set of lines are interesting, and took me a bit to understand. Here they are again:

COPY ./bin/entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]

And for completeness' sake, here is the entrypoint.sh that I am using:

#!/bin/bash
# Exit on any error
set -e

# Remove a potentially pre-existing server.pid for Rails.
rm -f ./tmp/pids/server.pid

# Make sure db is ready to go
# Adding '2>/dev/null' sends output to nowhere in the case of an error and the
# error code also triggers the bash OR to run db:setup
bundle exec rails db:migrate 2>/dev/null || bundle exec rails db:setup
# App-specific task for this sample app; remove or replace for your own app
bundle exec rails ds:update_clients

# Then exec the container's main process (CMD in the Dockerfile).
exec "$@"

So, taken together, the Dockerfile lines copy the script from the current project directory to the container's /usr/bin/ folder, then set the executable permission on it with RUN chmod +x /usr/bin/entrypoint.sh so that the script can be run when the container is started. As the name implies, this script then serves as the entry into our container when it's running. That is to say, this script will run whenever we enter the container to run a command, or when it is started up.

Remember earlier how I had trouble setting up the database inside the Dockerfile and kept failing? This is how you can overcome that. We set up these "additional instructions" inside the recipe, but they only run once the container starts. It's like serving gravy on the side of a meal, with instructions. If we put the gravy on before serving, the whole thing might get too soggy. Or maybe some guests want more or less gravy. So we put it on the side and let the guest know they can use as much or as little gravy as they want when the time comes to eat. The entrypoint allows us to provide additional instructions that get executed when the image is run as a container.

NOTE: The docker ENTRYPOINT configuration is actually much more complicated than I have alluded to here. The goal of this section (and the entire article) is not a deep dive into docker; it's intended to be a first brush with docker to make it intelligible for those dipping their toes in. I know my explanation of ENTRYPOINT in particular likely falls short of explaining it in depth. That being said, the goal here is to get someone from zero knowledge to one containerized application, not full docker mastery.

The contents of the entrypoint.sh are pretty straightforward, and the script is annotated for your learning.

The next line simply tells docker that the container will be listening on port 3000 at runtime, so traffic directed at that port can reach our Rails server inside the container.

EXPOSE 3000

Finally, the last line!

CMD ["rails", "server", "-b", "0.0.0.0"]

This is what is run when the container is spun up, and it's just a regular old Rails command with some parameters. Because we defined an ENTRYPOINT, this command is passed to entrypoint.sh as arguments, and it's what the exec "$@" at the end of that script actually runs.

With that, we have the Rails app mostly in order as far as Docker is concerned. Oftentimes the resources I initially pursued would then jump right into docker compose, and that is indeed what we'll be doing in a beat, but the one thing that was missing for me was the why. Looking back, we can see that our Rails Dockerfile was mostly concerned with our Rails app. Up until now, we have just assumed we'll have a database available. And when we are in our local development environment, that's usually the case. Your machine probably has a PostgreSQL service running in the background, and all you have to worry about is setting up the Rails app.

And yet, the Rails application is useless without its PostgreSQL companion. Remember how I said previously that the Dockerfile is like the recipe for an image that will be run later? It turns out that there are recipes out there for PostgreSQL as well. What's the relevance for us? Well, if we set up our Rails recipe, and we can borrow a PostgreSQL recipe, we should be able to put them together to make them work in concert. And that's what docker compose does.

By way of analogy, a Dockerfile is like a recipe for a particular dish, while using docker compose and its docker-compose.yml is like setting the menu with its courses and their order.

Docker Compose

Okay, so what's up with this docker compose situation? Well, if we think about each section of our application as a container, then we have already successfully set up our Rails container. But as I mentioned before, we need to have a database for the Rails application to be of any use!

So, as I said before, if the Dockerfile is your recipe for each particular menu item, then the docker-compose.yml is like your menu for the evening. It specifies what entrees are available, and how they relate to each other.

Let me show you the docker-compose.yml file I ended up setting up and we can go through it line by line to understand the big picture.

version: '3.8'
# Logging config
x-logging:
  &logging
  driver: "local"
  options:
    max-size: "5m"
    max-file: "3"

services:
  db:
    image: postgres
    env_file:
      - .env
    volumes:
      - postgres:/var/lib/postgresql/data
    restart: always
    healthcheck:
      test: pg_isready -U postgres -h 127.0.0.1
      interval: 5s
    logging: *logging
  web:
    build: .
    restart: always
    ports:
      - "3000:3000"
    env_file:
      - .env
    environment:
      DATABASE_URL: ${DATABASE_URL}
    depends_on:
      - db
    logging: *logging

volumes:
  postgres:

Alright, again, this might be information overload but we'll step through it together.

The first line is straightforward:

version: '3.8'

This just defines the version of the Compose file format that this file uses. It is a shorthand that maps to a particular set of Docker Engine capabilities, so that what you write is interpreted correctly. As of this writing (Jan 8, 2023), the latest version you can specify is "3.8", which is what we've used.

We are actually going to skip the x-logging section and come back to it later, because it's not strictly necessary for getting an app up and running.

Okay, onto the meat (pun intended) of the file. Let's go over the services section.

db service

Because this is a regular YAML file, we'll examine the contents of the services section's sub-items in turn.

  db:
    image: postgres
    env_file:
      - .env
    volumes:
      - postgres:/var/lib/postgresql/data
    restart: always
    healthcheck:
      test: pg_isready -U postgres -h 127.0.0.1
      interval: 5s
    logging: *logging

Here, we define a service called db which, predictably, runs our database. Recall that we can borrow recipes from Docker Hub, and you'll see that we actually do that here:

image: postgres

This line tells docker compose to borrow the postgres image and use it to set up the container. Because we don't need anything custom with our PostgreSQL service, we can just use the pre-cooked image.

The next line includes a little bit of configuration via a .env file:

env_file:
  - .env

This file actually makes a reappearance later on in our Rails section, but what is important to know here is that by using a .env file, we don't store any sensitive information inside our docker-compose.yml file.

There was one thing that confused me here that I want to call out. Because we are relying on the pre-cooked postgres image, we should make sure we consult the documentation about what environment variables that image expects in order to function properly. In this case, the postgres image expects there to be a POSTGRES_PASSWORD environment variable, which we have set in our .env file. The value can be arbitrary, but it must be included in the container's environment.
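As a sketch, the relevant line of the .env file might look like this (the value is just a placeholder):

# .env -- keep this file out of version control
POSTGRES_PASSWORD=somepassword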

One alternative to using the env_file option is to use the environment option and set each variable manually. It looks like this:

environment:
  - POSTGRES_PASSWORD=somepassword

Obviously, this method is a little more straightforward, but the trouble is that now we've included a plain text password for our database in our docker-compose.yml and docker-compose.yml itself is a version controlled file. Whether or not that's okay is up to you.

NOTE: You can also use both if you want. Docker compose has a specific precedence it will follow if there are multiple sources for environment variables, which can be useful to understand: https://docs.docker.com/compose/envvars-precedence/

The next line creates or uses a docker volume:

volumes:
  - postgres:/var/lib/postgresql/data

This line sets up a named volume, postgres, which is then mounted inside the container at the path provided: /var/lib/postgresql/data. Docker will then persist this data to the host via its own magic. For example, on my local install the data lands under /var/lib/docker/volumes/rails_app_postgres/_data, because Docker Compose prefixes the volume name with the project name (here, rails_app). Note that we've named the volume postgres, as we will use it again later.
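If you want to see this for yourself, the docker CLI can list and inspect volumes (the exact volume name depends on your project name, so yours may differ):

# List all named volumes Docker knows about
docker volume ls

# Show details for our volume, including its mountpoint on the host
docker volume inspect rails_app_postgres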

The next few lines define a few features that are not strictly necessary, so we'll group them together and run through them as a block.

restart: always
healthcheck:
  test: pg_isready -U postgres -h 127.0.0.1
  interval: 5s
logging: *logging

restart specifies the conditions under which a container should automatically restart; here we specify always, though there are some other values as well. With healthcheck, we define a test and how often it should be run to report on the health of the service. Because we are using PostgreSQL, it comes with a command-line tool, pg_isready, which we run as the postgres user against the local container. Finally, we set up log rotation via the "extension fields" feature (which we skipped earlier). We'll talk about the extension field x-logging in the last section.
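If you're curious whether the healthcheck is actually passing, docker ps shows a (healthy) flag in the container's status column, and you can query it directly (the container name is a placeholder):

# Print just the health status of a container: starting, healthy, or unhealthy
docker inspect --format '{{.State.Health.Status}}' <container>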

web service

Okay, that was a quick tour of setting up our db service. Let's move on to the other service we've specified, web. Below is a recap of the web subsection under services:

web:
  build: .
  restart: always
  ports:
    - "3000:3000"
  env_file:
    - .env
  environment:
    DATABASE_URL: ${DATABASE_URL}
  depends_on:
    - db
  logging: *logging

Here, most of the options are ones we've already seen; build, ports, and depends_on are new. Let's examine them.

build: .

For our web service, instead of using a pre-cooked image from Docker Hub, we've specified that we should build an image from the current directory. This means docker compose will rely on our Dockerfile to generate an image of our Rails app for us.

ports:
  - "3000:3000"

This line maps the host port to our container's port. If you'll recall, we set our Rails app to listen on port 3000 in our Dockerfile, so we just make sure that port is exposed to the host from our container.

Finally, the last new command:

depends_on:
  - db

This line does what it says on the tin: it tells Docker Compose that we need our db service to be up before trying to spin up this service.
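A hedged aside: as written, depends_on only waits for the db container to start, not for PostgreSQL to actually accept connections. Since we already gave db a healthcheck, the newer docker compose CLI also supports a longer form that waits for the service to report healthy, which would look something like this:

depends_on:
  db:
    condition: service_healthy

The short form works for this app, but it's worth knowing the difference.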

That about covers the services section of the docker-compose.yml. There's just one more section to cover:

volumes:
  postgres:

Remember how we mounted this named volume earlier as part of our db service? Declaring it here at the top level is what actually tells Docker Compose to create the volume, so that any service that references it (like our database) can use it.

Putting it all together

Finally, we have all of our bits and pieces set up in order to use docker and docker compose to deploy this application.

The final piece that's missing is the logging section. This section uses "extension fields", which start with an x- prefix that Docker Compose itself ignores. The reason for this is that we can then use YAML anchors to share configuration across a number of different services.

In this example, we define a log rotation setup that we want to use across our db and web services. The &logging anchor names that block of configuration, and logging: *logging inside each service inserts it there.

The last bit we have to sort out is how our secrets are configured. We already touched on POSTGRES_PASSWORD required by the db service, but actually, we also require some secrets for the Rails application.

Chiefly, we require that our Rails application can access config/credentials.yml.enc, which is a version-controlled file. Typically, inside that file is the secret_key_base, which is used as the basis for all other encryption in a Rails app.

If you're starting with a Rails app you've already created, the easy solution is to find the config/master.key (or an environment-specific .key file, if you'd prefer) and stick that value into your .env so that Rails can read the encrypted file. The rule is: whatever key encrypted your config/credentials.yml.enc needs to be available inside your container in order to run the application.

So make sure to add RAILS_MASTER_KEY to your .env file so that Rails can decrypt your credentials.yml.enc!

Finally, it is a good idea to set up your Rails database configuration to also use an environment variable for connecting to the database. One easy way to do this is to set a DATABASE_URL environment variable that you can embed the username and password into. This way, you don't have to specify separate variables for user and password.

The reason for this is that both your containers will run in a virtual network and be able to talk to each other. Because we previously set up a POSTGRES_PASSWORD as required by the postgres image, we can set up a DATABASE_URL using that password as well for the Rails app inside the web service to communicate with the db service.

The format for this should be: postgres://postgres:<chosen_password>@db:5432
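Putting the secrets together, a complete .env for this setup might look something like this sketch (every value here is a placeholder):

# .env -- keep this file out of version control
POSTGRES_PASSWORD=somepassword
RAILS_MASTER_KEY=0123456789abcdef0123456789abcdef
DATABASE_URL=postgres://postgres:somepassword@db:5432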

Then you can add url: <%= ENV['DATABASE_URL'] %> to the default section of your database.yml and not have to define separate username and password variables.
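For reference, the default section might end up looking something like this (the adapter, encoding, and pool lines are the stock Rails defaults; only the url line is new):

# config/database.yml
default: &default
  adapter: postgresql
  encoding: unicode
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
  url: <%= ENV['DATABASE_URL'] %>

production:
  <<: *default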

Because our containers run on the same virtual network, we can simply refer to them by their service names and Docker will make sure they resolve internally. That's why the host portion of the URL above is db, the name of our database service.

Okay! Now that we have everything set up, we can see if everything works!

In your command line, you can run docker compose up -d and Docker should build and then run your containers. The -d tells docker to run in detached mode, which will run the containers in the background.

Here are some other commands to be aware of:

  • docker compose stop will stop the containers related to this application. You have to make sure you're in the project directory where docker-compose.yml lives for this to work.
  • docker compose logs <service> will pull up logs for the chosen service. There are also options for live logging (like --follow), so check it out.
  • docker compose build will re-build the images that are used to run the containers. Make sure that you use this command if you make any source changes to your Rails application, otherwise the changes will not be captured in the images and the app behaviour won't change as you expect.

Running this whole shebang remotely

Okay, now we have this docker compose setup working on our local development machine. How do we get this all up into the cloud or another server?

There are a few approaches. The first is the most straightforward, but potentially time-consuming:

  1. Either via git or a command like scp, copy the repository over to the target host machine, install docker and docker compose on the target host, and run the command via ssh. With this method, any changes made to the code must be copied to the target host machine, and the images re-built and re-deployed. For a project which changes often, this may not be an ideal approach.
  2. Another option is to set up a docker context. This means you run your docker commands against a defined docker daemon that may not be local to your current environment (there's a small sketch of this below).

Docker has a great blog post on this which is extremely straight forward: https://www.docker.com/blog/how-to-deploy-on-remote-docker-hosts-with-docker-compose/
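As a rough sketch of the context approach (the context name, user, and host are all placeholders):

# Create a context that points at a remote Docker daemon over SSH
docker context create my-remote --docker "host=ssh://user@remote-host"

# Run compose commands against that remote daemon
docker --context my-remote compose up -d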

I hope this helps, feel free to comment any questions and I will do my best to reply! You can also use docker to set up development environments, so stay tuned for that!
