Running dev.to in a container

NEW POST, BETTER UX!

Original post below here

Dev.to is open source! Awesome! Getting the application running is relatively easy, so props to the dev.to team for good documentation and a clean Rails project.

This article walks you through how to get a development version of dev.to running in a container.

The architecture

dev.to requires a web server and a database to run in development mode. It may require more services for production, but that's out of scope for now.

The database

The database needs to exist first, as it is a dependency of the app. The database container and the app container will live on the same Docker network so they can communicate over TCP.

The database state will live on disk on our host machine (my laptop in this case). I like to mount a data directory into the postgres container so that I have some persistence when my container restarts. Here is some optional reading that might make this next part a little more clear.

# create the data directory
mkdir data

# create the docker network that will be shared by db and app containers
docker network create devto

# launch postgres
#   `--rm` removes the container when it stops running. This just helps us clean up
#   `--network devto` connects this container to our created network
#   `--name db` names the container and is shown in the output of `docker ps`
#   `-e POSTGRES_PASSWORD=devto` sets an environment variable which will set the password we will use to connect to postgres
#   `-e POSTGRES_USER=devto` sets the env var which will define the user that will connect to postgres
#   `-e POSTGRES_DB=PracticalDeveloper_development` sets the env var that defines the default database to create
#   `-v $(pwd)/data:/var/lib/postgresql/data` mounts the data directory we created above into the postgres container where all the data will live
#   `postgres:10` runs postgres using the latest stable 10 release (example: v10.5)
docker run --rm --network devto --name db -e POSTGRES_PASSWORD=devto -e POSTGRES_USER=devto -e POSTGRES_DB=PracticalDeveloper_development -v $(pwd)/data:/var/lib/postgresql/data postgres:10
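Before moving on, you can sanity-check that postgres is up and reachable over the devto network. This uses the same credentials and database name passed to the run command above; the throwaway container is just a convenient way to get a psql client.

# confirm the db container is running
docker ps --filter name=db

# connect with psql from a temporary container on the same network and list databases
docker run -it --rm --network devto postgres:10 psql postgresql://devto:devto@db:5432/PracticalDeveloper_development -c '\l'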

The web application

We have to modify the code a little bit to get this working with the following Dockerfile. Here are the changes I made:

  1. Add gem "tzinfo-data" to the Gemfile (I think this is Ubuntu related, not 100% sure yet)
  2. Set url: <%= ENV['DATABASE_URL'] %> in the default database configuration in config/database.yml
  3. Comment out host: localhost in the same file under the test configuration (see the sketch after this list)
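For reference, here is a rough sketch of what those two database.yml changes might look like. It assumes the stock Rails layout for this file; the exact keys in the dev.to repo may differ, and PracticalDeveloper_test is only my guess at the test database name.

default: &default
  adapter: postgresql
  encoding: unicode
  # change 2: read the full connection string from the environment
  url: <%= ENV['DATABASE_URL'] %>

test:
  <<: *default
  database: PracticalDeveloper_test
  # change 3: host: localhost is commented out so the host from DATABASE_URL wins
  # host: localhost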

After I finished this work I found out that an issue and a WIP pull request for this already exist. The approach presented in this article is a first pass and needs cleanup, but I tried to make it clear what is going on.

FROM ubuntu:18.04

ADD . /root/dev.to
WORKDIR /root/dev.to/

# Install the build dependencies needed to compile ruby
RUN apt update && apt install -y autoconf bison build-essential libssl-dev libyaml-dev libreadline-dev zlib1g-dev libncurses5-dev libffi-dev libgdbm5 libgdbm-dev

# Put rbenv on the PATH so it is available when you run the container in interactive mode
RUN echo 'export PATH=/root/.rbenv/bin:/root/.rbenv/shims:$PATH' >> ~/.bashrc

# install rbenv via rbenv-installer
RUN apt install -y curl git && \
    export PATH=/root/.rbenv/bin:/root/.rbenv/shims:$PATH && \
    curl -fsSL https://github.com/rbenv/rbenv-installer/raw/master/bin/rbenv-installer | bash

# use rbenv to install the ruby version pinned in .ruby-version
RUN export PATH=/root/.rbenv/bin:/root/.rbenv/shims:$PATH && \
    rbenv install && \
    echo 'eval "$(rbenv init -)"' >> ~/.bashrc

# Install gems and yarn
RUN export PATH=/root/.rbenv/bin:/root/.rbenv/shims:$PATH && \
    gem install bundler && \
    gem install foreman && \
    curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - && \
    echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list && \
    apt-get update && \
    apt install -y yarn libpq-dev && \
    bundle install && \
    bin/yarn

Modify the command below with the correct Algolia keys and app ID, then build and run the Docker image.

docker build . -t dev.to:latest

# set the various timeouts to a large value (10000) since the containerized app and database tend to be *extremely* slow.
# -p 3000:3000 exposes the port 3000 on the container to the host's port 3000. This lets us access our dev environment on our laptop at http://localhost:3000.
docker run -it --rm --network devto -p 3000:3000 -e RACK_TIMEOUT_WAIT_TIMEOUT=10000 -e RACK_TIMEOUT_SERVICE_TIMEOUT=10000 -e STATEMENT_TIMEOUT=10000 -e ALGOLIASEARCH_API_KEY=yourkey -e ALGOLIASEARCH_APPLICATION_ID=yourid -e ALGOLIASEARCH_SEARCH_ONLY_KEY=yourotherkey -e DATABASE_URL=postgresql://devto:devto@db:5432/PracticalDeveloper_development dev.to:latest /bin/bash

Inside the container shell, run:

> bin/setup
> bin/rails server
...

Then open up http://localhost:3000 on your laptop.
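If the page doesn't come up, a quick check from the host tells you whether the server is at least answering on that port; it should print an HTTP status line once the rails server has finished booting.

curl -sI http://localhost:3000 | head -n 1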

If you have trouble, please leave a comment and I'll update this post!

This is part 1 of a series on getting dev.to running on Kubernetes. Stay tuned for the next article!

Top comments (4)

David J Eddy

Question: Does the Ruby container need to have rbenv-*? Wouldn't setting $PATH to the project root be enough since the application is running in a container? I am not a Ruby person but am curious how it works.

Chuck Ha

Two points here

  1. I'm installing ruby with rbenv so that if the dev.to repo ever updates the .ruby-version file this image will use the correct version of ruby (see the quick sketch below).
  2. The reason I have to add .rbenv/bin and .rbenv/shims to the $PATH is that rbenv-installer will fail with an exit code > 0 if they are not on the $PATH. It looks funny to me that I have to modify the $PATH before running the installer, but this seems to be a quirk of rbenv-doctor, which is run after the installation.
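To make point 1 concrete, here is a tiny sketch of the behavior the Dockerfile relies on; run inside the image from the repo root:

# the repo pins its ruby version in this file
cat .ruby-version

# with no argument, rbenv install builds exactly that pinned version
rbenv install

# confirm which version rbenv resolved for this directory (and why)
rbenv version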

I'm hoping to clean up the Dockerfile as I write out this series of posts.

David J Eddy

Interesting, I am conflicted.

I like the forward thinking of not needing to modify an upstream provider's content, esp. due to a minor version increment.

But on the other hand, I thought that if any of the app files change (i.e. .ruby-version), shouldn't the image be rebuilt and the dependent containers restarted?

The phrase "There are no solutions, only trade-offs" comes to mind.

Chuck Ha

To your point, I think there is a better solution than what I've provided in this article: I should be using a multistage build. If I were building dev.to in a container for production I'd probably have a build pipeline for each commit that would do something like:

  1. build a ruby image based off the .ruby-version file
  2. build a dependency image from the ruby image off the Gemfile/yarn lockfiles
  3. build an image with compiled assets
  4. add in the app code

The only time all the build steps get run is when .ruby-version is updated; otherwise the build uses the cache (assuming Docker is able to cache, i.e. the build is not running on ephemeral infrastructure). In the normal case, where the app code is changing but the dependencies are not, only the last step is executed, which should be relatively quick.
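Purely to illustrate that layering, here is a rough multistage Dockerfile sketch. It is not tested against the dev.to repo: the ruby:2.5 tag is a placeholder that would be kept in sync with .ruby-version, and the node/yarn and asset steps are heavily simplified.

# stage 1: ruby toolchain, tag kept in sync with the repo's .ruby-version
FROM ruby:2.5 AS base
WORKDIR /app

# stage 2: gems, rebuilt only when the Gemfile changes
FROM base AS deps
COPY Gemfile Gemfile.lock ./
RUN gem install bundler && bundle install

# stage 3: app code and compiled assets on top of the cached dependency layer
# (in practice this stage would also need node/yarn installed for the asset build)
FROM deps AS app
COPY . .
RUN bin/rails assets:precompile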