Tobi Akanji

Build and Host Containerized Micro-Services

In this tutorial, we will deploy the main components of a containerized Node app as services on an AWS server.

We will go through three major steps:

  1. Containerization
  2. Docker-compose
  3. Proxying

Connecting To Your Server Via SSH

$ ssh -i <path>/<to>/<.pem file> <user>@<id>.<region>.compute.amazonaws.com
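
If SSH rejects the key because its permissions are too open (a common first-time error with .pem files), tighten them before connecting:

$ chmod 400 <path>/<to>/<.pem file>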

Tools

The minimum requirements to follow this tutorial are:

  1. A CLI (e.g. Bash, PowerShell…)
  2. Docker
  3. Docker Compose
  4. NGINX
  5. AWS-CLI

For general CLI activities, I use Git Bash.

Docker

Docker will be used to containerize our application, which in this case, is a microservice.

Install Docker

Docker Compose

Docker Compose will be used to define how the microservices that make up your application relate to one another.

  1. Install docker-compose

  2. Create a docker-compose.yml file in a folder in your home (~) directory, e.g.

$ cd ~ && mkdir my_new_app && cd my_new_app && touch docker-compose.yml

NGINX

NGINX will be used to define how the outside world reaches our application, and to help secure it.

Install NGINX

Containerizing Your Microservices

Open your CLI tool and go into your app root directory.

$ cd <path/to/app/root/directory>

You might want to check where you are first; to list the contents of your current directory, run:

$ dir

In your app's root directory, create a file named Dockerfile, with no file extension.

$ touch Dockerfile

Ensure that whatever tool you use to create the file does not add an extension to the Dockerfile. You can confirm this by listing the contents of your current directory:

$ ls

Setting Up The Dockerfile

The minimum requirement for a Dockerfile is:

FROM node:14.15-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["npm", "start"]

OR

To allow for further process automation (e.g. database migration), the actual commands to be executed when the container starts can live in a shell script (.sh file) in our app, e.g. the deploy.sh file below:

#!/bin/sh
cd /app
npm run migrate:up
npm run start

The Dockerfile will then look similar to this:

FROM node:14.15-alpine
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install
COPY . .
RUN chmod +x deploy.sh
ENTRYPOINT ["./deploy.sh"]
  • FROM: Defines our base image, that is, the foundation upon which we are building our application/image. This could be the programming language (e.g. python), runtime (e.g. node), OS (e.g. ubuntu) etc.
    The argument goes with the syntax <name>:<tag>.

  • WORKDIR: Specifies the folder in our Docker image where we will place our work files. Usually you should place your work files (codebase files) in a folder, conventionally /app.

  • COPY: Copies files and folders from our local machine into the Docker image. The last argument indicates the folder in the image we are copying to.

  • RUN: Runs shell commands while the image is being built.

  • CMD: Defines the default command to run when the container starts, i.e. the command that launches the application.

  • ENTRYPOINT: Configures the executable that runs when the container starts; here, it runs our deploy.sh script.

The line RUN chmod +x deploy.sh makes the deploy.sh script executable. Without this permission change, it is most likely that the container user will not be able to run the script on the server.

Note: a migrate:up script is assumed to have been defined in the package.json, as sketched below.
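
For illustration, a minimal sketch of the relevant scripts section of package.json; the entry point (index.js) and the migration tool (knex) are assumptions here, so substitute your own commands:

{
  "scripts": {
    "start": "node index.js",
    "migrate:up": "knex migrate:latest"
  }
}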

The first line, #!/bin/sh, is the shebang. It is compulsory, so that the system knows which shell interpreter to use to run the script.
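
Optionally (an assumption on my part, not something the Dockerfile above requires), a .dockerignore file in the same directory keeps bulky or sensitive files out of the image when COPY . . runs:

node_modules
npm-debug.log
.env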

Building The Image

On your CLI, still at your app's root directory, run:

$ docker build -t my_own_app:1 .

...to simply build the image.

OR

$ docker run -p <port>:<port> my_own_app:1

...to spin off a container from the built image.
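
For example, assuming the app listens on port 8080 inside the container (an assumption for illustration), a typical invocation could be:

$ docker run -d --name my_own_app -p 8080:8080 my_own_app:1

The -d flag runs the container in the background, and --name gives it a fixed name so it is easier to stop or inspect later.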

Tagging The Image For A Remote Docker Repo

$ docker tag my_own_app:1 registry.tboyak.io/my_own_app:1

Pushing To The Remote Registry

For our app image to be accessible online, we need to push it to a remote registry (e.g. Docker Hub or a private registry). To do this, we run:

$ docker push registry.tboyak.io/my_own_app:1
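
If the push is rejected because you are not authenticated, log in to the registry first and retry; docker login works with Docker Hub as well as private registries:

$ docker login registry.tboyak.io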

Setting Up The Docker Compose

The minimum requirements for a docker-compose.yml file with an app and a database are as set up below:

#docker-compose.yml

version: "3"
services:
  my_own_app:
    # build: ./
    image: registry.tboyak.io/my_own_app:1
    ports:
      - 80:8080
    environment:
      - PORT=8080
      - DB_CLIENT=pg
      - DB_HOST=db
      - DB_PORT=5432
      - DB_DATABASE=my_own_app
      - DB_USERNAME=postgres
      - DB_PASSWORD=password
    depends_on:
      - db
  db:
    image: postgres:13-alpine
    container_name: db
    ports:
      - 5432:5432
    environment:
      - POSTGRES_DB=my_own_app
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password

## If using mysql
#  db:
#    image: mysql:latest
#    container_name: db
#    ports:
#      - 3306:3306
#    environment:
#       MYSQL_ROOT_PASSWORD: root_password
#       MYSQL_DATABASE: my_own_app
#       MYSQL_USER: wordpress
#       MYSQL_PASSWORD: password

# For MongoDB, there are further requirements that cannot be covered here

Note that we're treating each main component as a service, e.g. our app and the database.

  • version: The version of the Compose file format you intend to use. You can always check the documentation for the latest.

  • services: The dictionary of microservices you need for your fully running application. In this example, we only need our app, and a database.

  • build: This indicates that we are building an image for our own application from our Dockerfile. It takes the value of the root directory of the app we intend to build, which is where the Dockerfile should be.

  • image: We indicate the name and tag of the image we intend to use for our app in this format [registry.username/]<name>:<tag>.

  • ports: The list of port mappings for the service. This indicates the port we expose to the outside world in order to reach the port the service runs on internally.
    The syntax reads <external port>:<internal port>.

  • environment: The list of environment variables for the associated service.

  • container_name: The default name we intend to give the container we spin up from the built image.

  • depends_on: The list of microservices that the particular microservice depends on.
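
Before moving on, it can help to sanity-check the file; docker-compose config parses the docker-compose.yml and prints the resolved configuration, or an error if the YAML is invalid:

$ docker-compose config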

In the event that your server is too small to handle your RDBMS, use AWS RDS (Relational Database Service) instead.

Connecting To RDS

  1. First of all, you will need your server to be authenticated with the RDS instance:
$ aws rds generate-db-auth-token \
   --hostname <name>.<id>.<region>.rds.amazonaws.com \
   --port 3306 \
   --region <region> \
   --username <username>
  2. Connect to a DB instance on the RDS. The access parameters for the DB then become the environment variables for the DB connection in your own app (see the sketch after this list). That is:
  • DB_HOST=...rds.amazonaws.com
  • DB_NAME=
  • DB_PORT=
  • DB_USERNAME=
  • DB_PASSWORD=
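
In that case, the local db service can be dropped from docker-compose.yml and the app's environment pointed at the RDS endpoint instead. A sketch, with placeholders to be replaced by your own RDS values:

services:
  my_own_app:
    image: registry.tboyak.io/my_own_app:1
    ports:
      - 80:8080
    environment:
      - PORT=8080
      - DB_CLIENT=pg
      - DB_HOST=<name>.<id>.<region>.rds.amazonaws.com
      - DB_PORT=5432
      - DB_DATABASE=my_own_app
      - DB_USERNAME=<username>
      - DB_PASSWORD=<password>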

Running The Containers

On your CLI, still at your app's root directory, run:

$ docker-compose up
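
To keep the containers running in the background and inspect them afterwards, the usual Compose commands apply:

$ docker-compose up -d
$ docker-compose ps
$ docker-compose logs -f my_own_app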

If your image is hosted on a private registry, e.g. AWS ECR, you will need access to it on your server before you can successfully run docker-compose. To do that:

  1. Install AWS-CLI
  2. Login to the ECR (Elastic Container Registry). Simply run:
$ aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <registry>

The AWS CLI will need to be configured with certain credentials, which you can find on your AWS dashboard/profile (see the sketch after this list):

  • AWS_ACCESS_KEY_ID
  • AWS_SECRET_ACCESS_KEY
  • AWS_DEFAULT_REGION
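
A minimal sketch of the configuration step, assuming the AWS CLI is already installed; aws configure prompts for the values above:

$ aws configure
AWS Access Key ID [None]: <AWS_ACCESS_KEY_ID>
AWS Secret Access Key [None]: <AWS_SECRET_ACCESS_KEY>
Default region name [None]: <AWS_DEFAULT_REGION>
Default output format [None]: json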

Proxy Networking

Create and/or open the configuration file for your site in NGINX's sites-available directory:

$ sudo nano /etc/nginx/conf.d/sites-available/<your_domain_name>.conf

Now edit the file's content to something similar:

# /etc/nginx/conf.d/sites-available/<your_domain_name>.conf

server {
        listen 80;
        listen [::]:80;

        server_name <your_domain> www.<your_domain>;

        location / {
                proxy_pass  http://<your_service>;
        }
}

After successfully editing and saving the file, create a sites-enabled folder if it doesn't exist. This folder will contain the sites enabled for access via NGINX.

Afterwards, symbolically link the file in sites-available into sites-enabled. This way, updates to the file in sites-available are automatically reflected in sites-enabled.

$ cd /etc/nginx/conf.d
$ sudo mkdir sites-enabled && cd sites-enabled
$ sudo ln -s ../sites-available/<your_domain_name>.conf .

Point NGINX's configuration lookup at the sites-enabled folder.

$ sudo nano /etc/nginx/nginx.conf

Change the line include /etc/nginx/conf.d/*.conf; to include /etc/nginx/conf.d/sites-enabled/*.conf;

When everything is successfully set up, validate the configuration and restart NGINX.
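
A quick test first; nginx -t checks the configuration files for syntax errors without reloading anything:

$ sudo nginx -t

Then restart: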

$ sudo service nginx restart

Now, you should be able to access the service you just created in your browser, or from your terminal with an HTTP client, e.g. curl or Postman.
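
For instance, a quick check with curl (replace the placeholder with your actual domain):

$ curl http://<your_domain>/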

Conclusion

Hopefully this tutorial is helpful. Please leave your comments below. Feedback and more insights are very much welcome.
