Katarina Supe for Memgraph


How to orchestrate your graph application with Docker Compose

Introduction

Docker is an open platform for developing, shipping, and running applications. It helps you package and run an application in an isolated environment called a container. A container bundles everything the application needs, including libraries that therefore don’t have to be installed on the host machine. A Dockerized application is easy to share and run because it behaves the same way on different hosts. Docker Compose is a tool for defining and running multi-container Docker applications. All you need is a YAML file called docker-compose.yml in the root of your project to configure the application’s services.

Deploying applications with Docker Compose

Docker Compose can be used for production, staging, development, testing, or CI workflow environments. At Memgraph, we create many demo web applications to showcase Memgraph’s capabilities. When creating such applications, we follow these steps:

1. We create a Dockerfile for each service to define the Docker image it will use, copy the necessary directories, and install the required libraries inside the container. The application usually consists of a backend, a frontend, and a Memgraph (database) service.

my-application
├── backend
│   ├── app.py
│   └── Dockerfile
├── frontend
│   ├── src
│   │   └── App.js
│   └── Dockerfile
├── memgraph
│   └── Dockerfile
└── docker-compose.yml

In the memgraph service’s Dockerfile, we often copy in the files with newly defined procedures, called query modules.

FROM memgraph/memgraph-mage:1.1
USER root
# Copy the local query modules
COPY query_modules/twitch.py /usr/lib/memgraph/query_modules/twitch.py
USER memgraph

In this example, we used the memgraph/memgraph-mage image, version 1.1. We switched to the root user so that the service within the container has the necessary permissions to copy the local query modules file, and then switched back to the memgraph user.
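
The backend and frontend Dockerfiles follow a similar pattern. As a rough sketch, a backend Dockerfile for a Flask app could look like the example below; the Python base image, the requirements.txt file, and the start command are assumptions for illustration, not the exact setup of our demos.

# Sketch of a backend Dockerfile (assumed Flask setup, not the exact demo file)
FROM python:3.9
# Work inside the /app directory that docker-compose.yml mounts the source into
WORKDIR /app
# Install Python dependencies first to make good use of Docker's build cache
COPY requirements.txt .
RUN pip install -r requirements.txt
# Copy the rest of the backend source code
COPY . /app
# The backend service listens on port 5000
EXPOSE 5000
CMD ["python", "app.py"]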

2. Next, we define all services from the application in the docker-compose.yml file.

version: "3"

networks:
  app-tier:
    driver: bridge

services:
  memgraph:
    build: ./memgraph
    ports:
      - "7687:7687"
      - "7444:7444"
    networks:
      - app-tier

  backend-app:
    build: ./backend
    volumes:
      - ./backend:/app
    ports:
      - "5000:5000"
    networks:
      - app-tier

  frontend-app:
    build: ./frontend
    volumes:
      - ./frontend:/app
    ports:
      - "3000:3000"
    networks:
      - app-tier

Each service runs on a different port and copies the files it needs from the host machine into the container in its Dockerfile. For example, if the frontend service is a React app, you need to copy the package.json file and run npm install to install all packages and dependencies needed for the React app to run. All services are on the same app-tier bridge network. A bridge network allows containers connected to it, app-tier in this case, to communicate while isolating them from containers that are not connected to that network.
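
For example, a frontend Dockerfile along those lines might look like the sketch below; the Node base image and the npm start command are assumptions for a typical React setup.

# Sketch of a frontend Dockerfile (assumed React/Node setup)
FROM node:16
WORKDIR /app
# Copy package.json (and the lock file, if present) and install dependencies first
COPY package*.json ./
RUN npm install
# Copy the rest of the frontend source code
COPY . /app
# The React development server runs on port 3000
EXPOSE 3000
CMD ["npm", "start"]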

3. Finally, we build the application with the docker-compose build command and run it with docker-compose up.
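
Both commands are run from the directory that contains docker-compose.yml:

# Build the images for all services defined in docker-compose.yml
docker-compose build
# Create and start the containers (add -d to run them in the background)
docker-compose up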

Once you define your application with Docker Compose, it’s pretty easy to run it in different environments. Before deploying it, you only need to make a couple of changes regarding volume bindings, exposed ports, and similar settings. If you want to learn more about it, check out the Docker documentation. The easiest way to deploy the application is to run it on a single server, since that most closely resembles your development environment.
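
One common pattern, for instance, is to keep environment-specific settings in a separate override file and pass both files to Docker Compose; the file name and the host path below are placeholders, not our actual production configuration.

# docker-compose.prod.yml - example override with production-specific settings
version: "3"
services:
  memgraph:
    volumes:
      # Persist the database directory to a known location on the host
      - /srv/memgraph/lib:/var/lib/memgraph

You would then start the application with docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d, and Docker Compose merges the two files into a single configuration.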

Memgraph Docker Compose file

Let’s check out what’s written inside the Memgraph Docker Compose file. Memgraph offers you three different Docker images:

  1. Memgraph Platform, which contains:
    • MemgraphDB - the database that holds your data
    • Memgraph Lab - visual user interface for running queries and visualizing graph data
    • mgconsole - command-line interface for running queries
    • MAGE - graph algorithms and modules library
  2. Memgraph MAGE, which contains MemgraphDB and MAGE.
  3. Memgraph, which includes only MemgraphDB.

The definition of the Memgraph service within your docker-compose.yml file depends on the image you are using. Since Memgraph Platform provides an all-in-one solution, its image is the most widely used. This is the Docker Compose file for the Memgraph Platform image:

version: "3"
services:
  memgraph-platform:
    image: "memgraph/memgraph-platform"
    ports:
      - "7687:7687"
      - "3000:3000"
      - "7444:7444"
    volumes:
      - mg_lib:/var/lib/memgraph
      - mg_log:/var/log/memgraph
      - mg_etc:/etc/memgraph
    environment:
      - MEMGRAPH="--log-level=TRACE"
    entrypoint: ["/usr/bin/supervisord"]
volumes:
  mg_lib:
  mg_log:
  mg_etc:

Port 7687 is used for communication with Memgraph via the Bolt protocol. Port 3000 is exposed because Memgraph Lab will be running on localhost:3000, while port 7444 is there so that you can access the logs from within Memgraph Lab. We also specified three useful volumes:

  • mg_lib - directory containing the data, which enables data persistence
  • mg_log - directory containing log files
  • mg_etc - directory containing the configuration file

The exact location of the local directories depends on your specific setup.

Configuration settings can be changed by setting the value of the MEMGRAPH environment variable. In the above example, you can see how to set --log-level to TRACE. Since Memgraph Platform is not a single service, the process manager supervisord is used as the main running process in the entrypoint. Since the MAGE library is included in this image, you can use the available graph algorithms.
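
For example, several flags can be passed through the same variable; the extra flag below is only an illustration, so check Memgraph’s configuration reference for the options available in your version.

    environment:
      # Multiple configuration flags can be passed in a single MEMGRAPH value
      - MEMGRAPH="--log-level=TRACE --also-log-to-stderr"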

How we built Twitch analytics demo using Docker Compose

When we’re building demo applications to showcase Memgraph, we always use Docker Compose. This allows us to fire up the application on any system, which is useful when showing a demo at conferences or meetups. Also, applications created with Docker Compose are much easier to deploy. One such demo is the Twitch analytics demo. In the docker-compose.yml file, we defined the following services:

  • kafka - a message broker
  • zookeeper - a service that manages the kafka service
  • memgraph-mage - a service that uses the Memgraph MAGE Docker image, which lets us use graph algorithms such as PageRank and betweenness centrality
  • twitch-app - a Flask server that sends all the data we query from memgraph-mage to the react-app
  • react-app - a React app that visualizes the Twitch network
  • twitch-stream - a Python script that produces new messages to a Kafka topic
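
As an illustration of how the messaging services fit into the same Compose file, the kafka and zookeeper services could be defined roughly as shown below; the Bitnami images and their environment variables are assumptions for a minimal local setup, not the demo’s exact configuration.

  zookeeper:
    image: "bitnami/zookeeper:3.7"
    ports:
      - "2181:2181"
    environment:
      # Allow unauthenticated connections - acceptable for a local demo only
      - ALLOW_ANONYMOUS_LOGIN=yes
    networks:
      - app-tier

  kafka:
    image: "bitnami/kafka:2"
    ports:
      - "9092:9092"
    environment:
      # Point Kafka at the zookeeper service defined above
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper
    networks:
      - app-tier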


As mentioned before, all services are on the same app-tier bridge network to communicate with each other. To learn more about the implementation of this application, head over to the Twitch Streaming Graph Analysis blog post series.

Conclusion

Now that you’ve learned how to use Docker Compose to orchestrate your graph application, we hope the application development process with Memgraph will be more straightforward. Whichever Memgraph Docker image you decide to use, join our Discord server and showcase your creations!
