Vitor Monteiro
Setup Graylog on Synology DSM 7.x with Docker for UniFi logs

graylog-ui

If you're just looking for a functional docker-compose.yml that works on Synology DSM 7, you can find it here.

I recently encountered some issues with my UniFi devices and attempted to review the logs for troubleshooting. However, I found that the UniFi OS lacks efficient log management capabilities. With this in mind, I recalled that Lawrence Systems utilizes Graylog for indexing and reading UniFi logs, so I set out to do the same myself.

I run most of my shared services on my Synology NAS, which runs DSM 7. To keep things reliable and easy to restore in case of data loss, I define all my services via docker-compose and regularly back up my service configurations.

So how hard can it be to add a few more lines to the docker-compose.yml file and spin up Graylog for all my log ingestion needs? Not hard in principle, but troublesome in the particular context of DSM and this particular NAS...

Setup Overview

This is my target setup:

  • Run Graylog's most recent version as a shared service in my NAS
  • Use docker and docker-compose to define all required services
  • Forward logs from my UniFi controller to Graylog
  • Consume those logs via any internal client in my network

graylog-nas-docker-dsm7-setup-diagram

Issues running vanilla configuration on DSM 7

When I was looking for official docs on the matter, I found two links which give slightly different instructions.

They differ in some aspects, but ultimately, as I tried to run and fix the provided docker-compose.yml on DSM 7, I started encountering the issues below 👇

MongoDB 5 requires AVX Support on the NAS CPU


 MongoDB 5.0+ requires a CPU with AVX support, and your current system does not appear to have that!


So what exactly is AVX? 🤔

Advanced Vector Extensions (AVX) are extensions to the x86 instruction set architecture for microprocessors from Intel and Advanced Micro Devices (AMD). They were proposed by Intel in March 2008 and first supported by Intel with the Sandy Bridge[1] processor shipping in Q1 2011 and later by AMD with the Bulldozer[2] processor shipping in Q3 2011. AVX provides new features, new instructions and a new coding scheme. - Wikipedia

It seems AVX is not a thing in Celeron CPUs, and as such support on Synology NAS models is limited, especially in the home and semi-pro variants of their products.
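You can check for yourself whether your NAS CPU supports AVX by SSHing into the box and inspecting the CPU flags. A minimal check:

```shell
# Print whether the CPU advertises the AVX instruction set.
# /proc/cpuinfo lists one "flags" line per core; -w matches "avx" as a whole word.
if grep -qw avx /proc/cpuinfo; then
  echo "AVX supported"
else
  echo "No AVX"
fi
```

If this prints "No AVX", MongoDB 5.0+ containers will refuse to start on that machine.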

✅ Solution: Downgrade MongoDB to the latest 4.x release, which doesn't require AVX support.

Graylog 5 requires MongoDB 5


 You're running MongoDB 4.4.18 but Graylog requires at least MongoDB 5.0.0. Please upgrade.


As an end-user who primarily wants to ingest logs into Graylog and be able to query them, I was not overly concerned with this downgrade.

✅ Solution: Downgrade Graylog to the latest 4.x release, which doesn't require MongoDB 5.x.
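In compose terms, both downgrades just mean pinning the image tags. A sketch (these were the latest 4.x tags at the time of writing; check Docker Hub for newer point releases):

```yaml
services:
  mongo:
    image: mongo:4.4.18            # latest 4.x -- no AVX requirement
  graylog:
    image: graylog/graylog:4.3.11  # latest 4.x -- works with MongoDB 4.x
```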

Incorrect mapping of elasticsearch folder

This is the issue that really got me. With Docker, whenever I hit path access issues it's usually one of two things: either the user I'm defining lacks permissions on the local folder I'm mapping, or that folder simply doesn't exist. While troubleshooting this, I went nuts trying to find out why Docker didn't have access to the folder in question, to the point that I gave up. After a day or so I picked the project up again, and then I found a Stack Overflow post suggesting changing the path on the Docker image itself:

stackoverflow-post

✅ Solution: Change the Elasticsearch internal path from /usr/share/elasticsearch/data to /var/lib/elasticsearch/data.
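As a sketch, the remapped mount then looks like this. The host path /volume1/docker/graylog/elasticsearch is an assumed example, and passing path.data as an environment variable is one way to point Elasticsearch at the new directory (the official image accepts settings in that dotted form):

```yaml
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.10.2
    environment:
      # Move the data directory away from /usr/share, which DSM's Docker
      # had trouble bind-mounting.
      - path.data=/var/lib/elasticsearch/data
    volumes:
      - /volume1/docker/graylog/elasticsearch:/var/lib/elasticsearch/data
```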

Graylog not able to write files in the mapped volume

Unfortunately, this is an issue for which I do not have a solution on DSM. As it stands, Graylog doesn't run as root inside the container; it runs as user:group 1100:1100. I've read plenty of suggestions to just chown the folder to that user:group, but this didn't solve it.

I then tried to create a matching group and user on DSM with addgroup, only to find that DSM does not allow you to create users and groups with specific IDs. I even went to the trouble of pre-creating some of the files Graylog was trying to generate, like graylog.conf, but as I was doing that I figured this would be a never-ending problem.
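For reference, this is the kind of fix that works on a regular Linux host but didn't pan out for me on DSM. The /volume1/docker/graylog path is an assumed example, and 1100:1100 is the user:group the Graylog container runs as:

```shell
# Pre-create the bind-mounted folders and hand them over to Graylog's
# container user (1100:1100). Run as root over SSH on the NAS.
# DATA_DIR is an assumed example path -- adjust to your own share.
DATA_DIR="${DATA_DIR:-/volume1/docker/graylog}"
mkdir -p "$DATA_DIR/journal" "$DATA_DIR/config" || exit 1
chown -R 1100:1100 "$DATA_DIR" || echo "chown failed: run as root"
```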

This matches an upstream Graylog GitHub issue, quoted below:

ERROR: Unable to access file /usr/share/graylog/data/journal/graylog2-committed-read-offset: Permission denied #2155

Problem description

ERROR: Unable to access file /usr/share/graylog/data/journal/graylog2-committed-read-offset: Permission denied

Steps to reproduce the problem

While doing a docker-compose up Graylog server is stopping with the above error

Here is the docker-compose file

mongo:
  image: "mongo:3"
  volumes:
    - /graylog/data/mongo:/data/db
elasticsearch:
  image: "elasticsearch:2"
  command: "elasticsearch -Des.cluster.name='graylog'"
  volumes:
    - /graylog/data/elasticsearch:/usr/share/elasticsearch/data
graylog:
  image: graylog2/server:2.0.0-rc.1-1
  volumes:
    - /graylog/data/journal:/usr/share/graylog/data/journal
    - /graylog/config:/usr/share/graylog/data/config
  environment:
    GRAYLOG_PASSWORD_SECRET: somepasswordpepper
    GRAYLOG_ROOT_PASSWORD_SHA2: 8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
    GRAYLOG_REST_TRANSPORT_URI: http://127.0.0.1:12900

  links:
    - mongo:mongo
    - elasticsearch:elasticsearch
  ports:
    - "9000:9000"
    - "12900:12900"

Environment

  • Graylog Version: 2
  • Elasticsearch Version: 2
  • MongoDB Version: 3
  • Operating System: CentOS 7
  • Browser version:

Working docker-compose file

As it is, I've failed to set up a persistent configuration for Graylog on DSM. If anyone has solved this, please let me know in the comments. The docker-compose file you see below uses volatile storage, so you'll lose your logs and configuration as soon as the containers are recreated.

Setting up a new input on Graylog

After Graylog is running and you log in via http://hostname:9000, you'll be alerted that there are no inputs set. Navigate to System -> Inputs:

system-inputs-menu

On the Inputs screen, find the dropdown with Select Input and pick Syslog UDP:

syslog-udp-input

Give it whatever name you'd like. The input is pretty much ready to go, with the exception of the default port. Change it from 514 to 1514 (ports below 1024 are privileged, and the Graylog container doesn't run as root):

syslog-udp-input-port

This is it on the Graylog side; let's go to the UniFi controller to forward the logs.
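Before touching the UniFi side, you can optionally confirm the input accepts messages by hand-crafting a syslog line. GRAYLOG_HOST is a placeholder for your NAS hostname, and <14> is the syslog priority value for facility "user", severity "info":

```shell
# Build a minimal RFC 3164-style syslog line and push it to the input
# over UDP. Requires netcat (nc).
GRAYLOG_HOST="${GRAYLOG_HOST:-nas.local}"
MSG="<14>$(date '+%b %d %H:%M:%S') test-host test-app: hello graylog"
echo "$MSG" | nc -u -w1 "$GRAYLOG_HOST" 1514 || echo "could not reach $GRAYLOG_HOST"
```

If it worked, the message shows up in a search for the last few minutes.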

Setting up log forwarding on UniFi controller

When I say UniFi controller, I mean any of their controller products: CloudKey, UniFi Dream Router or UniFi Dream Machine. I'm personally a CloudKey user.

Go into Network -> Settings and expand Support until you see this block. Tick the Syslog checkbox and fill in your NAS hostname, as well as port 1514.

unifi-log-fw

Checking our logs on Graylog

If everything was configured properly, you should see your UniFi logs in Graylog. If you have any comments or improvements for the post, I'm always looking for advice and guidance 🙇

graylog-ui

Top comments (2)

Jamal Miah

Here it is with persistent storage:

version: "3.7"

services:
  mongo:
    container_name: mongo
    image: mongo:4.4.18
    # Map the data directory inside the container to a directory on the host machine to make it persistent
    volumes:
      - mongo_data:/data/db
    networks:
      - graylog

  elasticsearch:
    container_name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.10.2
    environment:
      - http.host=0.0.0.0
      - transport.host=localhost
      - network.host=0.0.0.0
      - "ES_JAVA_OPTS=-Dlog4j2.formatMsgNoLookups=true -Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    deploy:
      resources:
        limits:
          memory: 1g
    # Map the data directory inside the container to a directory on the host machine to make it persistent
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data
    networks:
      - graylog

  graylog:
    container_name: graylog
    image: graylog/graylog:4.3.11
    environment:
      - GRAYLOG_PASSWORD_SECRET=CHANGEME_MIN16CHARS
      # Password: the password below is `admin` sha256 hashed, make your own with: echo -n your_pass | sha256sum
      - GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
      - GRAYLOG_HTTP_EXTERNAL_URI=http://127.0.0.1:9000/
    # Map the data directory inside the container to a directory on the host machine to make it persistent
    volumes:
      - graylog_data:/usr/share/graylog/data
    entrypoint: /usr/bin/tini -- wait-for-it elasticsearch:9200 --  /docker-entrypoint.sh
    networks:
      - graylog
    restart: always
    depends_on:
      - mongo
      - elasticsearch
    ports:
      # Graylog web interface and REST API
      - 9000:9000
      # Syslog TCP
      - 1514:1514
      # Syslog UDP
      - 1514:1514/udp
      # GELF TCP
      - 12201:12201
      # GELF UDP
      - 12201:12201/udp

networks:
  graylog:
    driver: bridge

# Define the volumes to be used for persistent storage
volumes:
  mongo_data:
  elasticsearch_data:
  graylog_data:

Anas92230

Hi,
Very interesting post, thank you.
Did you find any solution to solve it?
One other remark: I send logs from Synology to Graylog, but no messages are received! I can see log packets from Synology arriving (on the input's received-data counter), but I cannot see the messages themselves.
Thanks for your feedback