An Introduction to Graylog

By Christian (klauenboesch) · Originally published at globalelements.ch · 2 min read

Graylog, recently released in version 2.5, is an alternative to the well-known ELK stack (Elasticsearch, Logstash, Kibana). In contrast to the ELK stack, Graylog uses MongoDB as a storage backend for settings and authentication data, while leveraging Elasticsearch as the document store for the log messages themselves.

This post is going to be a part of a series that will explore Graylog in detail. Stay tuned!

(Screenshot: the sample dashboard as shown in the Graylog documentation.)

If you’re looking for an application that is easy to get started with, yet quite powerful and highly customizable – and Open Source on top of that – Graylog might be your solution. Compared to the “classic” ELK stack, Graylog additionally provides a fully-fledged authentication backend and can integrate with any LDAP directory (for example, Active Directory).

The key concept in Graylog is the input, which is nothing more than a definition of how to receive messages. Graylog supports the well-known Syslog format as well as GELF, a JSON-based format maintained by Graylog itself. GELF is supported over both UDP and TCP, which makes Graylog quite flexible – delivering log messages over the internet is not an issue at all, as the TCP connection supports TLS for encrypted transfer. Graylog can also easily be configured to act as a relay and forward all messages (or only those matching a pattern) to another instance.
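To make the GELF format concrete, here is a minimal Python sketch that builds a GELF 1.1 payload and sends it uncompressed over UDP. The hostname `graylog.example.com` is a placeholder, and 12201 is the conventional GELF port; field names starting with an underscore become additional (custom) fields in Graylog.

```python
import json
import socket
import time


def build_gelf_message(short_message, **extra_fields):
    """Build a minimal GELF 1.1 payload.

    Keyword arguments become GELF "additional fields"; per the
    GELF spec their names must start with an underscore.
    """
    payload = {
        "version": "1.1",
        "host": socket.gethostname(),
        "short_message": short_message,
        "timestamp": time.time(),
        "level": 6,  # syslog severity "informational"
    }
    payload.update(extra_fields)
    return payload


def send_gelf_udp(payload, host, port=12201):
    """Send one uncompressed GELF payload as a single UDP datagram."""
    data = json.dumps(payload).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(data, (host, port))
```

Usage would look like `send_gelf_udp(build_gelf_message("user logged in", _app="shop"), "graylog.example.com")` – in production you would normally use an existing GELF library for your language instead of hand-rolling this.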

Inputs are routed into streams, which represent collections of messages. A stream can be configured to collect all messages matching a pattern (e.g. a regular expression). If you ever need to pull information out of a log message, extractors come to the rescue: they extract data from a message by applying regular expressions, and can then convert the extracted data to various types, such as dates or IP addresses.
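Extractors themselves are configured per input in the Graylog web UI, but the two-step operation they perform – extract via regex, then convert to a typed value – can be illustrated with a short Python sketch (the sample log line and field names are made up for the example):

```python
import ipaddress
import re
from datetime import datetime

# A sample access-log line, as it might arrive on a Graylog input.
LINE = '203.0.113.7 - - [10/Feb/2019:13:55:36 +0000] "GET /login HTTP/1.1" 200'

# Step 1 - "extract": pull substrings out of the raw message with regexes.
ip_raw = re.search(r"^(\d{1,3}(?:\.\d{1,3}){3})", LINE).group(1)
ts_raw = re.search(r"\[([^\]]+)\]", LINE).group(1)

# Step 2 - "convert": turn the extracted strings into typed values,
# analogous to an extractor's IP and date converters.
client_ip = ipaddress.ip_address(ip_raw)
timestamp = datetime.strptime(ts_raw, "%d/%b/%Y:%H:%M:%S %z")

print(client_ip, timestamp.isoformat())
```

The typed fields (here `client_ip` and `timestamp`) are what make messages searchable and chartable later on, rather than just being opaque text.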

If that is not enough, Graylog provides a concept called pipelines. Pipelines basically allow you to “code” a custom, complex process for handling an incoming log message, which can include modifying and routing it. A classic example: a message is routed into a stream based on an IP address, but the IP address must be removed from the message before it is stored (e.g. for GDPR compliance).
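A pipeline rule for that GDPR example might look roughly like the sketch below, written in Graylog’s pipeline rule language using its built-in functions `has_field`, `route_to_stream` and `remove_field`. The field name `client_ip` and the stream name are assumptions for illustration:

```
rule "route by IP, then anonymize"
when
  has_field("client_ip")
then
  // route the message first, while the IP is still available...
  route_to_stream(name: "suspicious-logins");
  // ...then drop the field before the message is stored
  remove_field("client_ip");
end
```

Rules like this are attached to pipeline stages, which in turn are connected to streams, so you can control exactly when in the processing chain the field disappears.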

Having implemented Graylog in multiple projects, we would love to assist you on your next project requiring a scalable, centralized and powerful logging application.

The post An Introduction to Graylog first appeared on Global Elements GmbH.

C#, PHP and JS engineer. Doing lots of stuff using TFS, TeamCity, Jira, Docker, Linux. Blogging from my own company blog.



Love Graylog, been running it in K8s for a while now.


We also use it quite a lot. I fell in love with it after we had to use a Graylog instance as a relay to convert UDP to TCP. Nothing easier than that!


It's cheap too.



Quick question: I would like to append additional metadata (information about an object) as scoped fields using Gelf.Logging.Extensions. How would I go about this, and could I possibly add those fields via custom middleware or a custom logger, so that I don't have to add them at every point where I log?

This is my current implementation:

    public async Task HandleAsync(LoginAuditEvent auditEvent)
    {
        if (auditEvent.TraceData == null)
            auditEvent.TraceData = new Dictionary<string, object>();

        using (_logger.BeginScope(auditEvent.TraceData))
        {
            // ...
        }
    }

Would this be okay or should I change my Dictionary definition to allow for objects to be logged?

Thank you


I'm not an expert on the Java implementation of GELF; I usually use GELF in C# or PHP. However, what I know from other implementations (and from GELF itself) is that the format is pretty flexible and can adapt very well to your needs. Depending on the library you use, it might be possible to add some middleware that adds the metadata you want dynamically/automatically, without the Logger object being aware of it.


Is it a good choice for Log analysis after a dev environment is deployed or QA testing is in process?


I'm not quite sure what you mean by "log analysis AFTER", but yes, Graylog is an excellent choice for collecting and analyzing logs at any stage of your application (dev, stage, QA, prod, ...). It is simple, reliable and can be integrated easily into most applications.


I am currently learning the ELK stack and am going to implement it in a virtual environment. After that I will definitely look into Graylog and try to implement it too. Thanks for the suggestion.