In this blog, we will see how to set up the Grafana stack (Grafana, Loki, and fluent-bit) on Docker alongside a Node.js application.
Introduction to the stack:
The Grafana stack includes Grafana (the admin web portal), Loki (the datastore for logs), and fluent-bit (the log collector). The purpose of fluent-bit is to fetch logs from the origin server, apply filters to them, and send them to the data store. The purpose of Loki is to store logs with indexing and metadata. The purpose of Grafana is to let you analyze, query, and monitor your services through an admin UI running on its own system.
We could also use the ELK (Elasticsearch, Logstash, Kibana) stack for log aggregation and monitoring in a microservices application. Let's check out the differences between the two stacks.
Elasticsearch is a search engine built on Lucene. It stores unstructured data collected from Logstash as JSON documents in its data store, and Kibana lets the user visualize the logs on an admin portal.
Elasticsearch stores data in the following manner: it indexes all the content provided by Logstash and stores the document. This makes the document searchable by every key, and thus it requires more space for storage.
Loki is a log aggregation tool that also stores data as key-value pairs, but it stores labels alongside the log data. Data in Loki is searched using these labels, so it creates far fewer indexes in the storage system, which makes it more storage efficient.
What to use:
In terms of storage efficiency and a small number of log streams, Loki is a good choice: it doesn't use much storage space and can fetch log streams based on labels. But if you have a large dataset, for example logs with metadata and many extra fields, then Loki will take much longer to fetch log streams, as it creates multiple labels and therefore multiple log streams for your data. In that case, Elasticsearch can be a better option for storing this kind of log data, as it creates indexes for all the fields you store in the data store.
Let's begin with the integration of the Grafana stack with docker.
Here, we're using a custom fluent-bit image, since Grafana provides its own fluent-bit image for integration with Loki. You can check out the documentation here.
Create a fluent-bit folder at the root of your project, and in that folder create a Dockerfile:

```dockerfile
# Base image (assumed): Grafana's fluent-bit image with the Loki output plugin
FROM grafana/fluent-bit-plugin-loki:latest
COPY ./conf/fluent.conf /fluent-bit/etc/fluent-bit.conf
```
We're passing the configuration to the docker image. Create the configuration inside the fluent-bit folder: add a conf folder, and in that folder create the file fluent.conf. We're passing LOKI_URL here as an environment variable.
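A minimal fluent.conf might look like the following sketch. The forward input port 24224 is the default address the docker fluentd log driver sends to, and the label value is an assumption you can change:

```conf
[INPUT]
    Name        forward
    Listen      0.0.0.0
    Port        24224

[OUTPUT]
    Name        loki
    Match       *
    Url         ${LOKI_URL}
    Labels      {job="fluent-bit"}
    LineFormat  json
```

The Loki output plugin reads the push endpoint from Url, attaches the configured Labels to every stream, and ships each record as a JSON line.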
To update the logging in the docker container, update the docker-compose configuration for the server in your docker-compose file.
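As a sketch, the logging section for one service might look like this; the service name order-service and the build path are assumptions for illustration:

```yaml
services:
  order-service:
    build: ./order-service
    logging:
      driver: fluentd
      options:
        fluentd-address: localhost:24224
        fluentd-async: "true"
        tag: order-service
```

Note that fluentd-address is resolved by the docker daemon on the host, not inside the container, so the fluent-bit container must publish port 24224 on the host.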
Here, in the docker-compose configuration, the driver used is fluentd, as fluent-bit is also part of the fluentd ecosystem. In the options, fluentd-async: true makes an asynchronous connection to the fluent-bit instance in the docker environment, so the order service can start in docker without errors even if fluent-bit isn't up yet. The tag is used in the Grafana UI to identify each service.
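On the application side nothing special is needed: the fluentd log driver captures whatever the container writes to stdout. A minimal sketch of a structured logger for the Node.js service (the function and field names are hypothetical, not part of any library):

```javascript
// Minimal structured logger: the fluentd docker log driver ships anything
// written to stdout to fluent-bit, so the service just prints JSON lines.
function log(level, message, fields = {}) {
  const entry = {
    level,
    message,
    time: new Date().toISOString(),
    ...fields,
  };
  console.log(JSON.stringify(entry));
  return entry;
}

log('info', 'order created', { orderId: 'ord-1' });
log('error', 'payment failed', { orderId: 'ord-2' });
```

Logging JSON lines (rather than free-form text) pairs well with the LineFormat json setting in fluent-bit, and lets you filter on fields later in Grafana.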
To start our microservice application, run the following command in the terminal; it will start all the docker containers together:

```shell
docker-compose up
```
As per the docker-compose configuration file, the Grafana instance should be running on port 3000. Open http://localhost:3000 in any browser. Use the default username admin and password admin to access the portal; it will also ask you to set a preferred password after login.
After login, it will show the dashboard screen.
Click on add a data source.
Click on Loki and enter the URL http://loki:3100. Since Loki is running in docker, we can use the docker container name to access the service.
Click save & test; it will save the data source in the Grafana admin. Then click explore to open the query dashboard.
Run a query to show logs for every job available in the Grafana stack. If you're looking for the logs of a specific service, you can query them using its label.
To learn more about the query language, you can check out the documentation provided by Grafana on LogQL.
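For example, here are two simple LogQL queries; the job label value is a hypothetical service tag, so substitute your own:

```
{job="order-service"}
{job="order-service"} |= "error"
```

The first selects every log line in that stream; the second adds a line filter so only lines containing the word error are returned.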
Adding centralized logging makes a microservice application easier to debug and monitor, as we have a single UI for all services. Centralized logging becomes even more beneficial as the number of services grows and monitoring them together on a cloud platform gets difficult. Choosing the right logging stack makes a huge difference in the debugging process when a production application crashes.
Thanks for reading this. If you've any queries, feel free to email me at firstname.lastname@example.org.
Until next time!