Peter Shekindo

How to implement logging in your REST service by using Elasticsearch - PART 2


In part 1 of this series I introduced the ELK stack and explained how it can be used to implement logging in your REST service. You can find the first part on Medium, DEV, and Hashnode. In this part, we will discuss how to install, configure, and use the ELK stack.

For logging purposes, Elasticsearch is paired with two other tools, Kibana and Logstash. Together they form the ELK stack introduced in part one of this article.
To avoid confusion and reading fatigue, I will divide this second part into two sections as well:

Part 2.A: Install and configure Elasticsearch

Part 2.B: Install and configure Kibana and Logstash as well as how to use the ELK stack

[Image: Logging process with the ELK stack]

Part 2.A: Install and configure Elasticsearch

Initially, we need to download and install Elasticsearch, Kibana, and Logstash. In this article we will use APT (Advanced Package Tool), the Debian/Ubuntu package manager, to download and install all three tools. I am working in a Linux environment with the Elastic 7.x package series, which is the one used in the commands below; the exact version may vary depending on your environment and when you install.
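
Before starting, it may help to make sure that curl and HTTPS support for APT are available. This is an optional sanity check, assuming a Debian/Ubuntu system where both packages might already be installed:

```bash
# Install curl and HTTPS transport support for APT (both may already be present)
sudo apt update
sudo apt install curl apt-transport-https
```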

Note:
In this article, we focus on collecting logs that are written to an external file or files. This means your REST service, whatever architecture or framework you use, should already be configured to write its logs to a specific file or files, because that is where the ELK stack will read them from.
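
As a quick sanity check that your service is actually writing to its log file, you can follow the file and watch new entries appear. The path below is purely a hypothetical example; replace it with wherever your service logs:

```bash
# Follow the log file of your REST service and watch new entries arrive
# (the path below is a hypothetical example; use your service's actual log file)
tail -f /var/log/myservice/app.log
```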

Step 1 - Installing Elasticsearch

To begin, we need to add Elastic's package source list to the system's APT sources so that we can download and install Elasticsearch. This source is not configured by default, so we have to add it manually.

Note:
All of the Elastic packages are signed with the Elasticsearch public GPG key to protect your system from package spoofing. Packages authenticated with this key are trusted by your package manager.

a. Open the terminal and use the cURL command-line tool (used for transferring data with URLs) to import the Elasticsearch public GPG key into APT. The -fsSL flags silence progress output and most errors (except server failures) and let cURL follow redirects to a new location:

src="https://carbon.now.sh/embed?bg=rgba%28171%2C+184%2C+195%2C+1%29&t=seti&wt=none&l=application%2Fx-sh&width=680&ds=true&dsyoff=20px&dsblur=68px&wc=true&wa=true&pv=56px&ph=56px&ln=false&fl=1&fm=Hack&fs=14px&lh=133%25&si=false&es=1x&wm=false&code=%2524%2520curl%2520-fsSL%2520https%253A%252F%252Fartifacts.elastic.co%252FGPG-KEY-elasticsearch%2520%257C%2520sudo%2520apt-key%2520add%2520-"
style="width: 824px; height: 210px; border:0; transform: scale(1); overflow:hidden;"
sandbox="allow-scripts allow-same-origin">

b. Add the Elastic source list to the sources.list.d directory, where APT looks for new sources:

src="https://carbon.now.sh/embed?bg=rgba%28171%2C+184%2C+195%2C+1%29&t=seti&wt=none&l=application%2Fx-sh&width=680&ds=true&dsyoff=20px&dsblur=68px&wc=true&wa=true&pv=56px&ph=56px&ln=false&fl=1&fm=Hack&fs=14px&lh=133%25&si=false&es=1x&wm=false&code=%2524%2520echo%2520%2522deb%2520https%253A%252F%252Fartifacts.elastic.co%252Fpackages%252F7.x%252Fapt%2520stable%2520main%2522%2520%257C%2520sudo%2520tee%2520-a%2520%252Fetc%252Fapt%252Fsources.list.d%252Felastic-7.x.list"
style="width: 924px; height: 220px; border:0; transform: scale(1); overflow:hidden;"
sandbox="allow-scripts allow-same-origin">

c. Update your package lists so APT will read the new Elastic source:

src="https://carbon.now.sh/embed?bg=rgba%28171%2C+184%2C+195%2C+1%29&t=seti&wt=none&l=application%2Fx-sh&width=680&ds=true&dsyoff=20px&dsblur=68px&wc=true&wa=true&pv=56px&ph=56px&ln=false&fl=1&fm=Hack&fs=14px&lh=133%25&si=false&es=1x&wm=false&code=%2524%2520sudo%2520apt%2520update"
style="width: 300px; height: 210px; border:0; transform: scale(1); overflow:hidden;"
sandbox="allow-scripts allow-same-origin">

d. Use this command to install Elasticsearch:

src="https://carbon.now.sh/embed?bg=rgba%28171%2C+184%2C+195%2C+1%29&t=seti&wt=none&l=application%2Fx-sh&width=680&ds=true&dsyoff=20px&dsblur=68px&wc=true&wa=true&pv=56px&ph=56px&ln=false&fl=1&fm=Hack&fs=14px&lh=133%25&si=false&es=1x&wm=false&code=%2524%2520sudo%2520apt%2520install%2520elasticsearch"
style="width: 410px; height: 208px; border:0; transform: scale(1); overflow:hidden;"
sandbox="allow-scripts allow-same-origin">

If you have reached this far without any errors, Elasticsearch is now installed and ready to be configured. 🎉

Step 2 - Configuring Elasticsearch

All Elasticsearch configuration goes into the elasticsearch.yml file.

a. Use the following command to open the elasticsearch.yml file:

src="https://carbon.now.sh/embed?bg=rgba%28171%2C+184%2C+195%2C+1%29&t=seti&wt=none&l=application%2Fx-sh&width=680&ds=true&dsyoff=20px&dsblur=68px&wc=true&wa=true&pv=56px&ph=56px&ln=false&fl=1&fm=Hack&fs=14px&lh=133%25&si=false&es=1x&wm=false&code=%2524%2520sudo%2520nano%2520%252Fetc%252Felasticsearch%252Felasticsearch.yml"
style="width: 500px; height: 208px; border:0; transform: scale(1); overflow:hidden;"
sandbox="allow-scripts allow-same-origin">

There is a lot you can configure in Elasticsearch, such as the cluster, node, paths, memory, network, discovery, and gateway settings. Most of these are already preconfigured in the file, but you can change them as you see fit.
For the sake of this tutorial, we will only change the network host setting to control which address Elasticsearch binds to.

Elasticsearch listens for traffic on port 9200. You may want to restrict outside access to your Elasticsearch instance to prevent outsiders from reading your data or shutting down your Elasticsearch cluster through its REST API.

To accomplish this, find the line that specifies network.host, uncomment it, and replace its value with the address you want Elasticsearch to bind to, like this:

src="https://carbon.now.sh/embed?bg=rgba%28171%2C+184%2C+195%2C+1%29&t=seti&wt=none&l=auto&width=680&ds=true&dsyoff=20px&dsblur=68px&wc=true&wa=true&pv=56px&ph=56px&ln=false&fl=1&fm=Hack&fs=14px&lh=133%25&si=false&es=1x&wm=false&code=.%2520.%2520.%250A%2523%2520-----------------Network%2520-----------------%250A%2523%250A%2523%2520Set%2520the%2520bind%2520address%2520to%2520a%2520specific%2520IP%2520%28IPv4%2520or%2520IPv6%29%253A%250A%2523%250Anetwork.host%253A%2520custom%2520IP%2520address%250A.%2520.%2520."
style="width: 600px; height: 330px; border:0; transform: scale(1); overflow:hidden;"
sandbox="allow-scripts allow-same-origin">

b. If you opened the configuration file with nano, save and close it by pressing CTRL+X (or ⌘+X on macOS), followed by Y and then ENTER.

Step 3 - Starting and testing Elasticsearch

We use the systemctl command to start the Elasticsearch service. This lets systemd initialize Elasticsearch properly; otherwise it may run into errors and fail to start.

a. Open a terminal and run the following command:

src="https://carbon.now.sh/embed?bg=rgba%28171%2C+184%2C+195%2C+1%29&t=seti&wt=none&l=application%2Fx-sh&width=680&ds=true&dsyoff=20px&dsblur=68px&wc=true&wa=true&pv=56px&ph=56px&ln=false&fl=1&fm=Hack&fs=14px&lh=133%25&si=false&es=1x&wm=false&code=%2524%2520sudo%2520systemctl%2520start%2520elasticsearch%250A"
style="width: 450px; height: 230px; border:0; transform: scale(1); overflow:hidden;"
sandbox="allow-scripts allow-same-origin">

b. You can also enable Elasticsearch so that it starts automatically on every system boot:

src="https://carbon.now.sh/embed?bg=rgba%28171%2C+184%2C+195%2C+1%29&t=seti&wt=none&l=auto&width=680&ds=true&dsyoff=20px&dsblur=68px&wc=true&wa=true&pv=56px&ph=56px&ln=false&fl=1&fm=Hack&fs=14px&lh=133%25&si=false&es=1x&wm=false&code=%2524%2520sudo%2520systemctl%2520enable%2520elasticsearch%250A"
style="width: 450px; height: 230px; border:0; transform: scale(1); overflow:hidden;"
sandbox="allow-scripts allow-same-origin">

c. Run the following command to test your Elasticsearch installation. Note that in my case Elasticsearch is running on localhost:9200; you may need to adjust the IP:port to match the address your Elasticsearch instance is bound to.

src="https://carbon.now.sh/embed?bg=rgba%28171%2C+184%2C+195%2C+1%29&t=seti&wt=none&l=auto&width=680&ds=true&dsyoff=20px&dsblur=68px&wc=true&wa=true&pv=56px&ph=56px&ln=false&fl=1&fm=Hack&fs=14px&lh=133%25&si=false&es=1x&wm=false&code=%2524%2520curl%2520-X%2520GET%2520%2522localhost%253A9200%2522"
style="width: 400px; height: 210px; border:0; transform: scale(1); overflow:hidden;"
sandbox="allow-scripts allow-same-origin">

If everything went well, you will see a response showing some basic information about your local node, similar to this:

src="https://carbon.now.sh/embed?bg=rgba%28171%2C+184%2C+195%2C+1%29&t=seti&wt=none&l=application%2Fx-sh&width=680&ds=true&dsyoff=20px&dsblur=68px&wc=true&wa=true&pv=56px&ph=56px&ln=false&fl=1&fm=Hack&fs=14px&lh=133%25&si=false&es=1x&wm=false&code=%250A%257B%250A%2520%2520%2522name%2522%2520%253A%2520%2522Elasticsearch%2522%252C%250A%2520%2520%2522cluster_name%2522%2520%253A%2520%2522elasticsearch%2522%252C%250A%2520%2520%2522cluster_uuid%2522%2520%253A%2520%2522qqhFHPigQ9e2lk-a7AvLNQ%2522%252C%250A%2520%2520%2522version%2522%2520%253A%2520%257B%250A%2520%2520%2520%2520%2522number%2522%2520%253A%2520%25227.7.1%2522%252C%250A%2520%2520%2520%2520%2522build_flavor%2522%2520%253A%2520%2522default%2522%252C%250A%2520%2520%2520%2520%2522build_type%2522%2520%253A%2520%2522deb%2522%252C%250A%2520%2520%2520%2520%2522build_hash%2522%2520%253A%2520%2522ef48eb35cf30adf4db14086e8aabd07ef6fb113f%2522%252C%250A%2520%2520%2520%2520%2522build_date%2522%2520%253A%2520%25222020-03-26T06%253A34%253A37.794943Z%2522%252C%250A%2520%2520%2520%2520%2522build_snapshot%2522%2520%253A%2520false%252C%250A%2520%2520%2520%2520%2522lucene_version%2522%2520%253A%2520%25228.5.1%2522%252C%250A%2520%2520%2520%2520%2522minimum_wire_compatibility_version%2522%2520%253A%2520%25226.8.0%2522%252C%250A%2520%2520%2520%2520%2522minimum_index_compatibility_version%2522%2520%253A%2520%25226.0.0-beta1%2522%250A%2520%2520%257D%252C%250A%2520%2520%2522tagline%2522%2520%253A%2520%2522You%2520Know%252C%2520for%2520Search%2522%250A%257D%250A"
style="width: 600px; height: 580px; border:0; transform: scale(1); overflow:hidden;"
sandbox="allow-scripts allow-same-origin">

Now that Elasticsearch is up and running, in the next section, Part 2.B of this series, we will install Kibana and Logstash and test our logging configuration.
