
Rahul Kumar


Monitor Your Application with Elastic stack

What is ELK?

With the rise of microservices in large enterprises, we need to monitor our applications so that we can analyze how they behave and ship better, well-tested releases. The ELK stack helps us achieve exactly that kind of functionality.
It is a set of tools used to analyze data and extract meaningful observations about how the source system is working:
first we collect the data, then we store it, and finally we extract the meaningful observations from it.

What are the components of ELK:

  • Elasticsearch: used for storing and searching the collected data.
  • Logstash: used for collecting and filtering the input data.
  • Kibana: provides a graphical user interface.
  • Beats: multiple lightweight data collectors.

Elasticsearch:

Elasticsearch is a NoSQL database built on top of the Apache Lucene search engine. It can be used to index and store many different types of documents and data, and it lets you search that data in near real-time as it is being fed in.
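As a quick illustration, here is a minimal sketch of indexing and then searching a document with curl against a local node (the app-logs index name and the document fields are made-up examples, not something the stack creates for you):

$curl -X POST "localhost:9200/app-logs/_doc?pretty" -H 'Content-Type: application/json' -d '{"service": "petclinic", "level": "ERROR", "message": "connection timed out"}'

$curl -X GET "localhost:9200/app-logs/_search?q=level:ERROR&pretty"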

Logstash:

Logstash is a collection agent used to gather both structured and unstructured data from various sources. It can screen, break down, and transform the data it collects. After it has collected and filtered the data, it sends it on to Elasticsearch to be indexed and searched.
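If you just want to see how a pipeline behaves before wiring up real logs, a throwaway pipeline that reads from the console and prints the structured event back out works well (a minimal sketch, assuming the default .deb install path for the Logstash binary):

$sudo /usr/share/logstash/bin/logstash -e 'input { stdin { } } output { stdout { codec => rubydebug } }'

Type a line of text and Logstash prints the parsed event it would otherwise have shipped to Elasticsearch.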

Kibana:

Kibana is a graphical user interface used to visualize and explore the data that is stored in Elasticsearch.
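By default Kibana listens on port 5601 and reads from a local Elasticsearch node. The relevant settings live in /etc/kibana/kibana.yml; the values below are the defaults this walkthrough relies on, shown only for reference:

server.port: 5601                               # Kibana's own port; Nginx will proxy to it later
server.host: "localhost"                        # keep Kibana bound locally, expose it through Nginx
elasticsearch.hosts: ["http://localhost:9200"]  # where Kibana reads its data from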

Beats:

Beats are similar to Logstash in that both collect data that will later be stored and analyzed, but Beats are multiple small, lightweight shippers installed on the individual servers, from where they collect data and send it to Elasticsearch.
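As an example, a minimal Filebeat configuration (in /etc/filebeat/filebeat.yml) could ship a single log file straight to Elasticsearch; the application log path below is purely hypothetical, and in the hands-on section we feed data through Logstash instead:

filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/myapp/*.log        # hypothetical application log path

output.elasticsearch:
  hosts: ["localhost:9200"]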

Hands on:

Install Elasticsearch:

$sudo wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.9.0-amd64.deb

$sudo dpkg -i elasticsearch-7.9.0-amd64.deb

Install Logstash:

$sudo wget https://artifacts.elastic.co/downloads/logstash/logstash-7.9.0.deb

$sudo dpkg -i logstash-7.9.0.deb

Install Kibana:

$sudo wget https://artifacts.elastic.co/downloads/kibana/kibana-7.9.0-amd64.deb

$sudo dpkg -i kibana-7.9.0-amd64.deb

Install Filebeat:

$sudo wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.9.0-amd64.deb

$sudo dpkg -i filebeat-7.9.0-amd64.deb
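The .deb packages register systemd units but on most setups do not start them, so let's enable and start Elasticsearch and Kibana, then confirm Elasticsearch answers on port 9200:

$sudo systemctl enable --now elasticsearch

$sudo systemctl enable --now kibana

$curl -X GET "localhost:9200"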

Next we have to generate a password for the admin user so that they can log in to the Kibana dashboard.

For that we need Nginx (to act as a reverse proxy in front of Kibana) and the apache2-utils package, which provides the htpasswd tool:

$sudo apt install nginx

$sudo apt install apache2-utils

Create the password file that our web server will use to authenticate users (the -c flag creates the file):

$sudo htpasswd -c /etc/nginx/htpasswd.users kibadmin

$sudo nano /etc/nginx/sites-available/default

server {
       listen 80;
       server_name your_instance_public_ip;   # replace with your instance's public IP or DNS name
       auth_basic "Restricted Access";
       auth_basic_user_file /etc/nginx/htpasswd.users;
       location / {
            proxy_pass http://localhost:5601;  # forward all requests to Kibana
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
       }
}
  • Now that we have configured Nginx, let's start the service.

$sudo systemctl start nginx
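If the dashboard does not load in the next step, a quick way to spot mistakes is to validate the Nginx configuration and check the service status:

$sudo nginx -t

$sudo systemctl status nginx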

  • Open the browser and go to:

http://{your instance IP address}

  • After a successful login with the credentials you created, you will see the Kibana dashboard.

  • Now let's download some sample data

$sudo wget https://logz.io/sample-data

$sudo cp sample-data apache.log

$cd /etc/logstash/conf.d

  • Create a new file for the pipeline. Generally the support engineers will tell us how the data is going to be ingested.

$sudo touch apachelog.conf

$sudo nano apachelog.conf

  • A pipeline mainly consists of three things:
  • Input - the source of the data
  • Filter - which kind of data should not be sent further, and how to parse it
  • Output - where we want to send the processed data
input {
  file {
    path => "/home/ubuntu/apache.log"   # location of the log file
    start_position => "beginning"
    sincedb_path => "/dev/null"         # don't track the read position, always re-read from the start
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
  geoip {
    source => "clientip"
  }
}
output {
  # the output is stored in Elasticsearch; in Kibana you then create an index
  # pattern (for example petclinic-prd*) before you can explore the data
  elasticsearch {
    hosts => [ "localhost:9200" ]
    index => "petclinic-prd-1"
  }
}

$sudo systemctl start logstash
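Logstash may take a little while to chew through the file. You can confirm that the petclinic-prd-1 index was actually created by asking Elasticsearch to list its indices:

$curl -X GET "localhost:9200/_cat/indices?v"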

Finally we can access our custom data: in Kibana, create an index pattern named petclinic-prd-1 (or petclinic-prd*), select the timestamp as the time field, and then open the Discover section.

  • Now you can explore the data on your own in the Kibana dashboard.
