Hored Otniel


Log centralization and security alert with ELK (Part 2)


Assume that your information system relies on an enormous cloud infrastructure with lots of servers. As a cybersecurity engineer, it is your duty to put in place a set of processes to detect suspicious activity on those servers, isn't it?

If you know exactly what you are looking for, it is pretty easy to head straight to the logs, inspect them and pull those things out. But in this specific case, where you have loads of logs spread across different files, you obviously need a log management system and, most importantly, a tool that alerts you to the slightest incident on your servers. This is where ELK and ElastAlert 2 come in.

ELK Stack

In the first part of this series, we deployed an ELK cluster and installed Beats on the servers to monitor. You should read that part first if you haven't already, because it lays the groundwork for what happens in this second part.

Elastalert

ElastAlert is a simple framework for alerting on anomalies, spikes, or other patterns of interest from data in Elasticsearch. It lets you write your own rules for the alerts you want.

ElastAlert queries Elasticsearch and provides an alerting mechanism with multiple output types, such as Slack, Email, JIRA, OpsGenie, among others.

We are going to use ElastAlert 2 here, because the initial project is no longer maintained.

We will use the cluster we set up in the first part.

Installation

Note that there are several ways to deploy ElastAlert. You can use a Docker container, but I preferred the Python package option.

git clone https://github.com/jertel/elastalert2.git

Install the module:

pip install "setuptools>=11.3"
python setup.py install
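
As an alternative to installing from source, ElastAlert 2 also publishes releases on PyPI, so in most environments the following should be enough:

pip install elastalert2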

With that done, remember that Elasticsearch works with indices, and it is important to create the index where ElastAlert will store its data. The tool ships with a command for exactly that purpose.

elastalert-create-index

You will need to provide some information such as the host, the port, the username, the password, and a few other optional settings.

(Screenshot: index creation output)

Great! You have just installed a very powerful tool for your security alerts. But keep in mind that in this article we will cover only a few basic rules.

Before going any further, you must choose and configure the channel through which you want to receive your alerts. Here I chose to use Slack.

Configuring Slack

You need to create a webhook for your Slack channel.

For this, you can follow Slack's documentation: https://slack.com/help/articles/115005265063-Incoming-webhooks-for-Slack

We first need to create an app in Slack.

(Screenshot: creating the Slack app)

Then add Incoming Webhooks as a feature and connect it to the workspace you want to use. After that, copy the webhook URL; you will need it shortly.

(Screenshot: the incoming webhook URL)
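
If you want to make sure the webhook works before wiring it into ElastAlert, you can send a quick test message with curl (the URL below is a placeholder, use the one you copied):

curl -X POST -H 'Content-type: application/json' \
  --data '{"text": "Test message from ElastAlert setup"}' \
  https://hooks.slack.com/services/XXX/YYY/ZZZ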

Once that is done, we can move on to writing our first rule.

First rule

To test the tool, we will use the rule examples/rules/example_frequency.yaml as a template.
Before going on, let's talk about the basic options you need to understand.
As explained in the documentation:

  • es_host and es_port should point to the Elasticsearch cluster we want to query.

  • name attribute must be unique. ElastAlert 2 will not start if two rules share the same name.

  • type: Each rule has a different type which may take different parameters. The frequency type means “Alert when more than num_events occur within timeframe.”

  • num_events: This parameter is specific to frequency type and is the threshold for when an alert is triggered.

  • timeframe is the time period in which num_events must occur.

  • filter is a list of Elasticsearch filters that are used to filter results. Here we have a single term filter for documents with some_field matching some_value.

  • alert is a list of alerts to run on each match.

That is the most basic information you need to understand for this first rule.
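
To make these options concrete, here is a minimal sketch of a frequency rule built from them (the index pattern, field, value and email address are placeholders to adapt to your own data):

name: example-frequency-rule
type: frequency
index: logstash-*
num_events: 50
timeframe:
  hours: 4
filter:
- term:
    some_field: "some_value"
alert:
- "email"
email:
- "you@example.com"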

What we will do is create an alert for every command that has been executed as root.

To do this, let's check some information on our dashboard.

Remember that in the first part we deployed Filebeat to collect information such as system logs. To write our rule, look at this dashboard:

(Screenshot: Filebeat system dashboard)

We can see that in the last 15 minutes a command was executed with sudo.

This first alert will therefore consist of sending a message to our Slack app as soon as such a command is executed. If you have followed the explanations of the options above, we are going to put:

type: frequency

index: filebeat-*

num_events: 1

timeframe:
  minutes: 1

filter:
- query:
    query_string:
      query: "system.auth.sudo.command: *"

We now read from the indices prefixed with filebeat, and the rule matches as soon as the event occurs at least once within a minute. For the event in question, we use the fields that Filebeat provides to perform the search. On the dashboard shown earlier you probably noticed the system.auth.sudo.command field, so it was relatively simple to write this filter.

The other very important part is the alert itself:

realert:
  minutes: 1

query_key:
  - host.ip

include:
  - host.hostname
  - user.name
  - host.ip

include_match_in_root: true

alert_subject: "sudo command on <{}>"
alert_subject_args:
  - host.hostname

alert_text: |-
  A command was executed as root on {}.
  Informations:
  User: {}
  IP: {}
alert_text_args:
  - host.hostname
  - user.name
  - host.ip

Note the importance of realert here: the time defined is the minimum delay before the same rule fires again, so it controls how often alerts are sent. The other options pull fields present in the Filebeat indices into the alert.

The last part is the configuration of the alerting output, Slack in our case:

alert:
  - slack:
      slack_webhook_url: "your_url"
      slack_username_override: "your_username"
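
Keep in mind that all the snippets above (the query, the alert formatting and the Slack output) live together in a single rule file, along with the mandatory name attribute mentioned earlier. The top of the rule file therefore looks roughly like this:

name: sudo-command-alert
type: frequency
index: filebeat-*
num_events: 1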

You will also need to configure the config.yaml file in the examples subdirectory. Once that is done, we'll test our rule:

elastalert-test-rule --config examples/config.yaml examples/rules/example_frequency.yaml --alert

And voilà!

(Screenshot: the alert received in Slack)

As you can see, we received a message in the channel. I hid sensitive information, but you can see the title and body formatted as defined in our alert.

Automating

As you may have noticed, we just tested our rule with the elastalert-test-rule command. But as you can guess, to secure a real information system, alerts must reach you in real time without you needing to enter a command each time.

So to automate the alert process, we'll go back to our config.yaml file.

The method I use is quite simple. In this file you can indicate the folder where ElastAlert will look for rules with the rules_folder option. You can also set other options that will be applied by default to all rules; for example, es_host, es_port, es_username and es_password are fixed values.
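
As a reference, a minimal config.yaml could look like the sketch below (paths and credentials are placeholders; run_every controls how often ElastAlert queries Elasticsearch):

rules_folder: /opt/elastalert/rules
run_every:
  minutes: 1
buffer_time:
  minutes: 15
es_host: localhost
es_port: 9200
es_username: elastic
es_password: "your_password"
writeback_index: elastalert_status
alert_time_limit:
  days: 2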

Once this file is prepared and your rules are written, all you have to do is set up a cron job that keeps ElastAlert running so it sends you alerts at your convenience.

5 4 5 10 5 /usr/local/bin/elastalert --config /opt/elastalert/config/config.yaml --verbose


I chose the schedule of the job quite arbitrarily, but you get the idea 😎️. This way your alerts will reach you all the time.
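
Since elastalert is a long-running process that re-queries Elasticsearch on its own run_every schedule, another simple option is to start it once at boot instead of on a date-based schedule, for example:

@reboot /usr/local/bin/elastalert --config /opt/elastalert/config/config.yaml --verbose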

Use cases

Here we covered sudo commands, but below is a non-exhaustive list of rules you can set up using the appropriate Beat indices (Filebeat, Metricbeat...); a sketch of one of them follows the list.

  • Disk space alert
  • ssh connections alert
  • Alert on uptime (heartbeat)
  • ...
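
For example, a rule alerting on repeated failed SSH logins could be sketched like this (the field name assumes Filebeat's system module; adjust it to what your own indices actually contain):

name: ssh-failed-logins
type: frequency
index: filebeat-*
num_events: 5
timeframe:
  minutes: 5
filter:
- query:
    query_string:
      query: "system.auth.ssh.event: Failed"
alert:
  - slack:
      slack_webhook_url: "your_url"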

Elastalert is a very rich tool in terms of possibilities. Be imaginative and have fun with it.
