Prerequisites
Make sure Java 8 is installed, and set JAVA_HOME in /etc/default/elasticsearch so the Elasticsearch service can find it.
Installing Elasticsearch
Before starting, you will need to import the Elasticsearch public GPG key into apt. You can do this with the following command:
wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
Then, you will need to add Elastic's package source list to apt.
To do this open the sources.list file:
sudo nano /etc/apt/sources.list
Add the following line:
deb http://packages.elastic.co/elasticsearch/2.x/debian stable main
Save the file and update the repository with the following command:
sudo apt-get update
Now, install Elasticsearch with the following command:
sudo apt-get -y install elasticsearch
Once Elasticsearch is installed, you will need to restrict outside access to the Elasticsearch instance. You can do this by editing the elasticsearch.yml file:
sudo nano /etc/elasticsearch/elasticsearch.yml
Find the network.host line and replace its value (192.168.0.1 in the shipped file) with localhost.
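After the edit, the relevant line in elasticsearch.yml should look like this:

```yaml
network.host: localhost
```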
Save the file and start the Elasticsearch service:
sudo /etc/init.d/elasticsearch start
Next, enable the Elasticsearch service to start at boot with the following command:
sudo update-rc.d elasticsearch defaults
Now that Elasticsearch is up and running, it's time to test it.
You can test elasticsearch with the following curl command:
curl localhost:9200
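If Elasticsearch is running, the response is a short JSON document along these lines (a sketch; the node name, build details, and exact version number will differ on your system):

```json
{
  "name" : "Warlock",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.3.4",
    "build_hash" : "...",
    "build_timestamp" : "...",
    "build_snapshot" : false,
    "lucene_version" : "5.5.0"
  },
  "tagline" : "You Know, for Search"
}
```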
Installing Logstash
By default, Logstash is not available in the Ubuntu repository, so you will need to add the Logstash source list to apt:
sudo nano /etc/apt/sources.list
Add the following line:
deb http://packages.elastic.co/logstash/2.2/debian stable main
Save the file and update the repository:
sudo apt-get update
Now, install Logstash with the following command:
sudo apt-get install logstash
Configure Logstash
Once Logstash is installed, you will need to create its configuration files in the /etc/logstash/conf.d directory. The configuration consists of three parts: inputs, filters, and outputs.
Before configuring Logstash, create directories for storing the certificate and key for Logstash:
sudo mkdir -p /etc/pki/tls/certs
sudo mkdir /etc/pki/tls/private
Next, add the IP address of the ELK server to the OpenSSL configuration file:
sudo nano /etc/ssl/openssl.cnf
Find the section [ v3_ca ] and add the following line:
subjectAltName = IP: 192.168.1.7
Here 192.168.1.7 is your ELK server's IP address. Save the file, then generate the SSL certificate by running the following commands:
cd /etc/pki/tls
sudo openssl req -config /etc/ssl/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/filebeat.key -out certs/filebeat.crt
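As a sanity check, you can confirm that the subjectAltName actually made it into the certificate. The sketch below reproduces the same openssl invocation with a minimal self-contained config under a temporary directory (the paths and the CN value are for illustration only; it assumes openssl is installed), then inspects the result:

```shell
# Sketch: generate a cert the same way the tutorial does, using a minimal
# standalone OpenSSL config, then verify the IP subjectAltName is present.
dir=$(mktemp -d)
cat > "$dir/openssl.cnf" <<'EOF'
[ req ]
distinguished_name = dn
x509_extensions    = v3_ca
prompt             = no
[ dn ]
CN = elk-server
[ v3_ca ]
subjectAltName = IP: 192.168.1.7
EOF
openssl req -config "$dir/openssl.cnf" -x509 -days 3650 -batch -nodes \
  -newkey rsa:2048 -keyout "$dir/filebeat.key" -out "$dir/filebeat.crt"
# Print the SAN section; it should contain "IP Address:192.168.1.7"
openssl x509 -in "$dir/filebeat.crt" -noout -text | grep "IP Address"
```

If the grep prints nothing, the [ v3_ca ] edit did not take effect and clients will fail TLS verification against the certificate.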
Note that you will need to copy this certificate to every client whose logs you want to send to the ELK server.
Now, create the filebeat input configuration file with the following command:
sudo nano /etc/logstash/conf.d/beats-input.conf
Add the following lines:
input {
beats {
port => 5044
type => "logs"
ssl => true
ssl_certificate => "/etc/pki/tls/certs/filebeat.crt"
ssl_key => "/etc/pki/tls/private/filebeat.key"
}
}
Next, create logstash filters config file:
sudo nano /etc/logstash/conf.d/syslog-filter.conf
Add the following lines:
filter {
if [type] == "syslog" {
grok {
match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
add_field => [ "received_at", "%{@timestamp}" ]
add_field => [ "received_from", "%{host}" ]
}
syslog_pri { }
date {
match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
}
}
}
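To illustrate what this filter does: a syslog line such as `Feb 23 10:15:01 web1 CRON[1234]: session opened for user root` would be broken into fields roughly like the following sketch (the filter also adds received_at, received_from, and the syslog_pri fields):

```
syslog_timestamp: Feb 23 10:15:01
syslog_hostname:  web1
syslog_program:   CRON
syslog_pid:       1234
syslog_message:   session opened for user root
```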
Last, create logstash outputs config file:
sudo nano /etc/logstash/conf.d/output.conf
Add the following lines:
output {
elasticsearch {
hosts => ["localhost:9200"]
}
stdout { codec => rubydebug }
}
Save the file.
Edit the file /etc/default/logstash so that Logstash uses your Java 8 installation (adjust the path to wherever your JDK is installed):
JAVACMD=/home/djomkam/Desktop/jdk1.8.0_162/bin/java
export JAVACMD
Test your Logstash configuration with the following command:
sudo service logstash configtest
The output will display Configuration OK if there are no errors. Otherwise, check the logstash log to troubleshoot problems.
Next, restart logstash service and enable logstash service to run automatically at bootup:
sudo /etc/init.d/logstash restart
sudo update-rc.d logstash defaults
Installing Kibana
To install Kibana, you will need to add Elastic's package source list to apt.
You can create kibana source list file with the following command:
echo "deb http://packages.elastic.co/kibana/4.4/debian stable main" | sudo tee -a /etc/apt/sources.list.d/kibana-4.4.x.list
Next, update the apt repository with the following command:
sudo apt-get update
Finally, install Kibana by running the following command:
sudo apt-get -y install kibana
Once Kibana is installed, you will need to configure it. You can do this by editing its configuration file:
sudo nano /opt/kibana/config/kibana.yml
Change the following lines:
server.port: 5601
server.host: localhost
Now, start the kibana service and enable it to start at boot:
sudo /etc/init.d/kibana start
sudo update-rc.d kibana defaults
You can verify whether kibana is running or not with the following command:
netstat -pltn
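If Kibana is running, the output should include a line for port 5601 similar to the following (the PID will differ):

```
tcp   0   0 127.0.0.1:5601   0.0.0.0:*   LISTEN   1234/node
```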
Next, you will need to download the sample Kibana dashboards and Beats index patterns. You can download the sample dashboards with the following command:
curl -L -O https://download.elastic.co/beats/dashboards/beats-dashboards-1.1.0.zip
Once download is complete, unzip the downloaded file with the following command:
unzip beats-dashboards-1.1.0.zip
Now, load the sample dashboards, visualizations and Beats index patterns into Elasticsearch by running the following command:
cd beats-dashboards-1.1.0
./load.sh
You will find the following index patterns in the Kibana dashboard:
packetbeat-*
topbeat-*
filebeat-*
winlogbeat-*
Here, we will use only Filebeat to forward logs to Elasticsearch, so we will load a Filebeat index template into Elasticsearch.
To do this, download the filebeat index template.
curl -O https://gist.githubusercontent.com/thisismitch/3429023e8438cc25b86c/raw/d8c479e2a1adcea8b1fe86570e42abab0f10f364/filebeat-index-template.json
Now load the template by running the following command:
curl -XPUT 'http://localhost:9200/_template/filebeat?pretty' -d@filebeat-index-template.json
If the template loaded properly, you should see the following output:
{
"acknowledged" : true
}
Installing Nginx
You will also need to install Nginx to set up a reverse proxy, because Kibana is configured to listen only on localhost and cannot be reached from outside directly.
To install Nginx, run the following command:
sudo apt-get install nginx
You will also need to install apache2-utils for htpasswd utility:
sudo apt-get install apache2-utils
Now, create an admin user to access the Kibana web interface using the htpasswd utility:
sudo htpasswd -c /etc/nginx/htpasswd.users admin
Enter a password of your choice; you will need it to access the Kibana web interface.
Next, open Nginx default configuration file:
sudo nano /etc/nginx/sites-available/default
Delete all the lines and add the following lines:
server {
listen 80;
server_name 192.168.1.7;
auth_basic "Restricted Access";
auth_basic_user_file /etc/nginx/htpasswd.users;
location / {
proxy_pass http://localhost:5601;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
Save and exit the file. Nginx now directs your server's traffic to the Kibana server, which is listening on localhost:5601. Now restart Nginx service and enable it to start at boot:
sudo systemctl start nginx
sudo systemctl enable nginx
Now, Kibana is accessible via the public IP address of your ELK server, and the server is ready to receive Filebeat data. It's time to set up Filebeat on each client server.
Setup Filebeat on the Client Server
You will need to set up Filebeat on each Ubuntu server whose logs you want to send to Logstash on your ELK server.
Before setting up filebeat on the client server, you will need to copy the SSL certificate from ELK server to your client server.
On the ELK server, run the following command to copy SSL certificate to client server:
scp /etc/pki/tls/certs/filebeat.crt user@client-server-ip:/tmp/
Here user is a username on the client server and client-server-ip is the IP address of the client server.
Now, on client server, copy ELK server's SSL certificate into appropriate location:
First, create directory structure for SSL certificate:
sudo mkdir -p /etc/pki/tls/certs/
Then, copy certificate into it:
sudo cp /tmp/filebeat.crt /etc/pki/tls/certs/
Now, it's time to install the filebeat package on the client server.
To install Filebeat, you will need to create a source list for it. You can do this with the following command:
echo "deb https://packages.elastic.co/beats/apt stable main" | sudo tee -a /etc/apt/sources.list.d/beats.list
Then, add the GPG key with the following command:
wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
Next, update the repository with the following command:
sudo apt-get update
Finally install filebeat by running the following command:
sudo apt-get install filebeat
Once filebeat is installed, start filebeat service and enable it to start at boot:
sudo /etc/init.d/filebeat start
sudo update-rc.d filebeat defaults
Next, you will need to configure Filebeat to connect to Logstash on our ELK Server. You can do this by editing the Filebeat configuration file located at /etc/filebeat/filebeat.yml.
sudo nano /etc/filebeat/filebeat.yml
Change the file as shown below:
filebeat:
  prospectors:
    -
      paths:
        - /var/log/auth.log
        - /var/log/syslog
      #  - /var/log/*.log
      input_type: log
      document_type: syslog
  registry_file: /var/lib/filebeat/registry
output:
  logstash:
    hosts: ["192.168.1.7:5044"]
    bulk_max_size: 1024
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/filebeat.crt"]
shipper:
logging:
  files:
    rotateeverybytes: 10485760 # = 10MB
Save the file and restart filebeat service:
sudo /etc/init.d/filebeat restart
Now Filebeat is sending syslog and auth.log to Logstash on your ELK server.
Once everything is set up, you will need to test whether Filebeat on your client server is shipping logs to Logstash on your ELK server.
To do this, run the following command on your ELK server:
curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'
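If Filebeat data is arriving, the response contains hits with your client's log lines. A trimmed sketch of what to expect (counts, index date, and field values will differ on your system):

```json
{
  "hits" : {
    "total" : 521,
    "hits" : [ {
      "_index" : "filebeat-2016.02.23",
      "_type" : "syslog",
      "_source" : {
        "message" : "session opened for user root",
        "beat" : { "hostname" : "client1" }
      }
    } ]
  }
}
```

If total is 0, no data is being received; recheck the Filebeat and Logstash configuration on both servers.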
You can also test filebeat by running the following command on Client Server:
sudo filebeat -c /etc/filebeat/filebeat.yml -e -v
Allow ELK Through Your Firewall
Next, you will need to configure your firewall to allow traffic to the following ports. You can do this by running the following command:
sudo iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 5601 -j ACCEPT
sudo iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 9200 -j ACCEPT
sudo iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
sudo iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 5044 -j ACCEPT
Now, make the rules persistent across reboots. Ubuntu does not ship an iptables service, so install the iptables-persistent package instead; during installation it will offer to save the current rules:
sudo apt-get install iptables-persistent
Access the Kibana Web Interface
When everything is set up, it's time to access the Kibana web interface.
On a client computer, open your web browser and go to http://your-elk-server-ip. Enter the admin credentials you created earlier and you will be taken to the Kibana welcome page.
Then, select the filebeat-* index pattern in the top left sidebar; you should see log entries arriving from your client server.