Securing Elastic Stack

Kiya Abdulahi

Introduction

This is part two of an Elastic Stack series I'm writing. Be sure to check out part one to get your dockerized instance of Elasticsearch running.

X-Pack Security

Now that we have our dockerized Elastic Stack running, we need to add some security. If we deploy it to production as is, both our elasticsearch and kibana containers will be accessible to anyone via ports 9200 and 5601, leaving our data exposed 😳 This is where X-Pack comes in!

X-Pack is an Elastic Stack extension that provides security. The best thing about X-Pack is that it's free (for some features) and comes pre-installed. We will take advantage of X-Pack's encrypted communications and role-based authentication.
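To see the problem for yourself, query the cluster while the containers from part one are still running and before any security is enabled; Elasticsearch answers anyone who asks, no credentials required:

curl http://localhost:9200

It happily responds with the cluster name, version, and tagline, and the same openness applies to any data you index.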

Transport Layer Security

The first thing we are going to do is encrypt all traffic to, from, and within our Elasticsearch cluster by enabling Transport Layer Security (TLS).

The TLS protocol aims primarily to provide privacy and data integrity between two or more communicating computer applications. — Wikipedia

You may be more familiar with Secure Sockets Layer (SSL), which served the same purpose but is now deprecated because it's no longer considered sufficiently secure. More on that here.

In order to enable TLS, we will first need to use elasticsearch-certutil to create:

  • A Certificate Authority (CA):

In cryptography, a certificate authority or certification authority (CA) is an entity that issues digital certificates. - Wikipedia

  • A certificate:

A digital certificate certifies the ownership of a public key by the named subject of the certificate. — Wikipedia

elasticsearch-certutil

In order to generate our CA and certificate, we need to get inside our Elasticsearch docker container. While our docker containers are running, open up a new tab in your favorite terminal, cd into the root directory of the elastic-stack project, and run the following:

docker-compose exec elasticsearch bash

This allows us to get inside our elasticsearch docker container. This is where we will generate our CA by executing the following:

bin/elasticsearch-certutil ca

We will see five informational warnings, which can safely be ignored. Below the warnings, we will see a message from elasticsearch-certutil describing what we are about to execute. It will also ask us to:

Please enter the desired output file [elastic-stack-ca.p12]:
Enter password for elastic-stack-ca.p12 :

Go ahead and press enter at both prompts to accept the default file name and an empty password.

We will now create our certificate by executing the following:

bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12

We will again see several informational warnings alongside a message from elasticsearch-certutil letting us know what we are trying to do. It will ask us to:

Enter password for CA (elastic-stack-ca.p12):
Please enter the desired output file [elastic-certificates.p12]:
Enter password for elastic-certificates.p12 :

Go ahead and press enter at all three prompts to accept the defaults.

Boom! Just like that, we've successfully created a CA and a certificate. If we now run ls in our elasticsearch container, we should see the following two files (see the quick check after this list):

  • elastic-stack-ca.p12:

This file is a PKCS#12 key-store that contains the public certificate for your CA and the private key that is used to sign the certificates for each node. - elastic

  • elastic-certificates.p12:

A single PKCS#12 key-store that includes the node certificate, node key, and CA certificate. - elastic
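Both files are written to Elasticsearch's home directory, which is also the working directory of our shell inside the container, so the check looks like this:

ls -l /usr/share/elasticsearch/elastic-*.p12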

Moving the CA and Certificate

Right now the CA and certificate live only in our elasticsearch container; if we bring the container down, they will be gone. We do not want this, so we need to copy the files from our elasticsearch container to our host machine. We can achieve this as follows.

First, we need to exit the elasticsearch container (without bringing it down) by pressing Ctrl+D or typing the following in the terminal tab where we created the CA and certificate:

exit

Next, we need to run the following command in the root of our elastic-stack directory:

docker cp "$(docker-compose ps -q elasticsearch)":/usr/share/elasticsearch/elastic-certificates.p12 .

and

docker cp "$(docker-compose ps -q elasticsearch)":/usr/share/elasticsearch/elastic-stack-ca.p12 .

So what just happened?

  • docker cp copies the contents of source_path to the destination_path
  • Our source path in this case is in our elasticsearch container and we reference that with:
    • "$(docker-compose ps -q elasticsearch)":/usr/share/elasticsearch/elastic-stack-ca.p12
  • Our destination path is the root of our elastic-stack directory, which we reference with the trailing period (.)
  • Learn more about docker cp here

Bind mount certificate from host to container

Bind mounts allow a file or directory on the host machine to be mounted into a docker container by its path. Not to be confused with volumes, which are fully managed by docker. To learn more, check out this page and this page.


To bind mount the certificate, let's add the following line to our docker-compose.yml file under the volumes section of elasticsearch:

- ./elastic-certificates.p12:/usr/share/elasticsearch/config/elastic-certificates.p12

Dedicated Volumes

An important step is to create dedicated data directories for our docker containers. This way, data created within our elasticsearch container survives after we bring the container down. Let's do this by running the following in our elastic-stack directory:

mkdir -p data-volumes/elasticsearch

Next, let's bind mount our newly created data-volumes directory from our host machine to our elasticsearch container by adding the following to our docker-compose.yml file under the volumes section of elasticsearch (the combined section is sketched below):

- ./data-volumes/elasticsearch:/usr/share/elasticsearch/data
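Putting the two bind mounts together, the volumes section of the elasticsearch service in docker-compose.yml should now look something like this (a sketch; any mounts you already added in part one would sit alongside these):

volumes:
  - ./elastic-certificates.p12:/usr/share/elasticsearch/config/elastic-certificates.p12
  - ./data-volumes/elasticsearch:/usr/share/elasticsearch/data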

Enabling X-Pack

Now that we have our CA and cert in place, let's add the following to our elasticsearch.yml file:

xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
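As a side note, if your setup doesn't bind-mount a custom elasticsearch.yml, the official Elasticsearch image also accepts settings as environment variables, so an equivalent sketch under the elasticsearch service in docker-compose.yml would be:

environment:
  - xpack.security.enabled=true
  - xpack.security.transport.ssl.enabled=true
  - xpack.security.transport.ssl.verification_mode=certificate
  - xpack.security.transport.ssl.keystore.path=elastic-certificates.p12
  - xpack.security.transport.ssl.truststore.path=elastic-certificates.p12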

Generating passwords for built-in Elastic users

As part of X-Pack, Elastic ships with built-in users to help us get up and running. These users have a fixed set of privileges and cannot be authenticated until their passwords have been set. To learn more, check this out.

Let's first restart our docker containers so the X-Pack settings we enabled can take effect. To do this, go to the terminal where the docker containers are running and hit Ctrl+C.

Next, let's run the following to bring our docker containers back up:

docker-compose up

We will see errors in our terminal, specifically with our kibana container stating:

{
    "type": "log",
    "@timestamp": "2020-02-23T02:27:18Z",
    "tags": [
        "warning",
        "plugins",
        "licensing"
    ],
    "pid": 6,
    "message":"License information could not be obtained from Elasticsearch due to [security_exception] missing authentication credentials for REST request"
}

That's because we enabled X-Pack security on our elasticsearch container, so our kibana container can no longer communicate with it without credentials. Luckily, this is exactly the behavior we expect, and it's what we're going to solve by generating credentials for Elastic's built-in users!

Let's open up a new terminal window, cd into our elastic-stack directory, and enter our elasticsearch container once again with:

docker-compose exec elasticsearch bash

Next, let's run the following to generate passwords for Elastic's built-in users:

bin/elasticsearch-setup-passwords auto

We will be asked:

Please confirm that you would like to continue [y/N]

Type y and hit enter. This will list the credentials for all the built-in users. Note these down and store them somewhere safe!
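If you'd rather choose the passwords yourself instead of having them generated, the same tool offers an interactive mode that prompts you for each built-in user:

bin/elasticsearch-setup-passwords interactive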

Add credentials to kibana.yml

Now that we have the credentials for the built-in users, let's add them to our kibana.yml file so that our kibana container can communicate with our elasticsearch container. Add the following to kibana.yml:

elasticsearch.username: "kibana"
elasticsearch.password: "kibana_password" # the generated password for the kibana user
xpack.security.encryptionKey: "something_at_least_32_characters" # learn more here: https://www.elastic.co/guide/en/kibana/7.6/security-settings-kb.html

The reason we specify elasticsearch.username and elasticsearch.password but fill in the kibana user's credentials is that the kibana user is built specifically to connect and communicate with Elasticsearch, whereas elastic is a superuser. This is a common gotcha when getting started with the Elastic Stack. To learn more about Elastic's built-in users and roles, check this out.
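On a related note, if you'd prefer not to keep the password in kibana.yml as plain text, Kibana ships with its own keystore tool. A minimal sketch, run from inside the kibana container:

bin/kibana-keystore create
bin/kibana-keystore add elasticsearch.password

The add command prompts for the value (the kibana user's generated password), after which you can drop elasticsearch.password from kibana.yml.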

Moment of truth

Let's see if our dockerized instance of Elastic Stack is secured!

First, we need to bring down our running containers so the changes we implemented can take effect. To do this, go to the terminal where the docker containers are running and hit Ctrl+C.

Next, let's run the following to bring our docker containers back up:

docker-compose up

Elasticsearch

  • Go to localhost:9200 and you should be greeted by a browser authentication prompt
  • Log in with the elastic credentials you generated
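You can run the same check from the command line; without credentials the request is now rejected, and with the elastic user it succeeds:

curl http://localhost:9200
# now returns a 401 security_exception

curl -u elastic http://localhost:9200
# prompts for the generated password, then returns the cluster info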

Kibana

  • Go to localhost:5601 and you should see the Kibana login screen
  • Once again, log in with the elastic credentials you generated

You did it!

Your dockerized instance of the Elastic Stack is now secure!


Up next

This is the second part of an Elastic Stack series I'm writing. Be sure to stay tuned for the following:

  • Shipping logs to our dockerized Elastic stack
  • Querying and visualizing our logs
  • Alerting based on our logs
  • Deploying our dockerized Elastic Stack to production

If you missed part one of this Elastic Stack series, check it out here.

Discussion (3)

Aissatouu

So far I love your series on the Elastic suite in Docker. I look forward to the next articles!

Ehsan sarshar • Edited

Hi Kiya. After setting all of this up, everything works perfectly on my local macOS, but when I push the changes to my VPS, Elasticsearch crashes once I open Kibana, with the error: "at least one primary shard for index security-7 is unavailable". One thing worth mentioning: I reused the same certificate that I generated locally.