DEV Community

Manuel Portillo

Watching Vault's Audit Logs Using FluentD

In this post I'm going to explain how you can capture audit logs from a HashiCorp Vault instance into a Fluentd setup running on Docker. The scenario: someone needs to troubleshoot specific transactions on a Vault server with heavy live traffic, which makes it difficult to spot specific known values, since all the sensitive information sent to Vault's audit devices is hashed.

Vault allows you to set up multiple audit devices. These "devices" are basically destinations (currently file, syslog, and socket are supported) for detailed logging of the operations processed by Vault. Since auditing is a key component of a security product, once you enable audit devices on a running Vault setup, every operation processed by the server will wait until at least one of the audit devices has finished processing the entry. So be careful when enabling audit devices that can be unavailable for long periods of time or that take a long time to process the logs: if that is the only device enabled, it will slow down your operations in Vault.

For security reasons, all sensitive information in these log entries is hashed with a salt using HMAC-SHA256. The fields whose values are hashed in a typical log entry are:

| Field | Description |
| --- | --- |
| `client_token` | The token that the client uses to authenticate with Vault |
| `accessor` | An identifier or alias for the token |
| `value` | Can contain the actual secret being sent back in a response |
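To make the mechanism concrete, here is a minimal Python sketch of the hashing scheme. The salt shown is hypothetical; in Vault the real salt is managed internally per audit device, which is exactly why you cannot reproduce these hashes offline:

```python
import hashlib
import hmac

def audit_hash(salt: bytes, value: str) -> str:
    # HMAC-SHA256 over the field value, keyed with the audit device's salt,
    # prefixed the same way it appears in the audit log.
    digest = hmac.new(salt, value.encode("utf-8"), hashlib.sha256).hexdigest()
    return f"hmac-sha256:{digest}"

# "example-salt" is made up for illustration; the real salt never leaves Vault,
# which is why the plugin described below asks Vault for the hashes instead.
print(audit_hash(b"example-salt", "Ayuda2"))
```

Because the same salt is used for the lifetime of an audit device, the hash of a known value is stable, and that stability is what makes a lookup-based approach possible.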

A couple of example audit log entries look like this:

{"time":"2019-01-08T06:43:50.1574193Z","type":"request","auth":{"client_token":"hmac-sha256:ccef9257d8853c204ec5bb8a79af25863ad7a8e029093bca5a872d5f7bdee54e","accessor":"hmac-sha256:4340a68ddfefce45d8fb31e3be6f779ca8202cb6d7def6b6f302e8a563ea05fc","display_name":"root","policies":["root"],"token_policies":["root"],"metadata":null,"entity_id":""},"request":{"id":"d999de96-943f-b547-faf7-842c52c1150f","operation":"read","client_token":"hmac-sha256:ccef9257d8853c204ec5bb8a79af25863ad7a8e029093bca5a872d5f7bdee54e","client_token_accessor":"hmac-sha256:4340a68ddfefce45d8fb31e3be6f779ca8202cb6d7def6b6f302e8a563ea05fc","namespace":{"id":"root","path":""},"path":"secret/password","data":null,"policy_override":false,"remote_address":"172.17.0.1","wrap_ttl":0,"headers":{}},"error":""}
{"time":"2019-01-08T06:43:50.1616533Z","type":"response","auth":{"client_token":"hmac-sha256:ccef9257d8853c204ec5bb8a79af25863ad7a8e029093bca5a872d5f7bdee54e","accessor":"hmac-sha256:4340a68ddfefce45d8fb31e3be6f779ca8202cb6d7def6b6f302e8a563ea05fc","display_name":"root","policies":["root"],"token_policies":["root"],"metadata":null,"entity_id":""},"request":{"id":"d999de96-943f-b547-faf7-842c52c1150f","operation":"read","client_token":"hmac-sha256:ccef9257d8853c204ec5bb8a79af25863ad7a8e029093bca5a872d5f7bdee54e","client_token_accessor":"hmac-sha256:4340a68ddfefce45d8fb31e3be6f779ca8202cb6d7def6b6f302e8a563ea05fc","namespace":{"id":"root","path":""},"path":"secret/password","data":null,"policy_override":false,"remote_address":"172.17.0.1","wrap_ttl":0,"headers":{}},"response":{"secret":{"lease_id":""},"data":{"value":"hmac-sha256:420ba9936bd947e90f357f174abbb59bb7d4c2747648afc6498491f1a12dc773"}},"error":""}

Since sensitive data is secured here by hashing it, there is no way to reverse the process and show the plaintext values of all these hashed fields. However, let's say you know the actual value of the client token and the value of the secret being retrieved in this transaction. The objective would be to obtain something like the following during a live log tracing session, where hundreds of transactions may be flowing that are not relevant to the client or the value that you need:

{"type":"request","auth":{"client_token":"4VpEnQtil0Sd3GHkNNL25EGK","accessor":"hmac-sha256:e78fdc26ebab94c50101f2805ccb4526b9591a17d4c8e4949c0cc98f456d2288","display_name":"root","policies":["root"],"token_policies":["root"],"metadata":null,"entity_id":""},"request":{"id":"d999de96-943f-b547-faf7-842c52c1150f","operation":"read","client_token":"4VpEnQtil0Sd3GHkNNL25EGK","client_token_accessor":"hmac-sha256:e78fdc26ebab94c50101f2805ccb4526b9591a17d4c8e4949c0cc98f456d2288","namespace":{"id":"root","path":""},"path":"secret/password","data":null,"policy_override":false,"remote_address":"172.17.0.1","wrap_ttl":0,"headers":{}},"error":""}
{"type":"response","auth":{"client_token":"4VpEnQtil0Sd3GHkNNL25EGK","accessor":"hmac-sha256:e78fdc26ebab94c50101f2805ccb4526b9591a17d4c8e4949c0cc98f456d2288","display_name":"root","policies":["root"],"token_policies":["root"],"metadata":null,"entity_id":""},"request":{"id":"d999de96-943f-b547-faf7-842c52c1150f","operation":"read","client_token":"4VpEnQtil0Sd3GHkNNL25EGK","client_token_accessor":"hmac-sha256:e78fdc26ebab94c50101f2805ccb4526b9591a17d4c8e4949c0cc98f456d2288","namespace":{"id":"root","path":""},"path":"secret/password","data":null,"policy_override":false,"remote_address":"172.17.0.1","wrap_ttl":0,"headers":{}},"response":{"secret":{"lease_id":""},"data":{"value":"Ayuda2"}},"error":""}

The Setup

First, you should have a running instance of Vault. For demonstration purposes I will use a Docker image that I keep for local testing; its most relevant characteristics are:

  • It runs Vault 0.11.5
  • It uses the file storage backend
  • It uses TLS with a self-signed certificate

But technically you could do this with any kind of Vault setup, as long as you have the access needed to set up audit devices.

Then we are going to create a Docker image with Fluentd and a plugin that I created. At startup, the plugin connects to the given Vault server and requests the hash for each of the provided strings. Then, for every transaction (Vault audit log entry) that arrives at Fluentd, it parses all the fields with hashed data and, wherever a hash matches one of those calculated at startup, replaces the hash with the plaintext value. All transactions are then sent to stdout, so anyone tailing the logs of the Docker container will see the plaintext strings. In general, the key elements of the Docker image are:

  • It's based on Fluentd's 1.3-onbuild image
  • It uses the TCP input plugin and is reachable by the Vault instance
  • It has the configuration for the filter plugin that communicates with Vault, along with the plaintext strings that you want to see on stdout
  • It uses the stdout output plugin, so we can watch entries live
  • Since Fluentd has many plugins (input, filtering, forwarding, output), it should be easy to extend with any extra functionality
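The substitution step described above can be sketched in a few lines of Python (the actual plugin is written in Ruby; the hash-to-plaintext map here is a hypothetical stand-in for what the plugin builds at startup from Vault's audit-hash endpoint):

```python
def reveal(node, hash_to_plain):
    # Recursively walk a parsed audit log entry and replace any string
    # that matches a precomputed hash with its known plaintext.
    if isinstance(node, dict):
        return {k: reveal(v, hash_to_plain) for k, v in node.items()}
    if isinstance(node, list):
        return [reveal(v, hash_to_plain) for v in node]
    if isinstance(node, str):
        return hash_to_plain.get(node, node)
    return node

# Hypothetical mapping, as built once at startup (shortened hash for brevity):
hash_to_plain = {"hmac-sha256:420ba9": "Ayuda2"}
entry = {"response": {"data": {"value": "hmac-sha256:420ba9"}}}
print(reveal(entry, hash_to_plain))  # {'response': {'data': {'value': 'Ayuda2'}}}
```

Since only exact hash matches are replaced, everything else in the entry passes through untouched.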

To create the image we only need a folder with 3 things in it:

  1. The self-signed certificate to be used as a CA certificate file.
  2. An empty plugins directory (required by the onbuild image, although we are not using this method to install the plugin).
  3. A file named fluent.conf with the configuration for the listener, the filter, and the output plugins, with the following content:
<source>
  @type  tcp
  <parse>
    @type json
  </parse>
  tag tcp.events
  port  24224
</source>

<filter tcp.events>
  @type vault_decode
  keywords Ayuda2, 4VpEnQtil0Sd3GHkNNL25EGK # The first string is one of the secret values; the second is an app token that I'm looking to find in the logs
  vaultaddr https://172.17.0.4:8200 # This is the vault server that I'm playing with
  vaulttoken 1OIEBC7cLA87ddIJNePEA1U3 # This is a token that has access to call the /sys/audit-hash/socket endpoint
</filter>

<filter tcp.events>
  @type stdout
</filter>

Once these 3 things are in the folder, we just need a Dockerfile that looks like this:

FROM fluent/fluentd:v1.3-onbuild-1
LABEL maintainer="manuel220@yahoo.com"

RUN mkdir /fluentd/etc/certs

RUN apk add --no-cache --update --virtual .build-deps \
    sudo build-base ruby-dev git

RUN git clone https://github.com/manuel220x/fluent-plugin-filter-vaultaudit.git \
    && cd fluent-plugin-filter-vaultaudit && gem build fluent-plugin-filter-vault-decode.gemspec \
    && gem install fluent-plugin-filter-vault-decode-*

RUN sudo gem sources --clear-all \
    && apk del .build-deps

COPY cert.crt /fluentd/etc/certs/
COPY fluent.conf /fluentd/etc/


EXPOSE 24224 

And build the image:

docker build -t fluentd:catchingvault .

With the image ready, now you can just start your container:

docker run -d --name catchingVaultLogs -p 24224:24224 fluentd:catchingvault

And start tailing your logs:

docker logs -f catchingVaultLogs

At this point your Fluentd instance is ready to capture and parse logs. Now let's add an audit device to your Vault server under the /socket path (this is the default in the Fluentd plugin, and also in Vault for socket audit devices, but you can change it). Using Vault's CLI, something like the following should work, where 172.17.0.3 is the IP address at which the Vault server can reach the Fluentd container:

vault audit enable socket address=172.17.0.3:24224 socket_type=tcp
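For reference, the lookup the filter plugin performs at startup is a write to Vault's sys/audit-hash endpoint for the audit device's mount path. Here is a sketch of how that request is shaped (no network call is made here; the address and token mirror the fluent.conf above):

```python
import json

vault_addr = "https://172.17.0.4:8200"   # vaultaddr from fluent.conf
device_path = "socket"                   # mount path of the audit device
keyword = "Ayuda2"                       # a plaintext string we want to match

url = f"{vault_addr}/v1/sys/audit-hash/{device_path}"
headers = {"X-Vault-Token": "1OIEBC7cLA87ddIJNePEA1U3"}  # vaulttoken from fluent.conf
payload = json.dumps({"input": keyword})

print(url)      # https://172.17.0.4:8200/v1/sys/audit-hash/socket
print(payload)  # {"input": "Ayuda2"}
# Vault's response body looks like: {"hash": "hmac-sha256:<hex digest>"}
```

The plugin keeps the returned hash for each keyword so it can match them against incoming log entries.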

My Vault instance has 2 audit devices, one of which is the one feeding our Fluentd setup. In the following screenshots I'm running a read operation while tailing both audit devices, so you can see the different outputs.

(screenshots: running a read operation while tailing both audit devices)

As you can see, the last one shows the plaintext values that were found, at the moment the transactions were happening. As I mentioned before, from here you can extend this setup with grep expressions or even include other plugins to do cool stuff with your logs.

You can take a look at the code of the plugin here:

https://github.com/manuel220x/fluent-plugin-filter-vaultaudit

Any feedback, either here or on GitHub, is welcome.

Top comments (1)

🦄N B🛡

// , Is there a way to make the FluentD output a bit more readable?

I had to squint at some of the screenshots, but maybe it's just because I'm getting on in years.