
How We Analyze and Visualize Kubernetes Events in Real Time at Rockset

Authored by Rui Aguiar

Kubernetes at Rockset

At Rockset, we use Kubernetes (k8s) for cluster orchestration. It runs all our production microservices, from our ingest workers to our query-serving tier. In addition to hosting all the production infrastructure, each engineer has their own Kubernetes namespace and dedicated resources that they use to deploy and test new versions of code and configuration. This sandboxed development environment allows us to make software releases confidently multiple times every week. In this blog post, we will explore a tool we built internally that gives us visibility into Kubernetes events, an excellent source of information about the state of the system, which we find useful for troubleshooting and for understanding long-term cluster health.

Why We Care About Kubernetes Events

Kubernetes emits events whenever some change occurs in any of the resources it manages. These events often contain important metadata about the entity that triggered them, the type of event (Normal, Warning, Error, etc.), and the cause. This data is typically stored in etcd and made available when you run certain kubectl commands.

These events help us understand what happened behind the scenes when a particular entity entered a specific state. You can also see an aggregated list of all events by running kubectl get events.
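For illustration, the output looks something like this (the object names below are made up, and the exact columns depend on your kubectl version):

```
$ kubectl get events
LAST SEEN   TYPE      REASON      OBJECT                           MESSAGE
2m          Normal    Scheduled   pod/api-server-7d9f6c5b-xk2lp    Successfully assigned dev/api-server-7d9f6c5b-xk2lp to node-3
2m          Normal    Pulled      pod/api-server-7d9f6c5b-xk2lp    Container image "api-server:v1.42" already present on machine
5m          Warning   BackOff     pod/ingest-worker-5b6d8f-qwzrt   Back-off restarting failed container
```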

As can be seen above, this gives us details: the entity that emitted the event, the type/severity of the event, and what triggered it. This information is very useful when looking to understand changes that are occurring in the system. An additional use of these events is to understand long-term system performance and reliability. For example, certain node and networking errors that cause pods to restart may not cause service disruptions in a highly available setup, but can hide underlying conditions that place the system at increased risk.

In a default Kubernetes setup, the events are persisted into etcd, a key-value store. etcd is optimized for quick, strongly consistent lookups, but falls short when it comes to analytics over the data. As the data grows, etcd also has trouble keeping up, so events get compacted and cleaned up periodically. By default, etcd preserves only the past hour of events.

This historical context can be used to understand long-term cluster health, to review past incidents and the actions taken within Kubernetes to mitigate them, and to build accurate postmortems. Though we looked at other monitoring tools for events, we realized that we had an opportunity to use our own product to analyze these events in a way that no other monitoring product could, and to use it to construct a visualization of the states of all of our Kubernetes resources.

[Image: rockset-kubernetes-1]

Overview

To ingest the Kubernetes events, we use an open source tool by Heptio called eventrouter. It reads events from the Kubernetes API server and forwards them to a specified sink. The sink can be anything from Amazon S3 to an arbitrary HTTP endpoint. In order to connect to a Rockset collection, we decided to build a Rockset connector for eventrouter to control the format of the data uploaded to our collection, and we contributed this Rockset sink to the upstream eventrouter project. The connector is really simple: it takes all received events and emits them into Rockset. The really cool part is that for ingesting these events, which are JSON payloads that vary across different types of entities, we do not need to build any schema or do structural transformations. We can emit the JSON event as-is into a Rockset collection and query it as though it were a full SQL table. Rockset automatically converts JSON events into SQL tables by first indexing all the JSON fields using Converged Indexing and then automatically schematizing them via Smart Schemas.
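To make this concrete, a single event forwarded by eventrouter is a JSON document roughly like the following (an illustrative sketch; the field values are made up, and the exact wrapper fields come from eventrouter's payload format):

```json
{
  "verb": "ADDED",
  "event": {
    "metadata": { "name": "ingest-worker-5b6d8f-qwzrt.175a2", "namespace": "production" },
    "involvedObject": { "kind": "Pod", "name": "ingest-worker-5b6d8f-qwzrt", "namespace": "production" },
    "reason": "BackOff",
    "message": "Back-off restarting failed container",
    "type": "Warning",
    "lastTimestamp": "2019-08-20T17:32:11Z",
    "count": 3
  }
}
```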

The front-end application is a thin layer over the SQL layer that allows filtering events by namespace and entity type (Pod, Deployment, etc.), and then, within those entity types, clustering events as normal or errors. The goal is to have a histogram of these events to visually inspect and understand the state of the cluster over an extended period of time. Of course, what we demonstrate is simply a subset of what could be built - one can imagine much more complex analyses, like analyzing network stability, deployment processes, canarying software releases, and even using the event store as a key diagnostic tool to discover correlations between cluster-level alerts and Kubernetes-level changes.

Setup

Before we can begin receiving events from eventrouter into Rockset, we must create a collection in Rockset. This is the collection that all eventrouter events are stored in. You can do this with a free account from https://console.rockset.com/create.

A collection in Rockset can ingest data from a specified source, or it can be sent events via the REST API. We'll use the latter, so we create a collection backed by this Write API. In the Rockset console, we can create such a collection by picking "Write API" as the data source.

[Image: rockset-kubernetes-3]

When creating the collection, we can pick a retention period, say 120 days, or any reasonable amount of time that gives us some sense of cluster health. This retention is applied based on a special field in Rockset, _event_time. We map this field to a specific field within the JSON event payload we receive from eventrouter, event.lastTimestamp.
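A minimal sketch of the transformation that does this mapping, assuming Rockset's SQL-based ingest transformation syntax and its PARSE_TIMESTAMP_ISO8601 function (and that eventrouter nests the original Kubernetes event under an event field), might look like:

```sql
-- Sketch: map the Kubernetes event timestamp to Rockset's special _event_time field
-- so that collection retention is based on when the event actually occurred.
SELECT
  *,
  PARSE_TIMESTAMP_ISO8601(_input.event.lastTimestamp) AS _event_time
FROM
  _input
```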

After creating the collection, we can now set up and use eventrouter to begin receiving Kubernetes events.

[Image: rockset-kubernetes-4]

Now, receiving events from eventrouter requires one more thing - a Rockset API key. We can use API keys in Rockset to write JSON to a collection, and to make queries. In this case, we create an API key called eventrouter_write from Manage > API keys.

[Image: rockset-kubernetes-5]

Copy the API key, as we will require it in our next step, setting up eventrouter to send events into the Rockset collection we just created. You can set up eventrouter by cloning the eventrouter repository and editing the YAML file yaml/deployment.yaml to look like the following:
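A sketch of the relevant portion of that manifest, the ConfigMap that configures the sink, is shown below. The rockset* key names and the collection name eventrouter_events are assumptions for illustration; the eventrouter repository documents the exact configuration keys for the Rockset sink.

```yaml
# Sketch of the eventrouter ConfigMap configured with a Rockset sink.
# Key names below are illustrative; check the eventrouter repo for the exact ones.
apiVersion: v1
kind: ConfigMap
metadata:
  name: eventrouter-cm
  namespace: kube-system
data:
  config.json: |-
    {
      "sink": "rockset",
      "rocksetAPIKey": "<API_KEY>",
      "rocksetCollectionName": "eventrouter_events",
      "rocksetWorkspaceName": "commons"
    }
```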

You can substitute the <API_KEY> with the Rockset API key we just created in the previous step. Now we're ready! Run kubectl apply -f yaml/deployment.yaml, and eventrouter starts watching and forwarding events right away. Looking at the collection within Rockset, you should start seeing events flowing in and being made available as a SQL table. We can query it from the Rockset console, as shown below, to get a sense of some of the events coming in, and we can run full SQL over it, including all types of filters, joins, and so on.
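For example, a simple query over the collection (assuming it is named eventrouter_events in the commons workspace, and that eventrouter nests the Kubernetes event under an event field) might look like:

```sql
-- Peek at the most recent Kubernetes events that have been ingested.
SELECT
  e.event.involvedObject.kind AS kind,
  e.event.involvedObject.name AS name,
  e.event.reason AS reason,
  e.event.message AS message,
  e._event_time
FROM
  commons.eventrouter_events e
ORDER BY
  e._event_time DESC
LIMIT 10;
```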

[Image: rockset-kubernetes-6]

Querying Data

We can now start asking some interesting questions of our cluster and get an understanding of cluster health. One question we wanted to ask was - how often are we deploying new images into production? We operate on a strict release schedule, but there are times when we roll out and roll back images.
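One way to approximate this is to look at image pull events for the pods that our deployments create. The following is a sketch, assuming the eventrouter_events collection, the 'Pulled' event reason, and Rockset's REGEXP_EXTRACT function to pull the image name out of the event message:

```sql
-- Last time each container image was pulled on a pod, as a proxy for when it was deployed.
SELECT
  REGEXP_EXTRACT(e.event.message, '"([^"]+)"', 1) AS image,
  e.event.involvedObject.name AS pod,
  MAX(e._event_time) AS last_deployed
FROM
  commons.eventrouter_events e
WHERE
  e.event.reason = 'Pulled'
GROUP BY
  REGEXP_EXTRACT(e.event.message, '"([^"]+)"', 1),
  e.event.involvedObject.name
ORDER BY
  last_deployed DESC;
```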

The query above looks at the image pull events for the pods created by our deployments (via their replicasets) and finds the last date on which we deployed a particular image.

This excerpt of images and pods, with timestamp, tells us a lot about the last few deploys and when they occurred. Plotting this on a chart would tell us about how consistent we have been with our deploys and how healthy our deployment practices are.

Now, moving on to the performance of the cluster itself: running our own hand-rolled Kubernetes cluster means we get a lot of control over upgrades and the system setup, but it is worth seeing when nodes may have been lost or network partitioned, causing them to be marked as unready. The clustering of such events can tell us a lot about the stability of the infrastructure.
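A sketch of such a query, assuming the eventrouter_events collection and the NodeNotReady event reason emitted by the node controller, might be:

```sql
-- Times at which nodes were marked NotReady.
SELECT
  e.event.involvedObject.name AS node,
  e._event_time AS not_ready_at
FROM
  commons.eventrouter_events e
WHERE
  e.event.involvedObject.kind = 'Node'
  AND e.event.reason = 'NodeNotReady'
ORDER BY
  not_ready_at DESC;
```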

This query gives us the times at which a node's status went NotReady, and we can cluster this data using SQL time functions to understand how often issues occur over specific buckets of time.
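For example, assuming Rockset supports the standard DATE_TRUNC function, a per-day bucketing might look like:

```sql
-- Count NotReady events per day to spot clusters of node instability.
SELECT
  DATE_TRUNC('day', e._event_time) AS day,
  COUNT(*) AS not_ready_events
FROM
  commons.eventrouter_events e
WHERE
  e.event.reason = 'NodeNotReady'
GROUP BY
  DATE_TRUNC('day', e._event_time)
ORDER BY
  day;
```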

We can additionally look for pod- and container-level events, like when they get OOMKilled, and correlate them with other events happening in the system. Compared to a time series database like Prometheus, the power of SQL lets us write and JOIN different types of events to try and piece together different things that occurred around a particular time interval, which may be causal.
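As a sketch of such a correlation (the exact reason string for out-of-memory kills varies; 'OOMKilling' from the kubelet is common), the following hypothetical query joins OOM events with node NotReady events that occurred in the same hour:

```sql
-- Correlate OOM kill events with node NotReady events in the same hour window.
SELECT
  o.event.involvedObject.name AS oom_object,
  n.event.involvedObject.name AS node,
  o._event_time AS oom_time,
  n._event_time AS node_not_ready_time
FROM
  commons.eventrouter_events o
  JOIN commons.eventrouter_events n
    ON DATE_TRUNC('hour', o._event_time) = DATE_TRUNC('hour', n._event_time)
WHERE
  o.event.reason = 'OOMKilling'
  AND n.event.reason = 'NodeNotReady';
```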

For visualizing events, we built a simple React tool that we use internally to look through Kubernetes events and do some basic clustering of the errors occurring in them. We're releasing this dashboard into open source and would love to see what the community might use it for. There are two main aspects to the visualization of Kubernetes events. First is a high-level overview of the cluster at a per-resource granularity. This allows us to see a realtime event stream from our deployments and pods, and to see what state every single resource in our Kubernetes system is in. There is also an option to filter by namespace - because certain sets of services run in their own namespace, this allows us to drill down into a specific namespace to look at events.

[Image: rockset-kubernetes-7]

If we are interested in the health and state of any particular resource, each per-resource summary is clickable and opens a page with a detailed overview of the event logs of that resource, with a graph that shows the events and errors over time to provide a holistic picture of how the resource is being managed.

[Image: rockset-kubernetes-8]

The graph in this visualization has adjustable granularity, and the change in time range allows for viewing the events for a given resource over any specified interval. Hovering over a specific bar on the stacked bar chart lets us see the types of errors occurring during that time period, for helpful over-time analytics of what is happening to a specific resource. The table of events listed below the graph is sorted by event time and contains the same information as the graph - that is, a chronological overview of all the events that happened to this specific k8s resource. The graph and table are helpful ways to understand why a Kubernetes resource has been failing in the past, and any trends over time that may accompany that failure (for example, if it coincides with the release of a new microservice).

Conclusion

Currently, we are using the real-time visualization of events to investigate our own Kubernetes deployments in both development and production. This tool and data source allows us to watch our deployments as they happen without having to wrangle the kubectl interface to see what is broken and why. Additionally, this tool is helpful for getting a retrospective look at past incidents. For example, if we spot a transient production issue, we now have the power to go back in time, find patterns in why it may have occurred, and decide what we can do to prevent the incident from happening again in the future.

The ability to access historical Kubernetes event logs at fine granularity is a powerful abstraction that gives us at Rockset a better understanding of the state of our Kubernetes system than kubectl alone would allow. This unique data source and visualization lets us monitor our deployments and resources, as well as look at issues from a historical perspective. We'd love for you to try this, and contribute to it if you find it useful in your own environment!


Link: https://github.com/rockset/recipes/tree/master/k8s-event-visualization
