
Roman Belshevitz for Otomato


Kubernetes: Statistics in Your Console

This article is intended to be the first in a series about the community-developed tools we work with at Otomato, and the second one about resource management in K8s (a conversation we started back in January). Fortunately, the world of open source never lets us get bored.

Even in the world of containers, you still have to deal with CPU and memory resources. In the daily work of a DevOps engineer with a Kubernetes cluster, you often need to get statistics about consumed resources quickly and with minimal effort. And, of course, nobody in Ops wants to end up feeling like the captain of the MV Rena or the Ever Given!


In general, resource consumption should be closely monitored.

Gurus will say that monitoring systems such as Prometheus/Grafana (perhaps backed by something like the metrics-server subsystem) should be used for this; we'll talk about them some other time. Younger gurus will say, "don't show off, just open the Lens application!" But come on, what if we simply want to display what we need on the screen with a single console command? Fast, short and clear, yet at the same time more sophisticated than the output of kubectl top?
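
For reference, this is the baseline we want to improve upon; both commands below rely on metrics-server being available in the cluster:

kubectl top nodes
kubectl top pods --all-namespaces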

Well, thanks to the developer community, there is such a tool. Let me show you what it is and how to use it.

A few words about extensibility

As you may know, the Kubernetes command-line tool, kubectl, allows you to run commands against Kubernetes clusters. The Kubernetes CLI special interest group added a built-in plugin system to kubectl that allows anyone to add new sub-commands to extend the basic command set it offers. This does not require editing kubectl’s source code or recompiling it.

Any executable file in your PATH that starts with kubectl- can be called with the kubectl command. To try this out, let’s write a very basic plugin called kubectl-hello.

#!/usr/bin/env bash
echo "Congrats ${USER}, you just used a kubectl plugin! 💪"

Make this script executable and add it to your PATH:

chmod +x kubectl-hello
ln -s $PWD/kubectl-hello /usr/local/bin/kubectl-hello

That’s it! You can now use this plugin with kubectl:

kubectl hello
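
If everything is wired up correctly, you should see the greeting from the script, with your own username in place of the example one:

$ kubectl hello
Congrats roman, you just used a kubectl plugin! 💪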

Well, that's pretty much how it works. Let's go further.

Krew: all plugins in one box

Installing plugins manually and keeping track of them can be tedious. Thankfully, there is a community-driven plugin manager for kubectl called krew.

For Linux, follow the installation instructions in the krew documentation to set it up on your machine.
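
At the time of writing, the krew documentation suggests roughly the following snippet for bash; please double-check it against the upstream docs before running:

(
  set -x; cd "$(mktemp -d)" &&
  OS="$(uname | tr '[:upper:]' '[:lower:]')" &&
  ARCH="$(uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/')" &&
  KREW="krew-${OS}_${ARCH}" &&
  curl -fsSLO "https://github.com/kubernetes-sigs/krew/releases/latest/download/${KREW}.tar.gz" &&
  tar zxvf "${KREW}.tar.gz" &&
  ./"${KREW}" install krew
)
export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"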

This will install the kubectl-krew binary. Notice that krew is itself a kubectl plugin.

The open-source community keeps a list of plugins you can install in the official krew-index repository. Use krew to download the index:

kubectl krew update

You can view the list of available plugins with the search sub-command:

kubectl krew search

Installing a plugin is easy with the install sub-command. To try it out, install the whoami plugin:

kubectl krew install whoami
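
Once installed, the new sub-command is available right away and prints the user or subject kubectl is currently authenticated as:

kubectl whoami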

If you installed a plugin a while ago, you may want to upgrade to a newer version, which could include new features or bug-fixes. Use the upgrade sub-command to upgrade one or all installed plugins. You can upgrade a single plugin with

kubectl krew upgrade whoami
Enter fullscreen mode Exit fullscreen mode

Or you can upgrade all installed plugins at once with

kubectl krew upgrade

Ok, but what about our resources and statistics?

A fair question. Let's get back to the topic in the title: we have our tooling in place, so let's put it to work!

The plugin we're interested in, and which can be installed with krew, is kube-capacity. It is a simple CLI written by Rob Scott from Google that provides an overview of the resource requests, limits, and utilization in a Kubernetes cluster. It attempts to combine the best parts of the output from kubectl top and kubectl describe into an easy-to-use CLI focused on cluster resources. As a result, it can save you a lot of commands when figuring out how a node is being used and which pods might be consuming more resources than they should.

The installation process is easy:

kubectl krew install resource-capacity


Use cases

The plain resource-capacity command returns the CPU and memory requests and limits of each node available in the cluster.
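
Without any extra flags, the invocation is as simple as it gets:

kubectl resource-capacity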

You can use the --sort cpu.limit flag to sort by CPU limit. There are more sort keys available, as we will see below.

$ kubectl resource-capacity --sort cpu.limit
NODE                            CPU REQUESTS   CPU LIMITS    MEMORY REQUESTS   MEMORY LIMITS
*                               5120m (65%)    5950m (75%)   5815Mi (19%)      6690Mi (22%)
ip-192-168-21-48.ec2.internal   2760m (70%)    3275m (83%)   2995Mi (20%)      3645Mi (24%)
ip-192-168-18-83.ec2.internal   2360m (60%)    2675m (68%)   2820Mi (19%)      3045Mi (20%)

By default, this does not include utilization data for each node.

If you want to see a utilization report per node, add the --util flag:

$ kubectl resource-capacity --sort cpu.limit --util
NODE                            CPU REQUESTS   CPU LIMITS    CPU UTIL    MEMORY REQUESTS   MEMORY LIMITS   MEMORY UTIL
*                               5120m (65%)    5950m (75%)   581m (7%)   5815Mi (19%)      6690Mi (22%)    6511Mi (22%)
ip-192-168-21-48.ec2.internal   2760m (70%)    3275m (83%)   257m (6%)   2995Mi (20%)      3645Mi (24%)    2708Mi (18%)
ip-192-168-18-83.ec2.internal   2360m (60%)    2675m (68%)   324m (8%)   2820Mi (19%)      3045Mi (20%)    3803Mi (26%)

What if we need a more customized report?

As we have seen above, the kubectl resource-capacity plugin supports sorting by different fields. The --sort flag accepts the following values: cpu.util, cpu.request, cpu.limit, mem.util, mem.request, mem.limit, name.

If no sort flag is applied, the output is sorted in ascending order by name.

Moreover, adding the --pods flag to your command dumps the same stats broken down by running pods, and the sorting order is preserved. We can drill down even further, to the containers in each pod, by adding the --containers flag. The whole command looks like this:

kubectl resource-capacity --sort cpu.util --util --pods --containers

Is there any way to summarize values for namespaces?

A very good question. To get the CPU and memory usage of a particular namespace with the kube-capacity plugin, use the following command:

kubectl resource-capacity -n kube-system -p -c

or

kubectl resource-capacity -n kube-system --pods --containers

What if I would like to get YAML output for use in automation frameworks?

By default, the kube-capacity plugin prints its output as a table, but other formats are supported as well: YAML and JSON. You can select the format with the -o (--output) flag.
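
For example, to feed a utilization report into an automation pipeline (the jq at the end is only for pretty-printing and is optional):

kubectl resource-capacity --util -o yaml
kubectl resource-capacity --util -o json | jq .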

I don't want to be a captain, I want to be a sea pilot!

Every Ops professional should be as inquisitive as a Dev one! And there is something to strive for: with the help of the kube-capacity plugin you can, figuratively speaking, find out not only why the ship is overloaded, but also which charts it used to plow the seas!

So, what was in the pilot's briefcase? Let's see. Labels will help us. Labels attach metadata to Kubernetes objects; they can be used to select objects and to find collections of objects that satisfy certain conditions.

With the kube-capacity plugin, you can filter nodes based on their labels using the --node-labels flag and get the CPU and memory usage of those specific nodes. Based on the labels, we can "pull" useful information out of the cluster and use it to understand what went wrong, where we made a mistake, and which parameters should be modified or improved.

In our case, we want to filter nodes carrying the label node.kubernetes.io/instance-type=m5.xlarge and get the resource capacity and usage of the nodes hosted on that AWS instance type. Here is a sample output:

$ kubectl resource-capacity --node-labels node.kubernetes.io/instance-type=m5.xlarge
NODE                            CPU REQUESTS   CPU LIMITS    MEMORY REQUESTS   MEMORY LIMITS
*                               5120m (65%)    5950m (75%)   5815Mi (19%)      6690Mi (22%)
ip-192-168-21-48.ec2.internal   2660m (67%)    3125m (79%)   2895Mi (19%)      3495Mi (23%)
ip-192-168-18-83.ec2.internal   2460m (62%)    2825m (72%)   2920Mi (20%)      3195Mi (21%)

There are also infrastructure-specific or cloud-topology-specific labels which may be useful for constructing queries during your investigations, among them kubernetes.io/hostname, topology.kubernetes.io/zone, topology.kubernetes.io/region and eks.amazonaws.com/capacityType (ON_DEMAND | SPOT, in an AWS context).
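
For example, to look only at the spot capacity of an EKS cluster, or at a single availability zone (the label values below are purely illustrative, substitute your own):

kubectl resource-capacity --util --node-labels eks.amazonaws.com/capacityType=SPOT
kubectl resource-capacity --util --node-labels topology.kubernetes.io/zone=us-east-1a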

To display the pod count of each node and of the whole cluster, you can pass the --pod-count flag:

$ kubectl resource-capacity --pod-count
NODE                            CPU REQUESTS   CPU LIMITS    MEMORY REQUESTS   MEMORY LIMITS   POD COUNT
*                               5120m (65%)    5950m (75%)   5815Mi (19%)      6690Mi (22%)    61/116
ip-192-168-21-48.ec2.internal   2660m (67%)    3125m (79%)   2895Mi (19%)      3495Mi (23%)    33/58
ip-192-168-18-83.ec2.internal   2460m (62%)    2825m (72%)   2920Mi (20%)      3195Mi (21%)    28/58

A similar but less informative result can be obtained even without any plugins. You may run:

kubectl get po -o json --all-namespaces | \
  jq '.items | group_by(.spec.nodeName) | map({"nodeName": .[0].spec.nodeName, "count": length}) | sort_by(.count)'

In general, as you can see, both sea pilots and captains will be able to defend the honor of the uniform. And if your toolkit is limited, don't worry: there are more complex but perfectly workable approaches as well.

However, that is a topic for a separate article. So is another one: before you can query the Kubernetes Metrics API or run kubectl top commands to retrieve metrics at all, you need to ensure that Metrics Server is deployed to your cluster. The author has something to work on in the future.
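
As a small teaser for that future article: on many clusters, deploying Metrics Server boils down to applying a single manifest (check the project's releases page for the version that matches your cluster):

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml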

Fast and robust clusters to you!

🇩🇪🇫🇷🇮🇱 The author is thankful to Sarav Ak, Tapan Hegde, Guillaume Vincent and Eldad Assis for their watchfulness in solving DevOps tasks. Pic source: Platform9.

