
Ryan Gough

Originally published at jysk.tech

Inspect Deployment Network Traffic in Kubernetes


In today’s intricate world of microservices and distributed applications, understanding and monitoring network traffic is paramount for maintaining security, performance, and seamless user experiences. Kubernetes, the de facto orchestrator for containerized applications, has transformed the way developers deploy and manage applications at scale. But with this power comes complexity, especially when it comes to understanding how network packets flow within and between pods, services, and external entities. How does one tap into this myriad web of network interactions?

In this blog post, we will delve into the essentials of network traffic within Kubernetes clusters, explore tools and techniques for packet capturing, and unearth best practices to ensure our efforts are both efficient and safe. Whether you’re a system administrator seeking to troubleshoot a pesky network anomaly, a security professional hunting for signs of malicious activity, or a developer eager to optimize communication between services, this guide promises to be a valuable ally on your journey.

There are a few tools covered in this post. Troubleshooting network traffic is certainly not limited to these, but I have personally found them very helpful over the past few years. Some are easier than others; some require more in-depth knowledge of specific tools. In my personal opinion, one should choose the right tool for the job, and it's nice to have a toolbox with many to choose from.

Hubble eBPF

Project: https://github.com/cilium/hubble

Amidst the growing buzz around Cilium, Hubble emerges into the spotlight. A rising number of users are transitioning to the Cilium CNI, drawn by its exceptional flexibility and comprehensive toolset.

Cilium Hubble is a powerful observability platform tailored for the cloud-native age. Building on Cilium’s robust networking, security, and load balancing capabilities, Hubble provides real-time visibility into the network traffic of Kubernetes clusters, ensuring developers and operators can seamlessly monitor, troubleshoot, and secure their applications. In a world where microservices are never ending, Hubble stands as an indispensable lens if one is running Cilium, offering a clear view of the mesh of communications within modern environments.

Hubble taps into the rich data provided by eBPF to deliver real-time insights into network traffic, application interactions, and potential security violations within Kubernetes clusters. By leveraging eBPF’s efficient and granular packet filtering capabilities, Hubble offers unparalleled visibility into microservices communication without imposing significant overhead. It captures metadata for every network flow, enabling teams to troubleshoot issues, monitor application interactions, and ensure robust security postures in their cloud-native applications.

eBPF is a subject in its own right, but it is touched on in a very nice presentation by Sebastian Wicki of Isovalent, where he explores the foundations of eBPF and its use cases, along with Hubble.


Setup & Installation

The first thing is to enable Hubble. If you have not done so already, run cilium hubble enable --ui (consult the documentation for more information and setup). Once Hubble is enabled, we can install the Hubble CLI.
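As a minimal sketch of those steps (assuming a Linux amd64 workstation; check the Cilium and Hubble documentation for current versions and platforms):

# Enable Hubble, including the UI, via the Cilium CLI
cilium hubble enable --ui

# Install the Hubble CLI (version and architecture here are assumptions, adjust as needed)
HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
curl -L --remote-name-all https://github.com/cilium/hubble/releases/download/${HUBBLE_VERSION}/hubble-linux-amd64.tar.gz
sudo tar xzvfC hubble-linux-amd64.tar.gz /usr/local/bin

# Forward the Hubble relay locally so the CLI can reach it, then verify
cilium hubble port-forward &
hubble status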

When we are ready, a quick check:

jysk@jysk:~$ cilium status
    /¯¯\
 /¯¯\__/¯¯\    Cilium:             OK
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Envoy DaemonSet:    disabled (using embedded mode)
 \__/¯¯\__/    Hubble Relay:       OK
    \__/       ClusterMesh:        disabled

DaemonSet         cilium             Desired: 1, Ready: 1/1, Available: 1/1
Deployment        cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
Deployment        hubble-relay       Desired: 1, Ready: 1/1, Available: 1/1
Deployment        hubble-ui          Desired: 1, Ready: 1/1, Available: 1/1
Containers:       cilium-operator    Running: 1
                  hubble-relay       Running: 1
                  cilium             Running: 1
                  hubble-ui          Running: 1

jysk@jysk:~$ hubble list node
NAME   STATUS      AGE      FLOWS/S   CURRENT/MAX-FLOWS
jysk   Connected   14m53s   76.66     4095/4095 (100.00%)

From here, we can immediately start to use hubble observe to inspect our deployments. For this, I will be running a troubleshooting container image called netshoot, which I would like to demo a little later in this post. For now though, we're going to enter our container and issue a basic ping.
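If you don't have such a pod yet, one way to create it (a sketch; the image reference and namespace are assumptions, adjust to your environment):

# Create a long-running netshoot deployment we can exec into
kubectl create deployment netshoot --image=nicolaka/netshoot -n jysk-system -- sleep infinity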

# Get running netshoot pod,deployment
jysk@jysk:~$ kubectl get pod,deployment -n jysk-system
NAME                           READY   STATUS    RESTARTS   AGE
pod/netshoot-b78cd67fb-w5np7   1/1     Running   0          10h

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/netshoot   1/1     1            1           10h

# issue a basic ping
jysk@jysk:~$ kubectl exec pod/netshoot-b78cd67fb-w5np7 -n jysk-system -- ping 10.0.30.1
PING 10.0.30.1 (10.0.30.1) 56(84) bytes of data.
64 bytes from 10.0.30.1: icmp_seq=1 ttl=253 time=0.563 ms
64 bytes from 10.0.30.1: icmp_seq=2 ttl=253 time=0.497 ms

In a second terminal we can use hubble observe, for basic packet tracing.

jysk@jysk:~$ hubble observe --pod jysk-system/netshoot-b78cd67fb-w5np7 --protocol icmp
Aug 21 17:38:40.178: jysk-system/netshoot-b78cd67fb-w5np7 (ID:21977) -> 10.0.30.1 (world) to-stack FORWARDED (ICMPv4 EchoRequest)
Aug 21 17:38:40.178: jysk-system/netshoot-b78cd67fb-w5np7 (ID:21977) <- 10.0.30.1 (world) to-endpoint FORWARDED (ICMPv4 EchoReply)
Aug 21 17:38:46.303: jysk-system/netshoot-b78cd67fb-w5np7 (ID:21977) -> 10.0.30.1 (world) to-stack FORWARDED (ICMPv4 EchoRequest)
Aug 21 17:38:46.304: jysk-system/netshoot-b78cd67fb-w5np7 (ID:21977) <- 10.0.30.1 (world) to-endpoint FORWARDED (ICMPv4 EchoReply)
Aug 21 17:38:52.447: jysk-system/netshoot-b78cd67fb-w5np7 (ID:21977) -> 10.0.30.1 (world) to-stack FORWARDED (ICMPv4 EchoRequest)
Aug 21 17:38:52.448: jysk-system/netshoot-b78cd67fb-w5np7 (ID:21977) <- 10.0.30.1 (world) to-endpoint FORWARDED (ICMPv4 EchoReply)
Aug 21 17:38:58.591: jysk-system/netshoot-b78cd67fb-w5np7 (ID:21977) -> 10.0.30.1 (world) to-stack FORWARDED (ICMPv4 EchoRequest)
Aug 21 17:38:58.592: jysk-system/netshoot-b78cd67fb-w5np7 (ID:21977) <- 10.0.30.1 (world) to-endpoint FORWARDED (ICMPv4 EchoReply)
Aug 21 17:39:04.736: jysk-system/netshoot-b78cd67fb-w5np7 (ID:21977) <- 10.0.30.1 (world) to-endpoint FORWARDED (ICMPv4 EchoReply)
Aug 21 17:39:04.736: jysk-system/netshoot-b78cd67fb-w5np7 (ID:21977) -> 10.0.30.1 (world) to-stack FORWARDED (ICMPv4 EchoRequest)
Aug 21 17:39:10.847: jysk-system/netshoot-b78cd67fb-w5np7 (ID:21977) -> 10.0.30.1 (world) to-stack FORWARDED (ICMPv4 EchoRequest)
Aug 21 17:39:10.848: jysk-system/netshoot-b78cd67fb-w5np7 (ID:21977) <- 10.0.30.1 (world) to-endpoint FORWARDED (ICMPv4 EchoReply)
Aug 21 17:39:16.991: jysk-system/netshoot-b78cd67fb-w5np7 (ID:21977) -> 10.0.30.1 (world) to-stack FORWARDED (ICMPv4 EchoRequest)
Aug 21 17:39:16.992: jysk-system/netshoot-b78cd67fb-w5np7 (ID:21977) <- 10.0.30.1 (world) to-endpoint FORWARDED (ICMPv4 EchoReply)
Aug 21 17:39:22.111: jysk-system/netshoot-b78cd67fb-w5np7 (ID:21977) -> 10.0.30.1 (world) to-stack FORWARDED (ICMPv4 EchoRequest)
Aug 21 17:39:22.112: jysk-system/netshoot-b78cd67fb-w5np7 (ID:21977) <- 10.0.30.1 (world) to-endpoint FORWARDED (ICMPv4 EchoReply)
Aug 21 17:39:28.223: jysk-system/netshoot-b78cd67fb-w5np7 (ID:21977) -> 10.0.30.1 (world) to-stack FORWARDED (ICMPv4 EchoRequest)
Aug 21 17:39:28.224: jysk-system/netshoot-b78cd67fb-w5np7 (ID:21977) <- 10.0.30.1 (world) to-endpoint FORWARDED (ICMPv4 EchoReply)

This is great! Now we can start to see what’s actually “on the wire” from our deployments.

Let's set up a small HTTP server and try to observe some L7 comms.

# Deploy
jysk@jysk:~$ kubectl create deployment nginx --image registry.jysk.com/dockerhub/library/nginx -n jysk-system
deployment.apps/nginx created

# Basic HTTP req. 
jysk@jysk:~$ kubectl exec pod/netshoot-b78cd67fb-w5np7 -n jysk-system -- curl -sI http://nginx
HTTP/1.1 200 OK
Server: nginx/1.25.2
Date: Mon, 21 Aug 2023 18:04:06 GMT

jysk@jysk:~$ hubble observe --pod jysk-system/nginx-7cd6548575-k8768
Aug 21 18:07:33.818: jysk-system/netshoot-b78cd67fb-w5np7 (ID:21977) <> jysk-system/nginx-7cd6548575-k8768:80 (ID:8484) post-xlate-fwd TRANSLATED (TCP)
Aug 21 18:07:33.819: jysk-system/netshoot-b78cd67fb-w5np7:53278 (ID:21977) -> jysk-system/nginx-7cd6548575-k8768:80 (ID:8484) to-endpoint FORWARDED (TCP Flags: SYN)
Aug 21 18:07:33.819: jysk-system/netshoot-b78cd67fb-w5np7:53278 (ID:21977) <- jysk-system/nginx-7cd6548575-k8768:80 (ID:8484) to-endpoint FORWARDED (TCP Flags: SYN, ACK)
Aug 21 18:07:33.819: jysk-system/netshoot-b78cd67fb-w5np7:53278 (ID:21977) -> jysk-system/nginx-7cd6548575-k8768:80 (ID:8484) to-endpoint FORWARDED (TCP Flags: ACK)
Aug 21 18:07:33.819: jysk-system/netshoot-b78cd67fb-w5np7:53278 (ID:21977) <> jysk-system/nginx-7cd6548575-k8768 (ID:8484) pre-xlate-rev TRACED (TCP)
Aug 21 18:07:33.819: jysk-system/nginx-7cd6548575-k8768:80 (ID:8484) <> jysk-system/netshoot-b78cd67fb-w5np7 (ID:21977) pre-xlate-rev TRACED (TCP)
Aug 21 18:07:33.819: jysk-system/nginx-7cd6548575-k8768:80 (ID:8484) <> jysk-system/netshoot-b78cd67fb-w5np7 (ID:21977) pre-xlate-rev TRACED (TCP)
Aug 21 18:07:33.819: jysk-system/netshoot-b78cd67fb-w5np7:53278 (ID:21977) -> jysk-system/nginx-7cd6548575-k8768:80 (ID:8484) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Aug 21 18:07:33.819: jysk-system/netshoot-b78cd67fb-w5np7:53278 (ID:21977) <- jysk-system/nginx-7cd6548575-k8768:80 (ID:8484) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Aug 21 18:07:33.819: jysk-system/netshoot-b78cd67fb-w5np7:53278 (ID:21977) -> jysk-system/nginx-7cd6548575-k8768:80 (ID:8484) to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Aug 21 18:07:33.820: jysk-system/netshoot-b78cd67fb-w5np7:53278 (ID:21977) <- jysk-system/nginx-7cd6548575-k8768:80 (ID:8484) to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Aug 21 18:07:33.820: jysk-system/netshoot-b78cd67fb-w5np7:53278 (ID:21977) -> jysk-system/nginx-7cd6548575-k8768:80 (ID:8484) to-endpoint FORWARDED (TCP Flags: ACK)

Apparently, one can issue a --protocol http filter, but that did not work for some reason, which I must investigate. The CLI is a very powerful tool, but Hubble also comes with a UI, which can be started with cilium hubble ui.

To be able to inspect at Layer 7, we need to enable this explicitly so that traffic is routed through an L7 proxy; see Layer 7 Protocol Visibility for more info. In our setup we will simply annotate our deployments with Egress/80/TCP/HTTP. Of course, by doing this one can expect some security concerns related to packet inspection at the L7 level. Please read through the Security Implications, as there are options for redaction.
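As a sketch of what that looks like (the annotation key and value format follow the Cilium Layer 7 Protocol Visibility docs; the label selector is an assumption based on the deployments above, so verify against your own setup):

# Route egress HTTP on port 80 from the netshoot pods through the L7 proxy
kubectl annotate pods -l app=netshoot -n jysk-system \
  policy.cilium.io/proxy-visibility="<Egress/80/TCP/HTTP>"

# L7 flows should then appear with the http filter
hubble observe --pod jysk-system/netshoot-b78cd67fb-w5np7 --protocol http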


Hubble UI

Whilst I personally prefer the terminal experience, the hubble UI does a nice job in showing traffic, and with more complex communication paths it’s easy to get a good overview.

Notes

There are some caveats with Hubble: as with any inspection tool it has overhead, and Hubble is no exception. The problem with Hubble is that one cannot be selective over the information it collects; it's all or nothing. With the aid of eBPF this isn't too wild, but it needs to be a consideration.

A great blog post from Isovalent: Re-introducing Hubble

Kubeshark

Project: https://kubeshark.co/

Kubeshark is a tool designed to amplify the capabilities of network analysis within Kubernetes clusters. Using advanced packet capture technologies like eBPF (Extended Berkeley Packet Filter) and PF_RING (a type of network socket that dramatically improves packet capture speed), combined with custom kernel modules, Kubeshark captures cluster-wide L4 traffic (TCP & UDP) and efficiently stores it in distributed PCAP (Packet Capture) storage. The tool stands out in its ability to dissect a multitude of application layer protocols, from HTTP variants and AMQP (Advanced Message Queuing Protocol) to Redis and DNS, as well as recognizing gRPC over HTTP/2 and GraphQL over both HTTP/1.1 and HTTP/2.

One of its notable features is its adept use of extended BPF (eBPF) which traces function calls in both kernel and user spaces. What makes Kubeshark particularly compelling is its capacity to sniff encrypted traffic (like TLS) within the cluster using eBPF, without the need for decryption. This is achieved by ingeniously hooking into specific functions within the OpenSSL library and Go’s crypto/tls package.

In addition (and this is very nice!) Kubeshark is service mesh-aware, being able to identify popular solutions like Istio and Linkerd, particularly those harnessing the power of Envoy Proxy.

However, Kubeshark isn’t just about observation; it’s also about action. With the amalgamation of scripting languages, hooks, helpers, and jobs, it can detect anomalous network behaviors and swiftly trigger actions through a range of integrations, from Slack notifications to data storage solutions like AWS S3 and Elasticsearch. This blend of deep analysis and proactive response solidifies Kubeshark’s position as an essential tool for DevOps and networking professionals navigating the Kubernetes landscape.

Kubeshark offers two deployment scenarios:

On-demand traffic investigation, facilitated via a CLI tool accessible to anyone with kubectl permissions, or

Long-lived deployments, made possible using a Helm chart, catering to varied needs like collaborative debugging, network monitoring, telemetry, and forensics.

Notably, Kubeshark operates seamlessly without prerequisites such as CNI, service-mesh, or coding. It doesn’t demand the use of proxies, sidecars, or any architectural modifications.

Kubeshark boasts four main software components:

The CLI: written in Go, this binary distribution facilitates on-the-fly, lightweight use of Kubeshark without leaving any permanent traces. Through direct communication with the Kubernetes API, it positions containers optimally for effective traffic analysis.

The dashboard: constructed as a React application, it interfaces with the Hub using WebSockets, showcasing the captured traffic in a real-time feed.

The Hub: positioned as a gateway to the Workers, the Hub hosts an HTTP server. Its primary roles involve accepting and managing WebSocket connections, receiving dissected traffic from the Workers, and streaming these results back to the user.

Finally, the Worker: deployed as a DaemonSet, ensuring that each node in the cluster is under Kubeshark's purview. The Worker is the heart of the network analysis, capturing packets from all interfaces, reassembling TCP streams, and storing relevant data as PCAP files. It communicates the collected traffic to the Hub via WebSocket connections.

Additionally, Kubeshark uses a distributed PCAP-based storage system, wherein each Worker stores the captured L4 streams on the node’s root file system. Furthermore, its design emphasizes low network overhead, transmitting only essential traffic fragments upon request, ensuring efficient network operations.

Setup & Installation

Installation of Kubeshark is very simple, although you should be wary of firing off shell scripts directly from the web. It's super simple to get started, and one could also run this in an Ubuntu container, for example. Installation documentation can be found here.

➜ ~ sh <(curl -Ls https://kubeshark.co/install)

🦈 Started to download Kubeshark
########################################################################################################################################################### 100.0%

⬇️ Kubeshark is downloaded into /Users/rgo/kubeshark
Do you want to install system-wide? Requires sudo 😇 (y/N)? N
✅ You can use the ./kubeshark command now.

Please give us a star 🌟 on https://github.com/kubeshark/kubeshark if you ❤️ Kubeshark!

By running kubeshark tap, it will by default capture all traffic and throw us into a web UI. Instantly we start to see what the traffic flow looks like within our cluster. One can also filter by pod name, e.g. kubeshark tap "nginx*".
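Before looking at the full run below, a couple of variations I find handy (a sketch from memory; check kubeshark tap --help on your version):

# Target only pods matching a pattern, limited to one namespace
kubeshark tap "nginx*" -n jysk-system

# Remove the Kubeshark resources from the cluster again when finished
kubeshark clean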

➜ ~ kubeshark tap
2023-09-12T19:01:07+02:00 INF tapRunner.go:50 > Using Docker: registry=docker.io/kubeshark/ tag=latest
2023-09-12T19:01:07+02:00 INF versionCheck.go:23 > Checking for a newer version...
2023-09-12T19:01:07+02:00 WRN tapRunner.go:61 > Storage limit cannot be modified while persistentstorage is set to false!
2023-09-12T19:01:07+02:00 INF tapRunner.go:66 > Kubeshark will store the traffic up to a limit (per node). Oldest TCP/UDP streams will be removed once the limit is reached. limit=200Mi
2023-09-12T19:01:07+02:00 INF common.go:69 > Using kubeconfig: path=store-k3s.yaml
2023-09-12T19:01:07+02:00 INF tapRunner.go:83 > Telemetry enabled=true notice="Telemetry can be disabled by setting the flag: --telemetry-enabled=false"
2023-09-12T19:01:07+02:00 INF tapRunner.go:85 > Targeting pods in: namespaces=[""]
2023-09-12T19:01:07+02:00 INF tapRunner.go:146 > Targeted pod: cilium-operator-75f67d4c9-drcnt
2023-09-12T19:01:07+02:00 INF tapRunner.go:146 > Targeted pod: svclb-cilium-ingress-a7998e18-j5zm9
2023-09-12T19:01:07+02:00 INF tapRunner.go:146 > Targeted pod: local-path-provisioner-845998b5d6-76f7q
2023-09-12T19:01:07+02:00 INF tapRunner.go:146 > Targeted pod: coredns-5f58d47db4-qvlqn
2023-09-12T19:01:07+02:00 INF tapRunner.go:146 > Targeted pod: metrics-server-84c965cd47-qrpq2
2023-09-12T19:01:07+02:00 INF tapRunner.go:146 > Targeted pod: external-secrets-6977b958d4-9xn5m
2023-09-12T19:01:07+02:00 INF tapRunner.go:146 > Targeted pod: cilium-qwf4c
2023-09-12T19:01:07+02:00 INF tapRunner.go:146 > Targeted pod: external-secrets-webhook-5db5d87567-cc68r
2023-09-12T19:01:07+02:00 INF tapRunner.go:146 > Targeted pod: external-secrets-cert-controller-89c86685b-q8gk9
2023-09-12T19:01:07+02:00 INF tapRunner.go:146 > Targeted pod: hubble-relay-5447546447-x29c8
2023-09-12T19:01:07+02:00 INF tapRunner.go:146 > Targeted pod: hubble-ui-694cf76f4c-ssqc8
2023-09-12T19:01:07+02:00 INF tapRunner.go:146 > Targeted pod: nginx-7cd6548575-c2zsq
2023-09-12T19:01:07+02:00 INF tapRunner.go:146 > Targeted pod: netshoot-b78cd67fb-c42lp
2023-09-12T19:01:07+02:00 INF tapRunner.go:146 > Targeted pod: notification-controller-5998f78db8-hwdhf
2023-09-12T19:01:07+02:00 INF tapRunner.go:146 > Targeted pod: kustomize-controller-65b9565bd8-lwnmj
2023-09-12T19:01:07+02:00 INF tapRunner.go:146 > Targeted pod: source-controller-55c549947b-2rbnz
2023-09-12T19:01:07+02:00 INF tapRunner.go:146 > Targeted pod: helm-controller-578d9c7969-5zxxl
2023-09-12T19:01:07+02:00 INF tapRunner.go:146 > Targeted pod: kubeshark-hub
2023-09-12T19:01:07+02:00 INF tapRunner.go:146 > Targeted pod: kubeshark-front
2023-09-12T19:01:07+02:00 INF tapRunner.go:95 > Waiting for the creation of Kubeshark resources...
2023-09-12T19:01:08+02:00 INF helm.go:128 > Downloading Helm chart: repo-path=Caches/helm/repository url=https://github.com/kubeshark/kubeshark.github.io/releases/download/kubeshark-50.2/kubeshark-50.2.tgz
2023-09-12T19:01:09+02:00 INF helm.go:147 > Installing using Helm: kube-version=">= 1.16.0-0" release=kubeshark source=["https://github.com/kubeshark/kubeshark/tree/master/helm-chart"] version=50.2
2023-09-12T19:01:10+02:00 INF helm.go:61 > creating 11 resource(s)
2023-09-12T19:01:11+02:00 INF tapRunner.go:106 > Installed the Helm release: kubeshark
2023-09-12T19:01:11+02:00 INF tapRunner.go:177 > Added: pod=kubeshark-hub
2023-09-12T19:01:11+02:00 INF tapRunner.go:268 > Added: pod=kubeshark-front
2023-09-12T19:01:13+02:00 INF proxy.go:31 > Starting proxy... namespace=default proxy-host=127.0.0.1 service=kubeshark-hub src-port=8898
2023-09-12T19:01:15+02:00 INF proxy.go:31 > Starting proxy... namespace=default proxy-host=127.0.0.1 service=kubeshark-front src-port=8899
2023-09-12T19:01:15+02:00 INF tapRunner.go:476 > Kubeshark is available at: url=http://127.0.0.1:8899
2023-09-12T19:01:18+02:00 INF tapRunner.go:450 > Hub is available at: url=http://127.0.0.1:8898

Navigate to the service map to visualise this:


Kubeshark UI


Kubeshark Service Map

If you are familiar with Wireshark syntax, we can filter out specifics. Example, to exclude all KubeDNS requests:

dst.name != "kube-dns" and dst.namespace != "kube-system"
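A few more illustrative filters in the same language (the field names follow the Kubeshark filtering docs as I recall them, so treat these as sketches and verify against the KFL reference):

# Only HTTP responses with server errors
http and response.status >= 500

# Only DNS traffic originating from a specific namespace
dns and src.namespace == "jysk-system"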

Tips

Kubeshark can sniff the encrypted traffic (TLS) in your cluster using eBPF without actually doing decryption. It hooks into entry and exit points in certain functions inside the OpenSSL library and Go’s crypto/tls package.

See their documentation for more details!

In addition there is a nice overview of performance in relation to the technology used in packet capturing: https://docs.kubeshark.co/en/performance with a comparison of AF_PACKET, AF_XDP and PF_RING. Each technology serves different purposes!

AF_PACKET provides raw access to network packets at the link-layer level and is useful for packet capturing and analysis.

AF_XDP allows high-performance packet processing in the kernel space using eBPF, suitable for tasks like packet filtering and forwarding.

PF_RING is a framework that offers efficient packet capture and processing, often with kernel bypass techniques, for applications requiring high-speed network analysis.

Netshoot

Project: https://github.com/nicolaka/netshoot

Netshoot is personally one of my favourite “toolbox” type of setups. It literally has everything one will ever need in troubleshooting.

Netshoot is a highly versatile Docker container tailored for network troubleshooting. Designed for DevOps engineers, network administrators, and anyone who needs to diagnose network issues within container environments, Netshoot packages a suite of powerful tools under one umbrella.

Key Features:

Netshoot comes with a range of tools like netstat, ifconfig, ping, traceroute, and many others. This makes it a one-stop shop for most network diagnosis needs. However, even with its vast toolkit, Netshoot is optimized to be lightweight, ensuring minimal overhead when deployed in container environments.

Being a Docker container, integrating Netshoot into any containerized infrastructure is seamless. This means engineers can swiftly deploy it, diagnose, and then dispose of it without affecting existing workloads.

Given the following usage scenario — Suppose you’re experiencing packet losses between microservices in a Kubernetes cluster. Instead of individually installing network tools on every node or service, you can deploy Netshoot and immediately access its toolkit to diagnose the issue.
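For example, a throwaway netshoot shell or an ephemeral debug container can be launched like this (a sketch; the pod name mypod is a placeholder):

# Throwaway interactive netshoot pod, deleted when the shell exits
kubectl run tmp-shell --rm -it --image=nicolaka/netshoot -- bash

# Attach netshoot as an ephemeral debug container to an existing pod
kubectl debug mypod -it --image=nicolaka/netshoot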

For anyone working in container environments, especially DevOps engineers, Netshoot is an indispensable tool. It simplifies the often complex task of network troubleshooting, making it easier to maintain and optimize infrastructures.

Tips

Netshoot (community) now includes a kubectl netshoot plugin:

A kubectl plugin to easily spin up and access a netshoot container. netshoot is a network troubleshooting Swiss-army knife which allows you to perform Kubernetes troubleshooting without installing any new packages in your containers or cluster nodes.
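Usage looks roughly like this (a sketch; the plugin is community-maintained and installable via krew, so check kubectl netshoot --help for the exact subcommands):

# Install the plugin via krew (assumes krew is already installed)
kubectl krew install netshoot

# Spin up a temporary netshoot pod
kubectl netshoot run tmp-shell

# Troubleshoot an existing pod with an ephemeral netshoot container
kubectl netshoot debug pod/netshoot-b78cd67fb-w5np7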

Conclusion

In the realm of network observability and troubleshooting within Kubernetes environments, several tools have carved out distinct niches. Cilium’s Hubble, tapping into the power of eBPF, offers an impressive real-time observability platform, showcasing the intricacies of network traffic in cloud-native deployments. Its insights are amplified by the foundational explorations on eBPF by experts like Sebastian Wicki.

Kubeshark, on the other hand, elevates network analysis by incorporating advanced packet capture technologies. Notably, its capacity to trace encrypted traffic without decryption is groundbreaking. With service mesh-awareness and versatile deployment options, Kubeshark stands as a robust solution for DevOps and network professionals.

Yet, amidst these advanced tools, Netshoot stands out with its simplistic yet highly effective approach. As a personal favorite, Netshoot serves as a comprehensive “toolbox” for network troubleshooting in container environments. It bundles essential network tools, ensuring that DevOps engineers and network administrators have everything at their fingertips for swift diagnosis. Lightweight and seamlessly integrated into containerized systems, Netshoot is an invaluable asset for those looking to optimize and maintain their infrastructures efficiently.

Keep in mind, it's all about selecting the right tool for the job! With the aforementioned tools at our disposal, diagnosing issues in our deployments becomes a straightforward affair.

Until next time — happy debugging!

