
Vidyasagar SC Machupalli

Originally published at Medium

Knative monitoring with Grafana, Zipkin, Weavescope & other plugins..

In this post, you will see the telemetry side of Knative and Istio for a Node.js app named Knative-node-app, published on IBM Cloud in the previous post, Install Knative with Istio and deploy an app on IBM Cloud.

As per the Monitoring, Logging and Tracing Installation documentation of Knative,

Knative Serving offers two different monitoring setups: Elasticsearch, Kibana, Prometheus and Grafana, or Stackdriver, Prometheus and Grafana. You can install only one of these two setups, and side-by-side installation of the two is not supported.

We will stick to the Elasticsearch, Kibana, Prometheus and Grafana stack, and will additionally use Weavescope for in-depth visualization of containers, pods, and other resources.

If you installed the Serving component while setting up Knative, the monitoring component should already be installed. To confirm the Knative Serving installation, run the following command:

$ kubectl describe deploy controller --namespace knative-serving

Knative Serving details
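
For a quicker sanity check, you can also list the pods in the knative-serving namespace and confirm they are all Running:

kubectl get pods --namespace knative-serving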

To check the installation of the monitoring component, run the following command:

kubectl get pods --namespace knative-monitoring

If you don’t see anything running, follow the steps here to set it up.
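
Once the monitoring pods are up, a quick way to confirm everything has settled is to check the deployments in the namespace. The grafana deployment name below is an assumption based on the default monitoring bundle; substitute whatever kubectl get deployments reports:

# List the monitoring deployments and wait for Grafana to finish rolling out
kubectl get deployments --namespace knative-monitoring
kubectl rollout status deployment grafana --namespace knative-monitoring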

Grafana

You can access metrics through the Grafana UI. Grafana is the visualization tool for Prometheus.

To open Grafana, enter the following command:

kubectl port-forward --namespace knative-monitoring $(kubectl get pods --namespace knative-monitoring --selector=app=grafana --output=jsonpath="{.items..metadata.name}") 3000

Note: This starts a local proxy of Grafana on port 3000. For security reasons, the Grafana UI is exposed only within the cluster.

Navigate to the Grafana UI at http://localhost:3000.

Grafana UI — knative-node-app

You can also check the metrics for Knative Serving: scaling, deployments, pods, and so on.

Knative Serving — scaling metrics

The following dashboards are pre-installed with Knative Serving:

  • Revision HTTP Requests: HTTP request count, latency, and size metrics per revision and per configuration
  • Nodes: CPU, memory, network, and disk metrics at node level
  • Pods: CPU, memory, and network metrics at pod level
  • Deployment: CPU, memory, and network metrics aggregated at deployment level
  • Istio, Mixer and Pilot: Detailed Istio mesh, Mixer, and Pilot metrics
  • Kubernetes: Dashboards giving insights into cluster health, deployments, and capacity usage
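
The dashboards above are all backed by Prometheus, so you can also query the raw metrics directly. Here is a minimal sketch, assuming the Prometheus pods in the knative-monitoring namespace carry an app=prometheus label (adjust the selector to whatever your install uses):

# Forward the Prometheus UI to http://localhost:9090 for ad-hoc queries
kubectl port-forward --namespace knative-monitoring \
  $(kubectl get pods --namespace knative-monitoring --selector=app=prometheus --output=jsonpath="{.items[0].metadata.name}") 9090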

Zipkin

In order to access request traces, you use the Zipkin visualization tool.

To open the Zipkin UI, enter the following command:

kubectl proxy

This command starts a local proxy on port 8001, through which the Zipkin UI can be reached. For security reasons, the Zipkin UI is exposed only within the cluster.
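
If the proxy route is awkward to work with, port-forwarding the Zipkin service directly is an alternative. This is a sketch that assumes Zipkin runs as a service named zipkin in the istio-system namespace, listening on port 9411; adjust the names if your tracing setup differs:

# Forward the Zipkin UI to http://localhost:9411
kubectl port-forward --namespace istio-system service/zipkin 9411:9411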

Navigate to the Zipkin UI and click “Find Traces” to see the latest traces. You can search for a trace ID or look at traces of a specific application. Click on a trace to see a detailed view of a specific call.

Zipkin Traces — Knative-node-app-0001

Weavescope

While obtaining and visualising uniform metrics, logs, and traces across microservices using Istio, I fell in love with Weavescope. So I thought of playing with it to understand the processes, containers, hosts, and other components involved in my application.

Scope is deployed onto a Kubernetes cluster with the following command:

kubectl apply -f "https://cloud.weave.works/k8s/scope.yaml?k8s-version=$(kubectl version | base64 | tr -d '\n')"

To open Weavescope, run the command below and then open http://localhost:4040/

kubectl port-forward -n weave "$(kubectl get -n weave pod --selector=weave-scope-component=app -o jsonpath='{.items..metadata.name}')" 4040
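
If the port-forward fails, it is usually because the Scope pods have not finished starting. Listing the pods in the weave namespace shows their status; the app and agent pods should all be Running before you retry:

kubectl get pods --namespace weave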

Kibana + Elasticsearch

I tried to visualise the logs using the Kibana UI (the visualization tool for Elasticsearch), but got stuck with the following error while configuring an index pattern: “Unable to fetch mapping. Do you have indices matching the pattern?”

As the “logging and monitoring” topics are due to be revised per this issue on the Knative GitHub repo, I will be revisiting logs in the future for sure.

Update: I found a workaround for this issue by following the answers in this Stack Overflow question.

Here are the steps:

Run the command below to apply a patch that fixes the issue of the fluentd-ds pods not showing up:

kubectl apply -f https://raw.githubusercontent.com/gevou/knative-blueprint/master/knative-serving-release-0.2.2-patched.yaml

Verify that each of your nodes has the beta.kubernetes.io/fluentd-ds-ready=true label:

kubectl get nodes --selector beta.kubernetes.io/fluentd-ds-ready=true

If you receive the No Resources Found response:

  • Run the following command to ensure that the Fluentd DaemonSet runs on all your nodes:
kubectl label nodes --all beta.kubernetes.io/fluentd-ds-ready="true"
  • Run the following command to ensure that the fluentd-ds daemonset is ready on at least one node:
kubectl get daemonset fluentd-ds --namespace knative-monitoring

Wait for a while and then run this command:

kubectl proxy
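
With the proxy running, Kibana is served through the Kubernetes API server. The path below is an assumption based on the default kibana-logging service in the knative-monitoring namespace; adjust it if your service name differs:

http://localhost:8001/api/v1/namespaces/knative-monitoring/services/kibana-logging/proxy/app/kibana
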
  • Navigate to the Kibana UI. It might take a couple of minutes for the proxy to work.
  • Within the “Configure an index pattern” page, enter logstash-* in the Index pattern field, select @timestamp as the Time Filter field name, and click the Create button.
  • To create the second index pattern, select the Create Index Pattern button at the top left of the page. Enter zipkin* in the Index pattern field, select timestamp_millis as the Time Filter field name, and click the Create button.

If you still see the issue, try clicking Dev Tools on the Kibana page and run this command:

GET _cat/indices?v
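
If no logstash-* index shows up in the output, Fluentd is most likely not shipping logs yet; re-check the node labels and the fluentd-ds daemonset status from the steps above before creating the index pattern again.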


