In last week’s blog post, Monitoring AKS With Prometheus and Grafana, you learned how to monitor Azure Kubernetes Service (AKS) with Prometheus and Grafana, along with the theory behind why you’d want to implement monitoring and observability.
In this blog post, you’ll learn how to implement Grafana and Prometheus again, but this time in Elastic Kubernetes Service (EKS).
There are several options for setting up an EKS cluster, but the two primary options are typically:
- Via an Infrastructure-as-Code tool like Terraform
- Via the UI (manual, not repeatable, and not recommended).
If you want to use Terraform, you can check out the open-source code that I wrote to get your EKS cluster up and running here: https://github.com/AdminTurnedDevOps/Kubernetes-Quickstart-Environments/tree/main/aws/eks
If you choose to go with the UI/portal method, log into AWS and search for the EKS service.
Just like any other Kubernetes cluster, the /metrics endpoint needs to be available.
Unlike AKS, EKS doesn’t expose the Metrics server (Pod) by default. Instead, you have to configure it.
First, connect to the EKS cluster (replace region and cluster_name with your own values):
aws eks --region region update-kubeconfig --name cluster_name
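Once your kubeconfig is updated, a quick sanity check is to confirm the worker nodes report Ready before deploying anything. The all_nodes_ready helper below is just an illustrative sketch (it is not part of kubectl or the AWS CLI); it parses standard `kubectl get nodes` output:

```shell
# all_nodes_ready: reads `kubectl get nodes` output (NAME STATUS ROLES AGE VERSION)
# on stdin and succeeds only if every node's STATUS column is exactly "Ready".
# (Hypothetical helper for illustration; not a kubectl subcommand.)
all_nodes_ready() {
  awk 'NR > 1 { if ($2 != "Ready") bad = 1 } END { exit bad }'
}

# Typical usage against a live cluster:
#   kubectl get nodes | all_nodes_ready && echo "all nodes Ready"
```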
Next, run the following command to deploy the Metrics server:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
Check to see if the Metrics Pod is up and running:
kubectl get pods -n kube-system
If the /metrics endpoint is available, you’ll see an output similar to the text below with the Metrics Pod running.
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   aws-node-dzklw                    1/1     Running   0          5m18s
kube-system   coredns-d5b9bfc4-p4ndr            1/1     Running   0          8m57s
kube-system   coredns-d5b9bfc4-rtpgx            1/1     Running   0          8m57s
kube-system   kube-proxy-v67z5                  1/1     Running   0          5m18s
kube-system   metrics-server-847dcc659d-9d2l5   1/1     Running   0          32s
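If you’d rather script this check than eyeball the list, the sketch below parses `kubectl get pods`-style output (with a NAMESPACE column, as above) and succeeds only when a Pod matching the given name prefix is Running. The pod_is_running function is a hypothetical helper, not a kubectl subcommand:

```shell
# pod_is_running: reads `kubectl get pods -A`-style output
# (NAMESPACE NAME READY STATUS RESTARTS AGE) on stdin and succeeds if any Pod
# whose name starts with the given prefix has STATUS "Running".
# (Hypothetical helper for illustration.)
pod_is_running() {
  awk -v name="$1" '$2 ~ "^" name && $4 == "Running" { found = 1 } END { exit !found }'
}

# Typical usage:
#   kubectl get pods -A | pod_is_running metrics-server && echo "metrics-server is up"
```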
Once the Metrics server (Pod) is available, you can deploy Prometheus.
For the purposes of this blog post, you can use the prometheus-community Helm chart. There are several ways to deploy Prometheus and Grafana, and this is one of the most popular.
First, add the Helm repo and refresh your local chart index.
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
Next, deploy the Helm chart with the specific storage requirements needed for any volumes that may run in your EKS cluster. In this case, the --set flag uses gp2 for the storage class.
helm install prometheus \
  prometheus-community/kube-prometheus-stack \
  --namespace monitoring \
  --create-namespace \
  --set alertmanager.persistentVolume.storageClass="gp2",server.persistentVolume.storageClass="gp2"
You’ll see output from Helm confirming that the release was deployed.
Once Prometheus and Grafana are deployed from the steps above, let’s confirm that everything is up and running as expected.
First, run the following command:
kubectl get all -n monitoring
You should see output confirming that all Prometheus and Grafana resources are actively running.
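If you want to script this confirmation too, the sketch below parses `kubectl get all`-style output and succeeds only when every Pod line is Running with all of its containers ready. The pods_all_ready function is a hypothetical helper for illustration:

```shell
# pods_all_ready: reads `kubectl get all`-style output on stdin and succeeds
# only if every pod/ line has STATUS "Running" and a fully-ready READY column
# (e.g. 2/2). (Hypothetical helper; not a kubectl subcommand.)
pods_all_ready() {
  awk '$1 ~ /^pod\// {
    split($2, r, "/")                       # READY column, e.g. "2/2"
    if (r[1] != r[2] || $3 != "Running") bad = 1
  } END { exit bad }'
}

# Typical usage (pod names are illustrative):
#   kubectl get all -n monitoring | pods_all_ready && echo "monitoring stack is healthy"
```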
Next, confirm that you can reach Prometheus via Kubernetes port forwarding so you can see if the /metrics endpoint is getting consumed for Kubernetes metrics.
kubectl port-forward svc/prometheus-kube-prometheus-prometheus -n monitoring 4001:9090
Open up a web browser and go to http://localhost:4001 (the local port from the port-forward above).
Go to Status → Targets.
If you scroll down a bit on the Targets page, you can confirm that the /metrics endpoint from Kubernetes is getting consumed.
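If you prefer the command line over the UI, the same check can be sketched against Prometheus’s HTTP API: /api/v1/targets lists every scrape target along with its health. The count_up_targets helper below is just an illustration that counts healthy targets in that JSON with a crude grep, not a full JSON parse:

```shell
# count_up_targets: counts occurrences of "health":"up" in the JSON returned
# by Prometheus's /api/v1/targets endpoint (read from stdin).
# (Hypothetical helper for illustration; a crude text match, not a JSON parser.)
count_up_targets() {
  grep -o '"health":"up"' | wc -l | tr -d ' '
}

# Typical usage while the port-forward from above is running:
#   curl -s http://localhost:4001/api/v1/targets | count_up_targets
```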
Congrats! You have officially set up Prometheus and Grafana on EKS.