Using AMP (Amazon Managed Service for Prometheus) can be really useful because it takes the burden of running and scaling Prometheus off our hands, especially when we are looking for a scalable platform.
But currently, there is no built-in interface to browse the data, the rules, the alerts... So here comes Prom-UI!
Prom-UI
Prom-UI is the Prometheus web interface, extracted so it can run in standalone mode, with a configuration option that lets you define where it retrieves its data from!
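For example, you can try it locally with Docker, where <prometheus host> is any Prometheus server reachable from the container (the image and the PROMETHEUS_SRC_API variable are the same ones used in the deployment later in this article):

docker run -e PROMETHEUS_SRC_API=http://<prometheus host>:9090 -p 3000:3000 adaendraa/prom-ui:1.0.2

The UI is then available on http://localhost:3000.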
How to use it?
First, Prom-UI cannot authenticate to AWS by itself, so we will need a little bit of setup in AWS and another tool to make it work.
1 - Setup in AWS
(This assumes you already have AMP set up, with workspaces defined.)
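By the way, if you need to look up your AMP workspace IDs (the ws-... values used later in this article), the AWS CLI can list them:

aws amp list-workspaces --query "workspaces[].{id:workspaceId,alias:alias}" --output table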
In AWS, we will need to define an IAM role to grant access to the data in AMP.
- On the IAM console, choose Roles in the navigation pane
- Choose Create role and choose Custom trust policy
- Replace the custom trust policy with the following one, updating the fields Account Number, region, OpenID Connect ID, namespace & service account name
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<Account Number>:oidc-provider/oidc.eks.<region>.amazonaws.com/id/<OpenID Connect ID>"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.<region>.amazonaws.com/id/<OpenID Connect ID>:sub": "system:serviceaccount:<namespace>:<service account name>"
        }
      }
    }
  ]
}
In this case, the Prom-UI app will be deployed in an EKS cluster, so this trust policy lets the Kubernetes service account assume the role through the cluster's OIDC provider (this is IAM Roles for Service Accounts). The OpenID Connect ID can be retrieved from the OpenID Connect provider URL field in the AWS EKS console.
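If you prefer the CLI, the same value can be read from the cluster itself (the cluster name below is a placeholder); the OpenID Connect ID is the last path segment of the returned URL:

aws eks describe-cluster --name <cluster name> --query "cluster.identity.oidc.issuer" --output text
# e.g. https://oidc.eks.<region>.amazonaws.com/id/<OpenID Connect ID>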
- Choose Next
- Click on Create Policy; it will open a new tab
- Paste the following policy in the JSON tab
{
  "Statement": [
    {
      "Action": [
        "aps:ListRules",
        "aps:ListAlertManagerAlerts",
        "aps:ListTagsForResource",
        "aps:GetLabels",
        "aps:ListRuleGroupsNamespaces",
        "aps:GetAlertManagerStatus",
        "aps:GetAlertManagerSilence",
        "aps:ListAlertManagerAlertGroups",
        "aps:DescribeAlertManagerDefinition",
        "aps:QueryMetrics",
        "aps:DescribeRuleGroupsNamespace",
        "aps:GetMetricMetadata",
        "aps:DescribeWorkspace",
        "aps:ListAlerts",
        "aps:DescribeLoggingConfiguration",
        "aps:ListAlertManagerSilences",
        "aps:ListWorkspaces",
        "aps:GetSeries",
        "aps:ListAlertManagerReceivers"
      ],
      "Effect": "Allow",
      "Resource": "*",
      "Sid": "VisualEditor0"
    }
  ],
  "Version": "2012-10-17"
}
- Click on Next, and Review
- Define a name for the policy, and click on Create Policy
- Go back to the tab where you were creating the role
- Refresh the list of policies, and select the one you just created
- Then click on Next, name the role and click on Create role
- Retrieve the newly created role and get its ARN
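If you want to script this instead of clicking through the console, a rough CLI equivalent looks like this (the policy/role names and the file names are placeholders I chose, not fixed values):

# Create the permissions policy from the JSON above
aws iam create-policy --policy-name prom-ui-amp-read --policy-document file://permissions-policy.json
# Create the role with the trust policy from step 1
aws iam create-role --role-name prom-ui --assume-role-policy-document file://trust-policy.json
# Attach the policy to the role
aws iam attach-role-policy --role-name prom-ui --policy-arn arn:aws:iam::<Account Number>:policy/prom-ui-amp-read
# Get the role ARN (needed for the Kubernetes setup below)
aws iam get-role --role-name prom-ui --query "Role.Arn" --output text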
2 - Setup in Kubernetes
Now that everything is prepared in AWS, we can set things up in Kubernetes!
ServiceAccount
Create a service account using the following template, where you have to replace arn with the ARN of the created role, and service account name & namespace with the values you used while creating the role.
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: <arn>
  name: <service account name>
  namespace: <namespace>
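Apply it and check that the role annotation is in place (the file name is just an example):

kubectl apply -f service-account.yaml
kubectl -n <namespace> describe sa <service account name>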
Deployment
For the deployment, you can use the following configuration, where you have to replace:
- deployment name
- namespace
- region - the region of your AMP workspace
- workspace UUID - the UUID of the workspace in AMP
- service account name
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <deployment name>
  namespace: <namespace>
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prom-ui
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: prom-ui
    spec:
      containers:
        - env:
            - name: PROMETHEUS_SRC_API
              value: http://localhost:8005/workspaces/ws-<workspace UUID>
          image: adaendraa/prom-ui:1.0.2
          livenessProbe:
            failureThreshold: 6
            httpGet:
              path: /health-check
              port: http
              scheme: HTTP
            periodSeconds: 5
            successThreshold: 1
            timeoutSeconds: 3
          name: prom-ui
          ports:
            - containerPort: 3000
              name: http
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /health-check
              port: http
              scheme: HTTP
            periodSeconds: 5
            successThreshold: 1
            timeoutSeconds: 3
        - args:
            - --name
            - aps
            - --region
            - <region>
            - --host
            - aps-workspaces.<region>.amazonaws.com
            - --port
            - ":8005"
          image: public.ecr.aws/aws-observability/aws-sigv4-proxy:1.0
          imagePullPolicy: IfNotPresent
          name: aws
          ports:
            - containerPort: 8005
              name: aws-sigv4-proxy
              protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      serviceAccount: <service account name>
      serviceAccountName: <service account name>
The deployment is pretty simple: it runs 2 containers:
- aws: an AWS SigV4 proxy container which signs our requests so AWS can authenticate them
- prom-ui: the web UI app, which points to http://localhost:8005/workspaces/ws-<workspace UUID> to retrieve data from AMP through the AWS SigV4 proxy container.
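Apply the manifest and make sure the pod comes up with both containers ready (again, the file name is just an example):

kubectl apply -f deployment.yaml
kubectl -n <namespace> get pods -l app=prom-ui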
Then you have to expose this deployment to be able to use it properly (I won't cover that here because there are a lot of ways to do it depending on your context).
To check if everything is right, you can use kubectl port-forward.
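For example, assuming the names used above:

kubectl -n <namespace> port-forward deploy/<deployment name> 3000:3000

Then open http://localhost:3000 in your browser; 3000 is the port exposed by the prom-ui container.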
I hope it helps you! 🍺