Kubernetes Patterns: The Adapter Pattern

Containerized applications typically communicate with each other through a well-defined protocol, usually HTTP. Each application exposes a set of endpoints that expect an HTTP verb to perform a specific action, and it is the client's responsibility to determine how to communicate with the server application. However, you may also have a service that expects a specific response format from every application it talks to. The most common example of this kind of service is Prometheus, a well-known monitoring system that checks not only whether an application is up, but also whether it is behaving as expected.

Prometheus works by querying an endpoint exposed by the target application. The endpoint must return the diagnostic data in a format that Prometheus expects. A possible solution is to configure each application to output its health data in a Prometheus-friendly way. However, you may later need to switch your monitoring solution to another tool that expects a different format, and changing the application code every time the health-status format changes is highly inefficient. Following the Adapter Pattern, we can instead run a sidecar container in the same Pod as the application's container. The only purpose of this sidecar (the adapter container) is to "translate" the output of the application's endpoint into a format that Prometheus (or whichever client tool) accepts and understands.
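
In outline, the pattern looks like the following sketch. The container names, images, and ports here are placeholders, not a specific implementation:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-adapter             # placeholder name
spec:
  containers:
  - name: app                        # the main application container
    image: my-app:latest             # placeholder image
    ports:
    - containerPort: 8080            # the app exposes its native status format here
  - name: adapter                    # sidecar that translates the app's output
    image: my-metrics-adapter:latest # placeholder image
    ports:
    - containerPort: 9000            # clients (e.g. Prometheus) scrape this port

Because both containers share the Pod's network namespace, the adapter can reach the application on localhost and re-expose its data on a separate port.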

Scenario: Using an Adapter Container with Nginx

Nginx has an endpoint that can be used for querying the web server's status. In this scenario, we add an adapter container that transforms this endpoint's output into the format Prometheus requires. First, we need to enable this endpoint on Nginx by making a change to the default.conf file. The following ConfigMap contains the required default.conf file:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  default.conf: |
    server {
      listen       80;
      server_name  localhost;
      location / {
          root   /usr/share/nginx/html;
          index  index.html index.htm;
      }
      error_page   500 502 503 504  /50x.html;
      location = /50x.html {
          root   /usr/share/nginx/html;
      }
      location /nginx_status {
        stub_status;
        allow 127.0.0.1;  #only allow requests from localhost
        deny all;   #deny all other hosts
      }
    }

This is the default.conf file that ships with the nginx Docker image, with one addition: a /nginx_status location that uses the stub_status module to expose nginx's diagnostic information. Next, let's create the Nginx Pod with the adapter container:

apiVersion: v1
kind: Pod
metadata:
  name: webserver
spec:
  volumes:
  - name: nginx-conf
    configMap:
      name: nginx-conf
      items:
      - key: default.conf
        path: default.conf
  containers:
  - name: webserver
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - mountPath: /etc/nginx/conf.d
      name: nginx-conf
      readOnly: true
  - name: adapter
    image: nginx/nginx-prometheus-exporter:0.4.2
    args: ["-nginx.scrape-uri","http://localhost/nginx_status"]
    ports:
    - containerPort: 9113
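
Assuming you saved the two manifests as nginx-conf.yaml and webserver-pod.yaml (the file names are arbitrary), you can deploy them with:

$ kubectl apply -f nginx-conf.yaml      # the ConfigMap above
$ kubectl apply -f webserver-pod.yaml   # the Pod above
$ kubectl get pod webserver             # wait until both containers are Running (2/2)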

The Pod definition contains two containers: the nginx container, which acts as the application container, and the adapter container. The adapter uses the nginx/nginx-prometheus-exporter image, which transforms the metrics that Nginx exposes on /nginx_status into the Prometheus format. If you are interested in seeing the difference between the two outputs, do the following:

$ kubectl exec -it webserver bash
Defaulting container name to webserver.
Use 'kubectl describe pod/webserver -n default' to see all of the containers in this pod.
root@webserver:/# apt update && apt install curl -y
root@webserver:/# curl localhost/nginx_status
Active connections: 1
server accepts handled requests
 3 3 3
 Reading: 0 Writing: 1 Waiting: 0
root@webserver:/# curl localhost:9113/metrics
# HELP nginx_connections_accepted Accepted client connections
# TYPE nginx_connections_accepted counter
nginx_connections_accepted 4
# HELP nginx_connections_active Active client connections
# TYPE nginx_connections_active gauge
nginx_connections_active 1
# HELP nginx_connections_handled Handled client connections
# TYPE nginx_connections_handled counter
nginx_connections_handled 4
# HELP nginx_connections_reading Connections where NGINX is reading the request header
# TYPE nginx_connections_reading gauge
nginx_connections_reading 0
# HELP nginx_connections_waiting Idle client connections
# TYPE nginx_connections_waiting gauge
nginx_connections_waiting 0
# HELP nginx_connections_writing Connections where NGINX is writing the response back to the client
# TYPE nginx_connections_writing gauge
nginx_connections_writing 1
# HELP nginx_http_requests_total Total http requests
# TYPE nginx_http_requests_total counter
nginx_http_requests_total 4
# HELP nginx_up Status of the last metric scrape
# TYPE nginx_up gauge
nginx_up 1
# HELP nginxexporter_build_info Exporter build information
# TYPE nginxexporter_build_info gauge
nginxexporter_build_info{gitCommit="f017367",version="0.4.2"} 1

So we logged into the webserver Pod, installed curl so that we could make HTTP requests, and queried both the /nginx_status endpoint and the exporter's endpoint (on port 9113 at /metrics). Notice that in both requests we used localhost as the server address: both containers run in the same Pod and therefore share the same network namespace and loopback address.
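
The remaining step would be pointing Prometheus at the exporter's port. As a minimal sketch, assuming a Prometheus installation configured through a hand-written prometheus.yml and a Pod IP that is reachable from Prometheus, a static scrape job could look like this (the job name and target placeholder are illustrative):

scrape_configs:
- job_name: nginx            # illustrative job name
  static_configs:
  - targets:
    - <pod-ip>:9113          # the adapter container's port inside the webserver Pod

In a real cluster you would more likely use Prometheus' Kubernetes service discovery or a Service in front of the Pod instead of a hard-coded Pod IP, but the idea is the same: Prometheus scrapes the adapter, not the application itself.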
