Thijs Dieltjens

Monitoring/logging your K8S NodeJS applications with elasticsearch

A quick guide on how to set up everything you need to start logging and monitoring your NodeJS applications hosted on Kubernetes using Elasticsearch.

We recently moved our application stack towards Kubernetes. While we immediately benefited from its advantages, we suddenly lacked centralized application-level logs for our NodeJS microservices. Previously, our Express API was perfectly capable of providing this data on its own; now it became a lot trickier to aggregate it when multiple pods ran simultaneously.

This triggered a web search for the ideal tool(s) to give us a better understanding of performance and of any errors that occur. Given we are a startup (www.bullswap.com), we gave preference to a cloud-agnostic, open source solution, and that is how we ended up looking at the Elastic stack (Elasticsearch, Kibana, APM Server).

With both Kubernetes and Elasticsearch changing so rapidly, it was not an easy task to get the right information. That is why we wanted to share our end result below, so you do not have to go through the same trouble.

Requirements

  • kubectl access to an up-to-date K8S cluster with enough capacity to handle at least 3GB of additional RAM usage
  • A NodeJS application

What are we setting up?

  • Elasticsearch cluster: stores and indexes the monitoring data (https://www.elastic.co/)
  • Kibana: provides data visualization on the Elasticsearch data
  • APM Server: receives data from the APM agents and transforms it into Elasticsearch documents
  • Your NodeJS services, instrumented with the APM agent

All code you see below should be placed in yaml files and applied using kubectl apply -f {file_name}.
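
For reference, the workflow could look like this (the file names are just an example):

# Apply the manifests one by one, namespace first
kubectl apply -f namespace.yaml
kubectl apply -f elasticsearch.yaml
kubectl apply -f kibana.yaml
kubectl apply -f apm-server.yaml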

Setting up Elasticsearch
To keep everything separated from your regular namespaces we first set up a new namespace.

kind: Namespace
apiVersion: v1
metadata:
  name: kube-logging
---

Next, we used a lot of the configuration we found in this tutorial to set up an Elasticsearch cluster consisting of a headless service and a StatefulSet with three replicas. The setup is described by the following yaml file:

kind: Service
apiVersion: v1
metadata:
  name: elasticsearch
  namespace: kube-logging
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  clusterIP: None
  ports:
    - port: 9200
      name: rest
    - port: 9300
      name: inter-node
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
  namespace: kube-logging
spec:
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: elasticsearch:7.14.1
        resources:
            limits:
              cpu: 1000m
            requests:
              cpu: 100m
        ports:
        - containerPort: 9200
          name: rest
          protocol: TCP
        - containerPort: 9300
          name: inter-node
          protocol: TCP
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
        env:
          - name: cluster.name
            value: k8s-logs
          - name: node.name
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: discovery.seed_hosts
            value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
          - name: cluster.initial_master_nodes
            value: "es-cluster-0,es-cluster-1,es-cluster-2"
          - name: ES_JAVA_OPTS
            value: "-Xms512m -Xmx512m"
      initContainers:
      - name: fix-permissions
        image: busybox
        command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
      - name: increase-vm-max-map
        image: busybox
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      - name: increase-fd-ulimit
        image: busybox
        command: ["sh", "-c", "ulimit -n 65536"]
        securityContext:
          privileged: true
  volumeClaimTemplates:
  - metadata:
      name: data
      labels:
        app: elasticsearch
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 100Gi

This should slowly start deploying three new pods. Once they have all started, quickly take a glance at the logs of one of them to check that everything is fine :).
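
A quick way to do that (a sketch; the pod names follow the es-cluster-N pattern defined by the StatefulSet above):

# List the Elasticsearch pods
kubectl get pods --namespace=kube-logging -l app=elasticsearch

# Glance at the logs of one of them
kubectl logs es-cluster-0 --namespace=kube-logging

# Optionally check cluster health through a temporary port-forward
# (run the curl in a second terminal)
kubectl port-forward es-cluster-0 9200:9200 --namespace=kube-logging
curl "http://localhost:9200/_cluster/health?pretty"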

Setting up Kibana
Now it is time to get Kibana started. Here we need to set up a new service consisting of a single replica deployment of the kibana image.

apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: kube-logging
  labels:
    app: kibana
spec:
  ports:
  - port: 5601
  selector:
    app: kibana
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: kube-logging
  labels:
    app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: kibana:7.14.1
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
          # Kibana 7.x reads ELASTICSEARCH_HOSTS (ELASTICSEARCH_URL was the 6.x setting)
          - name: ELASTICSEARCH_HOSTS
            value: http://elasticsearch:9200
        ports:
        - containerPort: 5601

After applying/creating the yaml file and allowing the pods to get ready, you should be able to test whether it is working correctly.
You can do so by looking up the pod name and port forwarding it to localhost.

kubectl port-forward kibana-xyz123456789 5601:5601 --namespace=kube-logging

Navigating to localhost:5601 should show you the Kibana interface. If Kibana notifies you that there is no data available, you can relax, as this is completely normal 😊.
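
For reference, this is how you can look up the pod name used above and, once the port-forward is active, check Kibana's status endpoint:

# Look up the Kibana pod name
kubectl get pods --namespace=kube-logging -l app=kibana

# With the port-forward running, Kibana should report its status
curl http://localhost:5601/api/status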

When everything appears to be working, it can be useful to set up a LoadBalancer/Ingress so you can access Kibana from the internet. If you do so, however, make sure you put security in place.
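
As an illustration only (assuming an NGINX ingress controller and a basic-auth secret you create yourself; the hostname and secret name are placeholders), such an Ingress could look roughly like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kibana
  namespace: kube-logging
  annotations:
    # Basic auth handled by the NGINX ingress controller
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: kibana-basic-auth
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
spec:
  rules:
  - host: kibana.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kibana
            port:
              number: 5601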

Setting up APM Server
I am grateful to this article for setting me on the right track. As it is no longer up to date, you can find our configuration below.

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: apm-server-config
  namespace: kube-logging
  labels:
    k8s-app: apm-server
data:
  apm-server.yml: |-
    apm-server:
      host: "0.0.0.0:8200"
      frontend:
        enabled: false
    setup.template.settings:
      index:
        number_of_shards: 1
        codec: best_compression
    setup.dashboards.enabled: false
    setup.kibana:
      host: "http://kibana:5601"
    output.elasticsearch:
      hosts: ['http://elasticsearch:9200']
      username: elastic
      password: elastic
---
apiVersion: v1
kind: Service
metadata:
  name: apm-server
  namespace: kube-logging
  labels:
    app: apm-server
spec:
  ports:
  - port: 8200
    targetPort: 8200
    name: http
    nodePort: 31000
  selector:
    app: apm-server
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apm-server
  namespace: kube-logging
spec:
  # this replicas value is default
  # modify it according to your case
  replicas: 1
  selector:
    matchLabels:
      app: apm-server
  template:
    metadata:
      labels:
        app: apm-server
    spec:
      containers:
      - name: apm-server
        image: docker.elastic.co/apm/apm-server:7.15.0
        ports:
        - containerPort: 8200
          name: apm-port
        volumeMounts:
        - name: apm-server-config
          mountPath: /usr/share/apm-server/apm-server.yml
          readOnly: true
          subPath: apm-server.yml
      volumes:
      - name: apm-server-config
        configMap:
          name: apm-server-config

After applying/creating the yaml file and allowing the pods to get ready, you should be able to test whether it is correctly connecting to Elasticsearch by looking at the logs.
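
For example (a sketch; the curl should return some version information):

# Check the APM Server logs for a successful connection to Elasticsearch
kubectl logs deployment/apm-server --namespace=kube-logging

# Optionally port-forward and hit the APM Server root endpoint
# (run the curl in a second terminal)
kubectl port-forward deployment/apm-server 8200:8200 --namespace=kube-logging
curl http://localhost:8200/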

Final step: sending data
The lines below should be the first require to load in your NodeJS application(s). When you add this to an Express server, you immediately start receiving logs about how transactions (HTTP requests) are handled. You will find useful information such as:

  • Which external services such as databases or APIs cause delays in your applications.
  • Which API calls are slow
  • Where and how often errors occur
  • NodeJS CPU usage
  • ...
const apm = require('elastic-apm-node').start({
  // Override service name from package.json
  // Allowed characters: a-z, A-Z, 0-9, -, _, and space
  serviceName: '{CHANGE THIS TO YOUR APPLICATION/SERVICE NAME}',
  // Set custom APM Server URL (default: http://localhost:8200)
  serverUrl: 'http://apm-server.kube-logging.svc.cluster.local:8200'
});

Send a few requests to your server and you should see a service appear in Kibana (Observability > APM).
By clicking on it you should be able to see a nice overview of transactions, throughput and latency. If for any reason this is not happening, I suggest you take a look at:

  • NodeJS logs (connection issues to APM will be logged here)
  • APM logs (issues connecting to elasticsearch will be here)

In the case of an Express server, you will often already catch a lot of the errors yourself and return, for example, a 500 response. Because the request is still handled, Elasticsearch will not treat it as an error. While you are able to filter on HTTP status codes, it can make sense to add the following line wherever you deal with unsuccessful events, so that they are treated as errors.

apm.captureError(error);
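
For example, in an Express error-handling middleware (a minimal sketch; the service name, route and error are placeholders, and the agent configuration is the same as shown earlier):

// Start the agent before requiring anything else
const apm = require('elastic-apm-node').start({
  serviceName: 'example-service',
  serverUrl: 'http://apm-server.kube-logging.svc.cluster.local:8200'
});
const express = require('express');
const app = express();

app.get('/example', (req, res, next) => {
  // Simulate a failure somewhere in your business logic
  next(new Error('Something went wrong'));
});

// Central error handler: report the error to APM and return a 500
app.use((err, req, res, next) => {
  apm.captureError(err);
  res.status(500).json({ message: 'Internal server error' });
});

app.listen(3000);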

Definitely explore the possibilities of Elasticsearch/Kibana/APM Server as it is capable of doing a lot more!

We hope this article is useful to some. Our goal was to save you the time we spent figuring it out for https://www.bullswap.com.
