Ankit Bansal

Posted on • Originally published at ankitbansalblog.Medium

Centralized logging in Oracle Kubernetes Engine with Object Storage

There are various ways to capture logs in Kubernetes. One of the simplest mechanisms is a pod-level script that uploads logs to the destination system. This approach can work while you are trying out Kubernetes, but as soon as you decide to use it in production and have multiple applications to deploy, you see the need for a centralized logging mechanism that is independent of the pod lifecycle.

We had the requirement to set up logging for our Kubernetes cluster. The cluster is built on Oracle Kubernetes Engine (OKE), and we wanted to persist logs in OCI Object Storage. The first step was to find an efficient way of capturing logs. After looking into multiple options, we found a DaemonSet to be a great fit for the following reasons:

  • Kubernetes ensures that every node runs a copy of the pod. Whenever a new node is added to the cluster, Kubernetes automatically brings up a pod on it. This can be further customized to only target nodes matching a selection criteria (see the sketch right after this list).
  • It avoids changing individual application deployments. If you later want to change your logging mechanism, you only need to change your DaemonSet.
  • There is no impact on application performance due to log capturing, as it runs outside the application pods.
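
As a minimal sketch of that selection criteria (not part of the setup below; the logging: "true" label is hypothetical), a nodeSelector in the pod template restricts a DaemonSet to labeled nodes:

spec:
  template:
    spec:
      # Schedule logging pods only on nodes carrying this (hypothetical) label,
      # applied with e.g.: kubectl label node <node-name> logging=true
      nodeSelector:
        logging: "true"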

Once that was finalized, the next step was to configure the DaemonSet to capture logs from OKE and publish them to OCI Object Storage. We had used fluentd before, and we decided to go ahead with it. Fluentd already provides an image for running as a daemonset and uploading logs to S3. Since Object Storage is compatible with the S3 API, we were able to use it with some customization of fluent.conf.

Setting up cluster role

The fluentd daemonset runs in the kube-system namespace. The first step is to create a new service account and grant it the required privileges.

Create Service Account:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: kube-system

kubectl create -f serviceaccount.yaml

Create Cluster Role:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - namespaces
  verbs:
  - get
  - list
  - watch

kubectl create -f clusterrole.yaml

Create a binding between the cluster role and the service account:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: kube-system

kubectl create -f clusterrolebinding.yaml
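
To sanity-check the binding, you can ask the API server whether the new service account is now allowed to list pods:

kubectl auth can-i list pods --as=system:serviceaccount:kube-system:fluentd --all-namespaces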

Create Daemonset

Next, we create a config map to provide the custom fluentd configuration:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-config
  namespace: kube-system
data:
  fluent.conf: |
    @include "#{ENV['FLUENTD_SYSTEMD_CONF'] || '../systemd'}.conf"
    @include "#{ENV['FLUENTD_PROMETHEUS_CONF'] || '../prometheus'}.conf"
    @include ../kubernetes.conf
    @include conf.d/*.conf

    <match **>
      @type s3
      @id out_s3
      @log_level info
      s3_bucket "#{ENV['S3_BUCKET_NAME']}"
      s3_endpoint "#{ENV['S3_ENDPOINT']}"
      s3_region "#{ENV['S3_BUCKET_REGION']}"
      s3_object_key_format %{path}%Y/%m/%d/cluster-log-%{index}.%{file_extension}
      <inject>
        time_key time
        tag_key tag
        localtime false
      </inject>
      <buffer>
        @type file
        path /var/log/fluentd-buffers/s3.buffer
        timekey 3600
        timekey_use_utc true
        chunk_limit_size 256m
      </buffer>
    </match>

kubectl create -f configmap.yaml

Now we can go ahead and define the daemonset:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-logging
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccountName: fluentd
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1.3.3-debian-s3-1.3
        env:
          - name: AWS_ACCESS_KEY_ID
            value: "#{OCI_ACCESS_KEY}"
          - name: AWS_SECRET_ACCESS_KEY
            value: "#{OCI_ACCESS_SECRET}"
          - name: S3_BUCKET_NAME
            value: "#{BUCKET_NAME}"
          - name: S3_BUCKET_REGION
            value: "#{OCI_REGION}"
          - name: S3_ENDPOINT
            value: "#{OBJECT_STORAGE_END_POINT}"
          - name: FLUENT_UID
            value: "0"
          - name: FLUENTD_CONF
            value: "override/fluent.conf"
          - name: FLUENTD_SYSTEMD_CONF
            value: "disable"
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log/
        - name: u01data
          mountPath: /u01/data/docker/containers/
          readOnly: true
        - name: fluentconfig
          mountPath: /fluentd/etc/override/
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log/
      - name: u01data
        hostPath:
          path: /u01/data/docker/containers/
      - name: fluentconfig
        configMap:
          name: fluent-config

A couple of things to note:

  • We provide a custom fluent.conf via the config map and mount it in the daemonset. This is required to set an explicit s3_endpoint, since the image by default doesn’t offer a way to provide a custom s3_endpoint.
  • Following are the environment variables we need to configure (a key-generation sketch follows this list):
    • S3_BUCKET_NAME is the Object Storage bucket where logs are stored.
    • S3_BUCKET_REGION is the OCI region, e.g. us-ashburn-1.
    • S3_ENDPOINT is the Object Storage S3-compatibility endpoint, e.g. https://#{tenantname}.compat.objectstorage.us-ashburn-1.oraclecloud.com.
    • AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are the customer secret keys for your user. If not already present, refer to the doc to generate them.
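
If you have the OCI CLI installed, one way to generate the customer secret key pair is sketched below; the user OCID and display name are placeholder values:

# Creates an S3-compatible customer secret key for the given user.
# The secret is displayed only once in the response, so store it safely
# (for production, prefer a Kubernetes Secret over inline env values).
oci iam customer-secret-key create \
  --user-id ocid1.user.oc1..exampleuniqueID \
  --display-name "fluentd-object-storage"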

Let’s go ahead and create the daemonset:

kubectl create -f daemonset.yaml
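
To verify the rollout, check that the daemonset has reached its desired count and that a fluentd pod is running on each node:

kubectl get daemonset fluentd -n kube-system
kubectl get pods -n kube-system -l k8s-app=fluentd-logging -o wide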

Once configured, you should be able to see logs in Object Storage. If the bucket doesn’t exist, fluentd will create it.
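
As a quick check from the OCI CLI, you can list the uploaded objects; the bucket name below is a placeholder for whatever you set in S3_BUCKET_NAME:

# Lists the log objects fluentd has uploaded so far.
oci os object list --bucket-name <your-log-bucket>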
