Martin Pham

Having fun with Kubernetes - Final Chapter: Play time

Finally, you have your very first k8s cluster with 2 nodes up and running. Now let’s play with it!

(Optional) Install Kubernetes Dashboard

Kubernetes Dashboard is a very cool web-based UI for managing your k8s cluster.

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta6/aio/deploy/recommended.yaml
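You can quickly check that the dashboard Pods came up (this manifest deploys everything into the kubernetes-dashboard namespace):

$ kubectl get pods -n kubernetes-dashboard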

To access this dashboard, you might need to use a proxy:

$ kubectl proxy

After that, you can access it by opening
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/overview?namespace=default

To log in, you might need to create a user. However, for this lab, we can just grab the namespace-controller token to log in:

$ kubectl -n kube-system describe secret

You will find the namespace-controller token in the output. Copy and paste it into the dashboard authentication screen, and you will be authenticated.
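If you’d rather grab just the token in one shot, something like this shell one-liner should work (assuming the secret’s name starts with namespace-controller-token, which is the default naming):

$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep namespace-controller-token | awk '{print $1}') | grep token: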


Putting our Lego blocks together

Build and push the application image into the Docker registry
Go back to our repository; now we’d like to build it again, this time with a version tag, then push it into our Docker registry so Kubernetes can pull the image and deploy it.

$ docker build . -t martinpham/kphp:v1
$ docker tag martinpham/kphp:v1 192.168.1.33:5000/martinpham/kphp:v1
$ docker push 192.168.1.33:5000/martinpham/kphp:v1

Here we are building our Docker image with the name martinpham/kphp and tag v1 (you can use whatever name you want, don’t worry!), then pushing it under the name 192.168.1.33:5000/martinpham/kphp:v1.
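If you want to double-check that the push landed, you can ask the registry itself through its HTTP API (assuming the plain-HTTP registry on 192.168.1.33:5000 we set up earlier); you should see something like:

$ curl http://192.168.1.33:5000/v2/_catalog
{"repositories":["martinpham/kphp"]}
$ curl http://192.168.1.33:5000/v2/martinpham/kphp/tags/list
{"name":"martinpham/kphp","tags":["v1"]}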

Deploy pods
Create a ConfigMap configuration file: config.yml

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
data:
  nginx.conf: |
    events {
    }
    http {
      server {
        listen 80 default_server;
        listen [::]:80 default_server;

        root /var/www/html;
        index index.html index.htm index.php;
        server_name _;
        location / {
          try_files $uri $uri/ =404;
        }
        location ~ \.php$ {
          include fastcgi_params;
          fastcgi_param REQUEST_METHOD $request_method;
          fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          fastcgi_pass 127.0.0.1:9000;
        }
      }
    }

Nothing to be scared of! We’re just making a shared config available across our k8s cluster. You can apply it now:

$ kubectl apply -f config.yml
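You can verify it’s in the cluster, with our nginx.conf inside:

$ kubectl describe configmap nginx-config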

Now create a Deployment configuration file: deployment.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment
  labels:
    name: deployment
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 2
  selector:
    matchLabels:
      name: templated-pod
  template:
    metadata:
      name: deployment-template
      labels:
        name: templated-pod
    spec:
      volumes:
        - name: app-files
          emptyDir: {}

        - name: nginx-config-volume
          configMap:
            name: nginx-config

      containers:
        - image: 192.168.1.33:5000/martinpham/kphp:v1
          name: app
          volumeMounts:
            - name: app-files
              mountPath: /var/www/html
          lifecycle:
            postStart:
              exec:
                command: ["/bin/sh", "-c", "cp -r /app/. /var/www/html"]
          resources:
            limits:
              cpu: 100m
            requests:
              cpu: 50m


        - image: nginx:latest
          name: nginx
          volumeMounts:
            - name: app-files
              mountPath: /var/www/html
            - name: nginx-config-volume
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
          resources:
            limits:
              cpu: 100m
            requests:
              cpu: 50m

          ports:
          - containerPort: 80
          readinessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 3
            periodSeconds: 3
            successThreshold: 1

Ok, it’s a bit spaghetti-style, but again, don’t worry:

  • Line 6: We label this Deployment with name = deployment
  • Line 8: We tell k8s to keep 3 replicas of our Pod.
  • Line 13: During a rollout, k8s must keep at least 1 Pod available, so our website stays online: zero downtime while deploying.
  • Line 21: We label our future Pods with name = templated-pod, so k8s can find them
  • Line 24: We create a shared volume between nginx and php, which will contain our application files
  • Line 27: We map the config created above into another volume
  • Line 32: We create a php-fpm container from the image we built, and mount the shared volume into /var/www/html
  • Line 40: We copy the application files from the image into the shared volume, so nginx can access them too
  • Line 41: We define CPU requests and limits for this container
  • Line 48: We create an nginx container from Docker Hub’s official nginx image.
  • Line 51: We mount the shared volume, as we did for the php-fpm container
  • Line 53: We mount the config mapped above onto nginx’s config file path
  • Line 63: We expose port 80 on the Pod
  • Line 64: We define a healthcheck endpoint (HTTP GET /) so k8s can tell whether the Pod is running well

Let’s apply it:

$ kubectl apply -f deployment.yml
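k8s will now pull the image and spin up the Pods. You can follow the rollout and list the Pods by the label we gave them:

$ kubectl rollout status deployment/deployment
$ kubectl get pods -l name=templated-pod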

Expose pods with Load Balancer
After applying the Deployment above, k8s will begin creating Pods and exposing them inside the k8s network. Now we’d like to expose them via a single Load balancer endpoint.

Create a service-loadbalancer.yml file

apiVersion: v1
kind: Service
metadata:
  name: service-loadbalancer
spec:
  selector:
    name: templated-pod
  ports:
    - port: 80
      targetPort: 80

  type: LoadBalancer
  externalIPs:
    - 192.168.1.33

As mentioned above, all Pods created by the Deployment are tagged with name = templated-pod. We just need to create a Service (Line 2) with type LoadBalancer (Line 12), and tell it to balance the traffic across all Pods tagged with name = templated-pod (Lines 6 & 7), via port 80 (Lines 9 & 10).

Apply it:

$ kubectl apply -f service-loadbalancer.yml
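A quick look at the Service should show the external IP we asked for:

$ kubectl get service service-loadbalancer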

After applying the Load balancer, you can see your application by browsing to http://192.168.1.33 (the Master kube’s IP). Great!

Auto-scale pods with Horizontal Pod Autoscaler

Let’s tune our infrastructure by adding a Horizontal Pod Autoscaler. It will automatically scale our Pods up and down depending on CPU/RAM/… usage. Let’s say you want up to 10 minions when traffic is high (CPU > 50%, for example), and just 3 minions when traffic is low.

Create file hpa.yml

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa
spec:
  maxReplicas: 10
  minReplicas: 3
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: deployment
  targetCPUUtilizationPercentage: 50

Nothing special here:

  • Line 6: The maximum number of minions we’re willing to have
  • Line 7: The minimum number of minions to serve the website
  • Lines 10-11: Select the target for scaling: the Deployment controller named deployment, created above

Apply it:

$ kubectl apply -f hpa.yml
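You can check that the autoscaler has found its target (the TARGETS column may show &lt;unknown&gt; for a minute until metrics start flowing in):

$ kubectl get hpa hpa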

Play time

Open the Kubernetes dashboard we installed above and start monitoring our k8s:

Deployment: Try to make a change to the index.php file, then rebuild & push it with tag v2. Then change deployment.yml (Line 32) to use 192.168.1.33:5000/martinpham/kphp:v2 and apply it. You will see k8s create 2 Pods with the new version while keeping 1 Pod with the old version and deleting 2 old Pods. When that finishes, the last old Pod is deleted and another new Pod is created. No downtime during the rollout!
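For reference, the v2 round-trip looks just like the v1 one, and instead of editing deployment.yml you can also point the app container at the new image straight from the CLI, which triggers the same rolling update:

$ docker build . -t martinpham/kphp:v2
$ docker tag martinpham/kphp:v2 192.168.1.33:5000/martinpham/kphp:v2
$ docker push 192.168.1.33:5000/martinpham/kphp:v2
$ kubectl set image deployment/deployment app=192.168.1.33:5000/martinpham/kphp:v2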

Stress-test: Try to send a lot of traffic into the Load balancer endpoint:

$ ab -n 100000 -c 10000 http://192.168.1.33/

You will see k8s monitoring every Pod’s CPU; when it crosses 50%, k8s will create up to 10 Pods to serve the traffic. Then stop the test and wait a moment to see k8s kill the Pods once they’re not needed anymore. Everything happens automatically!
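While ab hammers the endpoint, you can watch the scaling live from a second terminal:

$ kubectl get hpa hpa --watch
$ kubectl get pods -l name=templated-pod --watch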

I’ve created a repository containing all the code we talked about here:

https://gitlab.com/martinpham/kubernetes-fun

Have fun! Thank you very much for following my first dev.to tutorial!


P.S. Small update: I’ve created an additional tutorial for HTTPS on top of our infrastructure here, hope you like it!

Top comments (4)

Thorsten Hirsch

Hi Martin. Great tutorial, thank you! Now that you've shown how to setup an http service on port 80 here's an idea for another chapter: How would you add TLS on top? I am especially interested in how to handle TLS certificates. I guess putting them in the nginx image is not the best option, because one would have to rebuild the image every time the certificate expires.

Martin Pham

Thanks for reading! I've written another tutorial for the TLS here: dev.to/martinpham/secure-your-kube...
Hope you like it!

Thorsten Hirsch

Yay, just what I needed!

Martin Pham

:)