Prince Agrawal

CI/CD to automate deployments to Kubernetes on DigitalOcean using Github Actions

Introduction

In today's fast-paced development environment, a CI/CD pipeline that automatically runs tests, checks project builds, and manages deployments can save significant time and enhance scalability. It has become a necessity for any well-maintained project. In this article, you will set up a comprehensive CI/CD pipeline for a Node.js TypeScript application, ultimately deploying it to a Kubernetes cluster on DigitalOcean.

Here's what we'll cover:

  1. Creating a basic API in Node.js
  2. Converting it to a TypeScript app
  3. Dockerizing the API
  4. Pushing the code to GitHub
  5. Setting up CI/CD with GitHub Actions
  6. Creating a Kubernetes cluster on DigitalOcean
  7. Setting up access control for our cluster
  8. Creating a Kubernetes Deployment and Service
  9. Final changes to the GitHub Action for automatic deployment to the Kubernetes cluster

Github Project Repository

Side note: There will be a second part of this article.

Prerequisites

In order to follow along, you will need:

  1. A DigitalOcean account to create the Kubernetes cluster.
  2. A GitHub account for code hosting and running Actions.
  3. Docker installed on your system, plus a Docker Hub account, which we will use to store our Docker images.
  4. kubectl, the Kubernetes command-line tool for controlling the cluster.

1. Basic API in Node.js

First of all, we need an application. Let's create a very basic server in Node.js.

Create a new directory:

mkdir node-api

Go inside that directory:

cd node-api

Initialize an npm project:

npm init -y

Install Express, a Node.js framework for API development:

npm i express

Create an app.js file at the root of the project and insert the following into it:

const express = require("express");

const app = express();

app.use("/", (req, res) => {
    res.send("Api is running...")
});

app.listen(4000, () => {
    console.log("Server is ready at http://localhost:4000");
})

This creates a simple Express server. After running node app.js in the terminal, go to http://localhost:4000 to verify it is live.
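You can also check it from the terminal (assuming curl is installed):

curl http://localhost:4000
# Api is running...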

2. Converting to a TypeScript app

Let's convert this into TypeScript now.

  • Change the extension of app.js to app.ts
  • Install TypeScript, ts-node-dev (which we will use later to run the app in development), and the type definitions:

npm i -D typescript ts-node-dev @types/express @types/node

We are installing these as dev dependencies, so we use the -D flag.

Next, generate a tsconfig.json file:

npx tsc --init

You may notice some errors in the app.ts file; this is because we have not specified proper types. Let's fix them by changing app.ts to the following:

import express, { Express, Request, Response } from "express";

const app: Express = express();

app.use("/", (req: Request, res: Response) => {
    res.send("Api is running...")
});

app.listen(4000, () => {
    console.log("Server is ready at http://localhost:4000");
})

The implicit type errors should now be gone, but there is still a warning about 'req' being declared but never read; we can ignore that for now (or rename the parameter to _req to silence it).

Let's change the build output directory in the tsconfig.json file:

"outDir": "./built"

Now let's try to run the app again, this time using ts-node-dev:

./node_modules/.bin/ts-node-dev app.ts

You should get Server is ready at http://localhost:4000 in the terminal.

Let's also build the app; this is what happens in a production environment:

./node_modules/typescript/bin/tsc

This compiles the project's TypeScript to JavaScript. A new directory called built is created, as we specified in the tsconfig file. In it you will find an app.js file, which you can run with:

node ./built/app.js

Again, go to http://localhost:4000; the API should be running.

Now let's set up some scripts so we don't have to run these tools through the node_modules directory.
In package.json, add the following scripts:

"scripts": {
  "dev": "ts-node-dev --poll ./app.ts",
  "build": "tsc",
  "start": "npm run build && node built/app.js"
},

We added three scripts: one for running the project in development mode, one for building, and a start script that both builds and runs the compiled JS code. The --poll flag makes ts-node-dev poll for file changes instead of relying on filesystem events, ensuring automatic server restarts even in environments, such as containers with mounted volumes, where file-change events may not propagate.

Now you can run the scripts with npm run SCRIPT_NAME, for example:

npm run dev

Visit http://localhost:4000 to ensure the API is running.

3. Dockerizing the API

Simple: just create a file named Dockerfile (the name must be exactly this) in the root directory and add the following to it:

FROM node:20.13.1-alpine3.18

WORKDIR /app

COPY package*.json ./

RUN npm install

COPY . .

RUN npm run build

EXPOSE 4000

CMD [ "node", "./built/app.js" ]
  • FROM node:20.13.1-alpine3.18: Specifies the base image for the Docker container, which is Node.js version 20.13.1 on the Alpine 3.18 Linux distribution.

  • WORKDIR /app: Sets the working directory inside the container to /app. All subsequent commands will be run from this directory.

  • COPY package*.json ./: Copies package.json and package-lock.json from the host machine to the /app directory in the container.

  • RUN npm install: Installs the dependencies listed in package.json.

  • COPY . .: Copies all files and directories from the host machine's current directory to the /app directory in the container.

  • RUN npm run build: Runs the build script defined in package.json, typically used to compile or bundle the application.

  • EXPOSE 4000: Informs Docker that the container will listen on port 4000 at runtime.

  • CMD [ "node", "./built/app.js" ]: Specifies the command to run the application when the container starts. Here, it runs Node.js to execute ./built/app.js.

Because we do not want to copy unnecessary files into our production container, we will create a .dockerignore file and add the following:

node_modules
built

Now let's build the Docker image. First, make sure the Docker daemon is running:

sudo systemctl start docker

Now run the following command in the terminal to build the Docker image. Replace prkagrawal with your Docker Hub username; this will matter later when pushing the image to Docker Hub:

sudo docker build -t prkagrawal/node-api .
  • sudo: Runs the command with superuser (root) privileges. (The Docker daemon runs as root, so Docker commands need elevated privileges unless you configure Docker to run without sudo, for example by adding your user to the docker group.)
  • docker build: Instructs Docker to build a new image from the Dockerfile in the current directory.
  • -t prkagrawal/node-api: Tags the image with the name prkagrawal/node-api. The -t flag is used to name and optionally tag the image in the name:tag format. Here, the name is prkagrawal/node-api, and if no specific tag is provided, it defaults to latest, which is the case here.
  • .: Specifies the build context, which is the current directory. Docker uses the files in this directory to build the image.

In the terminal you should see output like the following (your image ID will differ):

Successfully built 4a3270cc3c16
Successfully tagged prkagrawal/node-api:latest

If you run the following command to check docker images it should show up:

sudo docker images

Now let's run this image using the following:

sudo docker run -it -p 4000:4000 prkagrawal/node-api

and if you go to http://localhost:4000 you should see the message Api is running...

If the process does not exit with Ctrl+C, run the following in another terminal:

sudo docker ps # get the id of the running container
sudo docker stop <CONTAINER ID> # kill it (gracefully)

Multistage build

Now let's reduce the size of the Docker image using a multi-stage build. Replace the Dockerfile contents with the following:

# Stage 1: Build
FROM node:20.13.1-alpine3.18 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# Stage 2: Run
FROM node:20.13.1-alpine3.18
WORKDIR /app
COPY --from=builder /app/built ./built
COPY package*.json ./
RUN npm install --only=production
EXPOSE 4000
CMD [ "node", "./built/app.js" ]
  • COPY --from=builder /app/built ./built: Copies only the compiled artifacts from the builder stage.
  • RUN npm install --only=production: Installs only production dependencies.

Now if you run the build command sudo docker build -t prkagrawal/node-api . and check the image size, it should be significantly smaller.
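To see the difference, list the image and compare the SIZE column with the earlier single-stage build (the exact numbers depend on your dependencies):

sudo docker images prkagrawal/node-api   # compare the SIZE column with the previous build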

  • Multi-stage builds in Docker allow you to use multiple FROM statements in your Dockerfile, each defining a separate stage. This approach separates the build environment from the runtime environment, which can significantly reduce the size of the final image. Although we use the same base image in both stages here, the final stage carries only the compiled output and production dependencies.

  • In a normal build, everything required for building and running the application is included in a single image. This means that all build tools, dependencies, and artifacts are part of the final image, even if they are not needed at runtime.

In a multi-stage build, you can:

  • Isolate Build Dependencies: Use a larger base image with all the necessary build tools and dependencies for compiling the application.
  • Copy Only Necessary Artifacts: After building the application, you copy only the essential files (like compiled binaries or production-ready code) into a new, minimal image that has only the runtime dependencies.

Now let's run this image using:

sudo docker run -it -p 4000:4000 prkagrawal/node-api

and go to http://localhost:4000 in your browser; you should see the Api is running... message.

Now stop the container as described above, and let's push the image to Docker Hub. Log in to Docker using:

sudo docker login

and enter your Docker Hub username and password. You should see a Login Succeeded message in the terminal.
Now run the following command to push the image to Docker Hub; it automatically creates a public repository and pushes the image to it:

sudo docker push prkagrawal/node-api
Optional: If you want to push to a private repository, go to Docker Hub and create one. Ideally the repository and image names should match; otherwise, tag the image accordingly with:

sudo docker tag <IMAGE_NAME> DOCKERHUB_USERNAME/DOCKERHUB_REPO_NAME

We have been using node-api as the IMAGE_NAME; replace the username and repository name with your own values. Then run the push command:

sudo docker push DOCKERHUB_USERNAME/DOCKERHUB_REPO_NAME

4. Pushing the code to GitHub

In the root directory, initialize a new git repo with the following command:

git init

We also need a .gitignore file to avoid pushing node packages, build files, env files (currently we don't have any), and other files that do not belong in the repo. Create a file named .gitignore and paste the following into it:

node_modules
built

First, add all the files in the root directory to the git staging area:

git add .

Then commit these files to your repository:

git commit -m "initial commit"

Now go to github.com and create a repository for this project. I created one called nodejs-kubernetes-do-cicd; be sure to replace this with your own repository name. Then, on your repo page, GitHub shows instructions for pushing an existing repository from the command line.

They will look like this:

git remote add origin git@github.com:prkagrawal/nodejs-kubernetes-do-cicd.git
git branch -M main
git push -u origin main

The first command adds a remote called origin pointing at the repo address, the second renames the current branch to main, and the third pushes the files to GitHub. If you don't have GitHub SSH access set up, use the HTTPS remote URL instead; you will then be prompted for your GitHub username and a personal access token in this step.

5. CI/CD setup with GitHub Actions

Go to your GitHub repo and add the Docker image action as shown below:

Adding docker image action

This also gives us a starting point to build upon. Now pull the latest changes to your local repo using:

git pull

Update the .github/workflows/deploy-to-kubernetes-on-digitalocean.yml file to the following:

name: deploy-to-kubernetes-on-digitalocean # Name of the GitHub Actions workflow

on:
  push:
    branches: [ "main" ] # Trigger the workflow on push events to the main branch
  pull_request:
    branches: [ "main" ] # Trigger the workflow on pull requests targeting the main branch

env:
  IMAGE_NAME: prkagrawal/node-api # image name
  IMAGE_TAG: ${{ github.sha }} # get the commit SHA from the GitHub context (useful for tagging the Docker image because it's unique)

jobs:

  build: # Define a job named 'build'

    runs-on: ubuntu-latest # Specify the runner to use for the job, here it's the latest version of Ubuntu

    steps:
    - uses: actions/checkout@v4 # Step to check out the repository code using the checkout action

    - name: Build the Docker image # Step name
      run: docker build -t "$IMAGE_NAME:$IMAGE_TAG" . # build the Docker image using envs defined above

    # login to dockerhub then push the image to the dockerhub repo
    - name: Push Docker image
      run: |-
        echo ${{secrets.DOCKERHUB_PASS}} | docker login -u ${{secrets.DOCKERHUB_USERNAME}} --password-stdin
        docker push "$IMAGE_NAME:$IMAGE_TAG"

Make sure the indentation is correct, otherwise you will get an error; YAML is strict about it. Other than that, it is an easy-to-use, human-readable format.

I have added comments explaining what each step does and renamed the file to deploy-to-kubernetes-on-digitalocean.yml. The filename doesn't matter; you can name it whatever you like.

We also define some environment variables at the top: IMAGE_NAME so we can reuse it, and IMAGE_TAG to tag the Docker image with the latest commit SHA. The tagging is important because a tag identifies a specific version of an image (a combination of filesystem layers); when users pull an image, the tag tells them which version they are getting. If the tag in the Kubernetes deployment never changes, applying the manifest changes nothing and no new rollout happens, so the cluster keeps running the old version even though a newer image exists. We therefore update the tag to roll out a new version of the image.
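As a side illustration (the short SHA below is just a made-up example), a tag is only a named pointer to an image, so the same image ID can carry several tags:

sudo docker tag prkagrawal/node-api:latest prkagrawal/node-api:5c6b2ae
sudo docker images prkagrawal/node-api   # both tags point at the same IMAGE ID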

We then use these envs to build the Docker image with the same build command as before. Finally, there is one more step to log in to Docker Hub and push the image to the Docker Hub repo.

Although we echo the password, GitHub redacts secrets from the logs; the value is piped to the docker login command, which reads it from stdin.

Now, before we push this to see our action in action, we need to add DOCKERHUB_USERNAME and DOCKERHUB_PASS to the repository's Actions secrets. Use the following process, replacing the values with your own:

Adding DOCKERHUB secrets in github repo

Side note: the Docker Hub password shown is just a dummy, so don't try to use it; it won't work. Use your own password.

Now go ahead, commit the changes, and push the updates to GitHub to see our action in action.

git add .
git commit -m "updated workflow action to deploy docker image to dockerhub"
git push

After the action run completes, go to Docker Hub and you should see the image tagged with the latest commit SHA.

6. Kubernetes cluster creation on DigitalOcean

Now let's create a Kubernetes cluster on DigitalOcean if you don't already have one. Log in to your account and go to the Kubernetes page.

creating digital ocean cluster

Choose the data center nearest to you, then build a node pool. On DigitalOcean a node is a Droplet. I am selecting 1 node with the lowest configuration, since that is enough in this case. Choose a name for the cluster (or leave the default). There is also an option to add tags for different environments, but for now I left it empty.

Then click the Create Cluster button; it will take some time to provision. After that, there are two ways to connect to the cluster - automated and manual. The automated method uses doctl, the DigitalOcean CLI, which you would have to install first. Here I am using the manual method: go to the Manual tab and download the cluster configuration (kubeconfig) file.

Now you can run a command against the cluster like this:

kubectl --kubeconfig ~/Downloads/node-api-kubeconfig.yaml get pods

We are using kubectl to run the command and passing the path to the downloaded config file via the --kubeconfig flag.

You should see an output like this

Verify connection to k8s cluster

This means we were able to connect to the cluster; no resources were found because the cluster was just created and there are no pods in the default namespace yet.

You can also run the following command to get all the resources on the cluster:

kubectl --kubeconfig ~/Downloads/node-api-kubeconfig.yaml get all

Get all resources

Passing the --kubeconfig flag with the path to the config file on every command is the explicit way to run kubectl. But if you are not working with other Kubernetes clusters, you can copy the kubeconfig file into a folder in your home directory called .kube (create it if it does not exist) and rename it to config. Then you can run commands simply as:

kubectl get pods

and you should see the same message as above. In this article I will keep passing the --kubeconfig flag on every command; just drop that part if you moved and renamed the config file.
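A minimal sketch of that copy step, assuming the kubeconfig was downloaded to ~/Downloads:

mkdir -p ~/.kube
cp ~/Downloads/node-api-kubeconfig.yaml ~/.kube/config
kubectl get pods   # kubectl now picks up ~/.kube/config automatically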

Also, we are going to use the default namespace for all operations. Namespaces are a way to isolate groups of resources within a cluster, mainly intended for environments with multiple users and projects.
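For example, you can list the namespaces a new cluster ships with and scope a command to one of them (just an illustration, not required for this setup):

kubectl --kubeconfig ~/Downloads/node-api-kubeconfig.yaml get namespaces
kubectl --kubeconfig ~/Downloads/node-api-kubeconfig.yaml get pods -n kube-system   # system pods live in their own namespace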

7. Setting up access control for our cluster

Now we need to set up access control on Kubernetes. With access control we can manage which applications and users are allowed or denied specific permissions. Until now we have been authenticating to our cluster as the default admin user; if those credentials get compromised, the whole cluster is compromised. Instead, we are going to use RBAC (Role-Based Access Control) with a service account that has specific, limited roles.

We start by creating a cluster user (a ServiceAccount), then create a Role that specifies which permissions it has on the cluster. Finally, a RoleBinding links the ServiceAccount to the Role.

Let's get started. Create a file called api-service-account.yaml in a separate folder on your system (the commands below assume ~/kube-general; this directory can hold Kubernetes-related files, configs, etc.) and put the following in it:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: api-service-account
  namespace: default

All Kubernetes configs are written in YAML, and Kubernetes resources are identified by their "kind"; each kind represents a specific type of resource within the Kubernetes API. Here it is a ServiceAccount. The metadata field adds more information: we give this ServiceAccount the name api-service-account and place it in the default namespace.

Apply this using kubectl, don't forget to replace the config path or remove the flag altogether:

kubectl --kubeconfig ~/Downloads/node-api-kubeconfig.yaml apply -f ~/kube-general/api-service-account.yaml

You should see an output like this:
Service Account Created

The kubectl apply -f command is used in Kubernetes to create or update resources defined in a configuration file.

Next, create a Role that specifies the permissions for the ServiceAccount. Roles are namespace-scoped, so they grant permissions within a specific namespace. Create a file named api-role.yaml:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: api-role
  namespace: default
rules:
  - apiGroups: ["", "apps", "batch", "extensions", "networking.k8s.io"]
    resources: ["deployments", "services", "replicasets", "pods", "jobs", "cronjobs", "ingresses"]
    verbs: ["*"]


Then apply it using:

kubectl --kubeconfig ~/Downloads/node-api-kubeconfig.yaml apply -f ~/kube-general/api-role.yaml

The output will be like:
Role created

Finally, create a RoleBinding to link the ServiceAccount and the Role. Create a file named api-role-binding.yaml:

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: api-role-binding
  namespace: default
subjects:
  - kind: ServiceAccount
    name: api-service-account
    namespace: default
roleRef:
  kind: Role
  name: api-role
  apiGroup: rbac.authorization.k8s.io

Apply it:

kubectl --kubeconfig ~/Downloads/node-api-kubeconfig.yaml apply -f ~/kube-general/api-role-binding.yaml

And you will see
Role Binding created
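To sanity-check the binding, you can ask the API server what the new service account may do; this is an optional verification step using kubectl's impersonation support:

kubectl --kubeconfig ~/Downloads/node-api-kubeconfig.yaml auth can-i create deployments \
  --as=system:serviceaccount:default:api-service-account -n default
# yes
kubectl --kubeconfig ~/Downloads/node-api-kubeconfig.yaml auth can-i create secrets \
  --as=system:serviceaccount:default:api-service-account -n default
# no, secrets are not in the Role's resource list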

Now if you run the command to get the service accounts:

kubectl --kubeconfig ~/Downloads/node-api-kubeconfig.yaml get sa

get service account using kubectl

you can see our account exists, but it has no secret associated with it that we could use to authenticate. Let's create one: make one more file named api-secret.yaml and insert the following into it:

apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: api-secret
  annotations:
    kubernetes.io/service-account.name: "api-service-account"

Then apply it:

kubectl --kubeconfig ~/Downloads/node-api-kubeconfig.yaml apply -f ~/kube-general/api-secret.yaml

You should see output confirming the Secret was created:
Secret created

Now if we describe the Secret, we can see that a token was generated for it:

kubectl --kubeconfig ~/Downloads/node-api-kubeconfig.yaml describe secret api-secret

There are fields like name, namespace, labels, annotations, and type, and there are also a token and a ca.crt; these two will be useful for connecting to our cluster without a kubeconfig. Let's verify that we can connect to the cluster using this token with the following command. Replace server-url-from-config with the server URL from the kubeconfig file you downloaded and token-value with your token:

kubectl --insecure-skip-tls-verify --kubeconfig="/dev/null" --server=server-url-from-config --token=token-value get pods

Using --kubeconfig="/dev/null" ensures that kubectl ignores your existing config file and credentials and relies solely on the provided token. Also notice the --insecure-skip-tls-verify flag: it makes kubectl skip certificate validation when making HTTPS requests to the Kubernetes API server. This means kubectl will not check whether the server's certificate is signed by a trusted Certificate Authority (CA), which can be acceptable in testing and development environments.

To avoid this flag, we will instead pass the ca.crt (certificate authority certificate) to kubectl, which we will set up in our GitHub workflow. The certificate can be found in the kubeconfig file, in the certificate-authority-data field (it is base64-encoded).
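As a sketch of what the workflow will do later, you can pull the token and CA certificate out of the secret yourself and connect with full TLS verification (server-url-from-config is still a placeholder for your cluster's API server URL):

TOKEN=$(kubectl --kubeconfig ~/Downloads/node-api-kubeconfig.yaml get secret api-secret \
  -o jsonpath='{.data.token}' | base64 --decode)
kubectl --kubeconfig ~/Downloads/node-api-kubeconfig.yaml get secret api-secret \
  -o jsonpath='{.data.ca\.crt}' | base64 --decode > ca.crt

kubectl --kubeconfig=/dev/null --server=server-url-from-config \
  --certificate-authority=ca.crt --token="$TOKEN" get pods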

8. Creating a Kubernetes Deployment and Service

These are some common kubernetes terms:

  • Cluster: Imagine your entire Kubernetes setup as a cluster, a collection of nodes working together. This is the foundation of your infrastructure.

  • Nodes: Each node is a worker machine (virtual or physical) in your cluster. Nodes run your applications through pods. For example, a node could be a virtual machine in the cloud.

  • Pods: The smallest deployable units in Kubernetes, representing an instance of your application. For instance, a pod might run a web server. Pods can have one or more containers that share storage and network.

  • Deployments: Manage and maintain your pods, ensuring a specified number is always running. If a pod fails, the Deployment replaces it. Think of it as an automated way to keep your web server instances running.

  • Services: Provide stable network access to pods, offering a single IP address and DNS name to access them. Services also load balance traffic across the pods. For example, users access your web server through a Service.

  • Ingress: Manages external access to Services, typically via HTTP. It routes incoming traffic to the correct Service based on the request's URL. For instance, my-app.example.com might be routed to your web server Service.

  • ConfigMaps and Secrets: Store configuration data and sensitive information (like passwords), respectively. They decouple environment-specific configurations from your container images, making it easier to manage configurations. For example, your web server might read its database connection string from a ConfigMap and its password from a Secret.

So now let's create the Deployment. Start by creating a directory called k8s in the root of the project; inside it we will store all the Kubernetes-related files. In the k8s directory, create a file named deployment.yaml and paste the following into it:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-api
  namespace: default
  labels:
    app: node-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node-api
  template:
    metadata:
      labels:
        app: node-api
    spec:
      containers:
        - name: container-name
          image: prkagrawal/node-api:latest
          ports:
            - containerPort: 4000

There are two main parts here: metadata and spec. Let's start with the spec. The template field in spec defines the blueprint for the pods; it is the pod configuration nested inside the Deployment configuration, with its own metadata and spec. In the pod spec we have the containers field, where we define which image will be used to create the pod's container - in our case the image pushed to Docker Hub. A pod can have one or more containers, but usually one main application per pod. The containerPort is the port exposed in the Dockerfile, on which our container listens.

We also have two labels and one matchLabels entry. In Kubernetes we can give any component a label. Labels are key/value pairs attached to resources; they act as identifiers but are not unique, so here all pods created by this Deployment carry the same label. This lets us identify all pods of the same application by label, since the pod names themselves will be unique. For the pods the label is a required field (it must match the selector), while on the Deployment itself it is optional but good practice.

How does Kubernetes know which pods belong to which Deployment? That is what the selector in the Deployment spec is for: all pods matching the app: node-api label belong to this Deployment. You can use any key: value pair; it is just standard practice to use the app key in labels.
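Once the Deployment is applied (we do that below), the label is what lets you select its pods, for example:

kubectl --kubeconfig ~/Downloads/node-api-kubeconfig.yaml get pods -l app=node-api --show-labels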

The apiVersion field specifies the version of the Kubernetes API that you are using to create the object and the kind field specifies the type of Kubernetes object you are defining, here that is Deployment.

Now let's create a Service. Make a file named service.yaml inside the k8s directory and paste the following into it:

apiVersion: v1
kind: Service
metadata:
  name: service-name
  namespace: default
  labels:
    app: node-api
spec:
  type: NodePort
  ports:
    - protocol: TCP
      port: 4000
      targetPort: 4000
      nodePort: 30200
  selector:
    app: node-api

The Service also has two main sections, metadata and spec. The selector in spec defines which pods belong to the Service, so it should match the pods' label; the Service then knows it can forward requests to these pods.

The Service is accessible inside the cluster via its IP address and port, where port can be anything - 80, 8080, 3000, etc. targetPort is the port on the pods that belong to the Service; it should match the containerPort, because that is where the Service forwards requests.

There are two broad kinds of services - internal and external. We want to access our app from a web browser, so we need an external service. That is what type in spec defines; if type is omitted, the Service defaults to the internal ClusterIP type. Here we use NodePort, an external service type, which requires a third port called nodePort in ports. This is the port on which our application is reachable on the IP address of the Kubernetes nodes, so we will access the service at nodeIP:nodePort, which in turn reaches the pods behind it.

The nodePort range in Kubernetes is 30000-32767; the nodePort can be any value in that range.
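After the Service is created in the next step, the three ports show up together when you inspect it (approximate output; your cluster IP will differ):

kubectl --kubeconfig ~/Downloads/node-api-kubeconfig.yaml get svc service-name
# NAME           TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
# service-name   NodePort   10.245.x.x    <none>        4000:30200/TCP   1m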

With these configs, we can now create the corresponding resources in Kubernetes. Let's do that by running the following command:

kubectl --kubeconfig ~/Downloads/node-api-kubeconfig.yaml apply -f k8s

You should see the following output
Deployment and Service created

Now let's check all the resources in our cluster

kubectl --kubeconfig ~/Downloads/node-api-kubeconfig.yaml get all

Also, let's verify that our API is running by forwarding a local port to the pod. Copy the name of the pod, which is the string after pod/ (something like node-api-568dc945d7-928qf), and run the following command:

kubectl --kubeconfig ~/Downloads/node-api-kubeconfig.yaml port-forward node-api-568dc945d7-928qf 4000:4000

Oops, we got an error. What went wrong? It says the pod is not running. Why is it not running?

A pod is the smallest deployable unit in Kubernetes, just a layer over our application container. Where is our application container defined? In the deployment.yaml file, which uses the image from our Docker Hub repo. If you remember, earlier we tagged our image with the GitHub commit SHA, but in the Deployment we used the latest tag, which doesn't exist - so no image, no container, and nothing for the pod to run. For now, go to the Docker Hub repo for our image, get the tag from there, and replace latest in deployment.yaml with it. The image field will look something like this:

...
image: prkagrawal/node-api:5c6b2aec516f194b97af3eea02cdab3ed0aa498b
...

After making the change, we again have to apply the k8s configs; run the command:

kubectl --kubeconfig ~/Downloads/node-api-kubeconfig.yaml apply -f k8s

deployment.apps/node-api created
service/service-name created

Update after tag fix

Now once more get the pod name with get pods (it will have changed after the redeploy), and run the port-forward command:

kubectl --kubeconfig ~/Downloads/node-api-kubeconfig.yaml port-forward node-api-568dc945d7-928qf 4000:4000

You should see an output like this now

Port forwarded

And if you go to http://localhost:4000, you can see the Api is running... message
Port forwarded output

Now, in a production setup we don't want to do this tag replacement manually after every commit, so let's set up a script for it. Create a new directory in the root named scripts and, inside it, a file named update-tag.sh; paste the following into it:

#!/bin/bash

COMMIT_SHA1=$1

# Define the desired commit SHA value
NEW_COMMIT_SHA=$COMMIT_SHA1

# Export the commit SHA as an environment variable
export COMMIT_SHA1="$NEW_COMMIT_SHA"

# Use envsubst to replace the placeholder in the Deployment YAML file
envsubst '$COMMIT_SHA1' < k8s/deployment.yaml > temp.yaml && mv temp.yaml k8s/deployment.yaml

We pass the commit SHA to the bash script as an argument, store it in a variable, and export it so it is available to envsubst. Then we rewrite the original deployment.yaml with the substitution applied: the script writes the modified content to a temporary file (temp.yaml) and renames it back to deployment.yaml.
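A quick standalone illustration of what envsubst does (not part of the project files):

export COMMIT_SHA1=abc123
echo 'image: prkagrawal/node-api:$COMMIT_SHA1' | envsubst '$COMMIT_SHA1'
# image: prkagrawal/node-api:abc123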

Now make it executable

chmod +x scripts/update-tag.sh

One more thing: let's also update the tag value in our deployment.yaml file to the variable name $COMMIT_SHA1, so the image field looks like this:

image: prkagrawal/node-api:$COMMIT_SHA1

You can verify the script works by running it:

./scripts/update-tag.sh tag-val

The $COMMIT_SHA1 in deployment.yaml should now have been replaced with tag-val. Change it back to the variable; we will run this substitution from the GitHub Actions workflow.

9. Final changes to the GitHub Action for automatic deployment to the Kubernetes cluster

Now go to the .github/workflows/deploy-to-kubernetes-on-digitalocean.yml file and add the following steps at the end, after the previous step (match the indentation of the existing steps):

- name: Install envsubst
  run: |-
    sudo apt-get update && sudo apt-get -y install gettext-base

- name: Install kubectl
  run: |-
    curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
    chmod u+x ./kubectl

- name: Substitute variables in deployment.yaml by running script
  run: |-
    ./scripts/update-tag.sh "$IMAGE_TAG"

We added three new steps: one to install envsubst (part of the gettext-base package), one to install kubectl, and finally one that runs the script to replace the tag in deployment.yaml with the commit SHA.

Finally, let's set up the step that deploys to Kubernetes; it is as simple as applying the configs in the k8s directory to create the Deployment and Service. For that we need to add three more variables to the GitHub repo's Action secrets: KUBERNETES_TOKEN, KUBERNETES_SERVER, and KUBERNETES_CLUSTER_CERTIFICATE.

The value of KUBERNETES_TOKEN will be the token we used earlier to authenticate as the service account; retrieve it with:

kubectl --kubeconfig ~/Downloads/node-api-kubeconfig.yaml describe secret api-secret

From the output, copy the value of the token field (without any surrounding spaces), then go to the GitHub repo and add it as an Action secret. Follow the same steps we used earlier for the Docker Hub secrets.

The value of KUBERNETES_SERVER is the --server value we passed earlier to verify the connection without a kubeconfig; it can be found in the kubeconfig file you downloaded after creating the cluster on DigitalOcean. KUBERNETES_CLUSTER_CERTIFICATE is also in that config file: it's the certificate-authority-data field, a long base64 string; copy all of it. Then go to the GitHub repo and add these as secrets as well.
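If you prefer the terminal, you can grep the two values straight out of the downloaded kubeconfig (the field names below are the standard kubeconfig fields):

grep 'server:' ~/Downloads/node-api-kubeconfig.yaml                       # value for KUBERNETES_SERVER
grep 'certificate-authority-data:' ~/Downloads/node-api-kubeconfig.yaml   # value for KUBERNETES_CLUSTER_CERTIFICATE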

Now add this step to the .github/workflows/deploy-to-kubernetes-on-digitalocean.yml file:

- name: Deploy to Kubernetes
  run: |-
    echo ${{secrets.KUBERNETES_CLUSTER_CERTIFICATE}} | base64 --decode > cert.crt
    ./kubectl \
      --kubeconfig=/dev/null \
      --server=${{secrets.KUBERNETES_SERVER}} \
      --certificate-authority=cert.crt \
      --token=${{secrets.KUBERNETES_TOKEN}} \
      apply -f ./k8s/

Your completed deploy-to-kubernetes-on-digitalocean file will look like this:

name: deploy-to-kubernetes-on-digitalocean # Name of the GitHub Actions workflow

on:
  push:
    branches: [ "main" ] # Trigger the workflow on push events to the main branch
  pull_request:
    branches: [ "main" ] # Trigger the workflow on pull requests targeting the main branch

env:
  IMAGE_NAME: prkagrawal/node-api # image name
  IMAGE_TAG: ${{ github.sha }} # get the commit SHA from the GitHub context (useful for tagging the Docker image because it's unique)

jobs:

  build: # Define a job named 'build'

    runs-on: ubuntu-latest # Specify the runner to use for the job, here it's the latest version of Ubuntu

    steps:
    - uses: actions/checkout@v4 # Step to check out the repository code using the checkout action

    - name: Build the Docker image # Step name
      run: docker build -t "$IMAGE_NAME:$IMAGE_TAG" . # build the Docker image using envs defined above

    # login to dockerhub then push the image to the dockerhub repo
    - name: Push Docker image
      run: |-
        echo ${{secrets.DOCKERHUB_PASS}} | docker login -u ${{secrets.DOCKERHUB_USERNAME}} --password-stdin
        docker push "$IMAGE_NAME:$IMAGE_TAG"

    - name: Install envsubst
      run: |-
        sudo apt-get update && sudo apt-get -y install gettext-base

    - name: Install kubectl
      run: |-
        curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
        chmod u+x ./kubectl

    - name: Substitute variables in deployment.yaml by running script
      run: |-
        ./scripts/update-tag.sh "$IMAGE_TAG"

    - name: Deploy to Kubernetes
      run: |-
        echo ${{secrets.KUBERNETES_CLUSTER_CERTIFICATE}} | base64 --decode > cert.crt
        ./kubectl \
          --kubeconfig=/dev/null \
          --server=${{secrets.KUBERNETES_SERVER}} \
          --certificate-authority=cert.crt \
          --token=${{secrets.KUBERNETES_TOKEN}} \
          apply -f ./k8s/

We have simply added a step that first decodes the cluster certificate (it is stored in base64) and saves it to a cert.crt file, then applies the configs in the k8s directory using the token and certificate for authentication and TLS verification.

Now stage all these changes, then commit and push them to GitHub:

git add .
git commit -m "deployment cicd setup"
git push

Moment of truth: we have finally deployed to Kubernetes using GitHub Actions. Now let's check whether we can access our API. Remember, the Service exposes the app on the nodes in the form node-ip:nodePort. Get the node details using:

kubectl --kubeconfig ~/Downloads/node-api-kubeconfig.yaml get nodes -o wide

The -o wide flag shows additional details about a resource. Copy the EXTERNAL-IP of the node; the nodePort is the one we set earlier in service.yaml, 30200. Then open http://EXTERNAL-IP:nodePort in your web browser (for me that is http://139.59.29.96:30200) and you should see Api is running...
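You can also check from the terminal (using my node's external IP; substitute your own):

curl http://139.59.29.96:30200
# Api is running...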

Api running on nodeip:nodePort

Conclusion

We now have a fully functioning CI/CD pipeline in place which, on every push or pull request to the main branch, automatically builds a Docker image, pushes it to Docker Hub, and then deploys to the Kubernetes cluster on DigitalOcean.

Using NodePort, each service requires users to access a different node IP address and port. While this is fine for testing and development, it is not user-friendly. In the next part of this article, we will set up an Ingress to provide a single point of access to multiple services within the cluster through a single external IP address, and configure a domain name for it.

Top comments (2)

shikha-singh-24

Gifs are kind of blurry, can you upload them somewhere and share link? Nice work, very detailed!

Prince Agrawal • Edited

Thanks, originally recorded gifs are in the Github repo - github.com/prkagrawal/nodejs-kuber...