INTRODUCTION
In the world of modern software development, delivering applications rapidly and reliably is paramount. Continuous Integration and Continuous Deployment (CI/CD) practices streamline the development lifecycle, enabling teams to automate building, testing, and deploying applications. When coupled with the power of Kubernetes, a robust container orchestration platform, the efficiency and scalability of your applications reach new heights.
In this comprehensive tutorial, I will walk you through the process of automating CI/CD for your applications on an Azure Kubernetes Service (AKS) cluster. You'll learn how to set up a complete pipeline that connects your GitHub repository to your AKS cluster, enabling automatic building, testing, and deployment of your containerized applications. Whether you're new to Kubernetes and CI/CD or looking to refine your skills, this guide has you covered.
Prerequisites:
Before diving into the tutorial, ensure you have the following prerequisites in place:
- GitHub Account: You'll need an active GitHub account to host your application's source code and set up the pipeline for CI/CD.
- Azure Account: You'll require an Azure account with either a free subscription or a pay-as-you-go subscription. If you're new to Azure, you can take advantage of the $200 free trial credit for the first month to explore and experiment with AKS and other Azure services.
- Azure DevOps Account: To seamlessly integrate your CI/CD pipeline, an Azure DevOps account is necessary. This account will allow you to configure the automation process and manage the flow of changes from source code to the AKS cluster.
Outline:
Throughout this tutorial, we'll cover the following key topics:
- Creating an Azure Kubernetes Cluster:
  - Understand the benefits of using AKS for container orchestration.
  - Step-by-step guide to creating an AKS cluster in your Azure account.
  - Exploring AKS features and configurations.
- Setting Up a GitHub Pipeline for Docker and Kubernetes:
  - Introduction to CI/CD and its importance in modern development.
  - Configuring your GitHub repository for seamless integration with Azure DevOps.
  - Creating a CI/CD pipeline that automates Docker image builds and Kubernetes deployments to your AKS cluster.
By the end of this tutorial, you'll have gained practical insights into the world of CI/CD automation on Azure Kubernetes Service, empowering you to accelerate your software delivery process while maintaining high standards of reliability and efficiency.
So, let's embark on this journey to unlock the potential of automating CI/CD on your Azure Kubernetes Cluster. Ready to get started? Let's dive in!
First things first, we have to build our Docker image locally. I have set up a simple TypeScript/Node.js server with a few routes: home, about, contact, and a universal 404. (This can be set up using any framework.) Here is a link to my code on GitHub - CODE.
Here is what the routes look like:
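If you are following along without the repository, here is a minimal sketch of such a server, assuming Express as the framework; the file name and response messages are illustrative rather than the exact code from my repo:

```typescript
// src/app.ts - a minimal sketch, assuming Express; messages are illustrative.
import express, { Request, Response } from "express";

const app = express();
const port = 3000;

app.get("/", (_req: Request, res: Response) => res.send("Home page!"));
app.get("/about", (_req: Request, res: Response) => res.send("About page!"));
app.get("/contact", (_req: Request, res: Response) => res.send("Contact page!"));

// Universal 404 for any route that isn't matched above.
app.use((_req: Request, res: Response) => res.status(404).send("Page not found"));

app.listen(port, () => console.log(`Server running on port ${port}`));
```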
I have also set up a basic Dockerfile configuration. Here is what it looks like:
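The exact Dockerfile depends on your build setup; a minimal sketch for a TypeScript Node.js app that compiles to dist/ and listens on port 3000 might look like this (the base image and npm scripts are assumptions):

```dockerfile
# A minimal sketch; adjust the Node version and build scripts to your project.
FROM node:18-alpine

WORKDIR /app

# Install dependencies first to take advantage of Docker layer caching.
COPY package*.json ./
RUN npm install

# Copy the source and compile TypeScript to dist/.
COPY . .
RUN npm run build

EXPOSE 3000
CMD ["node", "./dist/app.js"]
```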
Now I will simply build a new image using this command:
docker build -t kubernetes-pipeline .
Then start a container from the image and make sure it is running, using this command:
docker run -d --name kubernetes-pipeline -p 3000:3000 kubernetes-pipeline
This should start the Docker container on port 3000.
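To confirm the container is actually serving traffic, you can hit one of the routes locally:

```bash
# Should return the home route's response if the container is healthy.
curl http://localhost:3000/
```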
CREATING A REGISTRY
Next, we need to push this image to a registry on Microsoft Azure. This will be a good time to create an Azure account if you don't already have one. You can create an Azure account here: Azure.
Once your account has been created successfully, in the search bar that appears on the Azure dashboard, search for registries and select Create container registry.
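You can also create the registry from the CLI instead of the portal; the resource group name and location below are placeholders, and the registry name matches the one used throughout this tutorial:

```bash
# Create a resource group (placeholder name and location), then the registry.
az group create --name my-rg --location uksouth
az acr create --resource-group my-rg --name onlyregistryhere --sku Basic
```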
Once you have successfully created your registry, we will proceed to push our image to the registry using the following commands:
Login to the registry: az acr login --name onlyregistryhere
Tag image to the repository: docker tag kubernetes-pipeline onlyregistryhere.azurecr.io/kubernetes-pipeline:latest
Push the image to Azure registry: docker push onlyregistryhere.azurecr.io/kubernetes-pipeline:latest
Ensure you replace onlyregistryhere with your registry name and kubernetes-pipeline:latest with your Docker image name and tag.
If everything works as expected, you should see your image name in the list of repositories.
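You can also confirm this from the CLI; assuming the registry name used above:

```bash
# List the repositories in the registry; kubernetes-pipeline should appear.
az acr repository list --name onlyregistryhere --output table
```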
CREATING AN AZURE KUBERNETES CLUSTER
Now let us create our Kubernetes cluster. From the Azure dashboard, simply search for Kubernetes services, follow the prompts, and create a cluster. If everything has been set up correctly, you should see this:
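If you prefer the CLI over the portal, a roughly equivalent cluster can be created and granted pull access to the registry with commands like these; the resource group and cluster names are placeholders (reusing the placeholder resource group from earlier):

```bash
# Create a small cluster and let it pull images from the registry.
az aks create \
  --resource-group my-rg \
  --name my-aks-cluster \
  --node-count 1 \
  --attach-acr onlyregistryhere \
  --generate-ssh-keys

# Merge the cluster's credentials into your kubeconfig so kubectl can reach it.
az aks get-credentials --resource-group my-rg --name my-aks-cluster
```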
To interact with our cluster and manage services and deployments, I recommend using the Cloud Shell. Locate the Cloud Shell in the get started menu and click on connect.
Once you open the Cloud Shell, you can interact with all the pods we will deploy later. Save this tab and let us head over to Azure DevOps.
Our Cloud Shell should look like this:
CI WITH AZURE DEVOPS
Now, to automate the CI/CD process, you need to have an Azure DevOps account. If you don't, simply head over to Azure DevOps to create a free account.
Next, click on create a new project. Once created, locate pipelines and select GitHub/GitLab to create a pipeline with your chosen host. This may prompt you to authorize this action from your GitHub/GitLab account. After authorization, select the repository you would like to create a CI pipeline for and select okay. Next, configure your pipeline from the list of available options. In our case, we will select Docker to build and push the image to Azure Container Registry; later we will run a separate pipeline, Deploy to Azure Kubernetes Service, to create a pipeline for your Azure Kubernetes Service cluster. Follow the default prompts and grant all the necessary permissions to configure a successful CI pipeline.
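For reference, the Docker template produces an azure-pipelines.yml roughly along these lines; the service connection, registry, and repository values below are placeholders that the wizard fills in with your own details:

```yaml
trigger:
- main

variables:
  dockerRegistryServiceConnection: '<your-ACR-service-connection>'  # placeholder
  imageRepository: 'kubernetes-pipeline'
  containerRegistry: 'onlyregistryhere.azurecr.io'
  dockerfilePath: '$(Build.SourcesDirectory)/Dockerfile'
  tag: '$(Build.BuildId)'

stages:
- stage: Build
  displayName: Build and push stage
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: ubuntu-latest
    steps:
    - task: Docker@2
      displayName: Build and push an image to container registry
      inputs:
        command: buildAndPush
        repository: $(imageRepository)
        dockerfile: $(dockerfilePath)
        containerRegistry: $(dockerRegistryServiceConnection)
        tags: |
          $(tag)
```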
If everything checks out, you should see this:
Now, to confirm that our deployment works, let us head back to our Cloud Shell and query for all services and deployments using the following commands:
- kubectl get deployments: get all deployments.
- kubectl get services: get all services.
Let me explain what is happening in the Cloud Shell.
- kubectl get deployments:
  - Deployment name: kubernetespipeline.
  - 1 pod is ready and available out of 1.
  - Deployment is up-to-date.
  - Age: 56 seconds.
- kubectl get services:
  - kubernetes service (the core Kubernetes service):
    - ClusterIP: 10.0.0.1.
    - No external IP.
    - Type: ClusterIP.
    - Age: 129 minutes.
  - kubernetespipeline service (the service we created):
    - ClusterIP: 10.0.25.184.
    - External IP: 51.142.173.251.
    - Type: LoadBalancer.
    - Port mapping: 3000:31553.
    - Age: 60 seconds.
- kubectl get pods:
  - Pod name: kubernetespipeline-5d677f89c8-qvcsp.
  - 1/1 containers in the pod are ready.
  - Pod status: Running.
  - No restarts.
  - Age: 2 minutes.
- kubectl logs -f kubernetespipeline-5d677f89c8-qvcsp:
  - Following the logs for pod kubernetespipeline-5d677f89c8-qvcsp.
  - Application: kubernetes-pipeline@1.0.0, started with node ./dist/app.js.
  - Server running on port 3000.
  - Logs show requests to "Home page!", "Contact page!", and "About page!".
The output provides a snapshot of the Kubernetes deployment, services, pod, and application logs in my cluster. A deployment named kubernetespipeline has a ready pod, and a service named kubernetespipeline is externally accessible. The pod, kubernetespipeline-5d677f89c8-qvcsp, is running an application serving requests on port 3000, with logs indicating various page accesses. The core kubernetes service and its details are also displayed.
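For context, the Deployment and Service behind that output look roughly like the manifests below; the Deploy to Azure Kubernetes Service template generates similar files in your repository, and the image reference and labels here are illustrative:

```yaml
# A minimal sketch of the Deployment and LoadBalancer Service behind the
# output above; adjust names and the image reference to match your registry.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetespipeline
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetespipeline
  template:
    metadata:
      labels:
        app: kubernetespipeline
    spec:
      containers:
      - name: kubernetespipeline
        image: onlyregistryhere.azurecr.io/kubernetes-pipeline:latest
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: kubernetespipeline
spec:
  type: LoadBalancer
  ports:
  - port: 3000
    targetPort: 3000
  selector:
    app: kubernetespipeline
```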
Now let me access this endpoint from my local browser.
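The same check works from a terminal, using the external IP and port reported by kubectl get services:

```bash
# The external IP comes from the kubernetespipeline LoadBalancer service above.
curl http://51.142.173.251:3000/
```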
Conclusion:
See? You've successfully accessed your containerized API using the external IP of your service within your Kubernetes cluster. By following the steps outlined in this tutorial, you've not only set up a seamless integration between your GitHub repository and your Kubernetes cluster but also automated the process of updating your application. This streamlined approach gives you more time to concentrate on what truly matters: the development of your application itself.
What's Next?
The journey doesn't stop here. You've established a robust foundation for your CI/CD pipeline, but there are more enhancements and optimizations you can explore:
Custom Domain Setup:
Take your application to the next level by providing a custom domain for your deployment. This way, users can access your API using a memorable and branded URL. You can achieve this by setting up an Ingress controller in Kubernetes and configuring it to route traffic to your service. This enhances user experience and aligns with professional standards.
Scale and Load Balancing:
As your application gains popularity and user traffic increases, you can further optimize performance by exploring Kubernetes' scaling and load balancing capabilities. Configure Horizontal Pod Autoscaling to dynamically adjust the number of pods based on traffic load, ensuring smooth user experiences during traffic spikes (a quick sketch of this follows after this list).
Security and Authentication:
Protect your API and user data by implementing security measures. Explore Kubernetes' built-in security features, like Network Policies, to control communication between pods. Additionally, consider integrating authentication and authorization mechanisms to ensure that only authorized users can access your API.
Monitoring and Logging:
Gain insights into your application's behavior and performance by setting up monitoring and logging solutions. Tools like Prometheus and Grafana can help you monitor resource usage and visualize metrics, enabling you to proactively address any issues.
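As an example of the Scale and Load Balancing point above, a Horizontal Pod Autoscaler can be attached to the existing deployment with a single command; the CPU target and replica bounds below are arbitrary, and CPU-based scaling assumes the container declares CPU requests:

```bash
# Scale between 1 and 5 replicas, targeting ~70% average CPU utilisation.
# AKS ships the metrics server this relies on by default.
kubectl autoscale deployment kubernetespipeline --cpu-percent=70 --min=1 --max=5

# Inspect the autoscaler's current state.
kubectl get hpa
```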
As you venture further into the realm of Kubernetes, CI/CD, and application development, remember that your learning journey is ongoing. Embrace new challenges and keep exploring advanced techniques to create more efficient, reliable, and user-friendly applications.
Farewell:
With that, I bid farewell to this tutorial. I hope that this guide has provided you with a solid foundation to automate your CI/CD pipeline on an Azure Kubernetes Service cluster. Remember, technology evolves, and so does your expertise. Keep experimenting, learning, and innovating, and you'll continue to build amazing solutions that make a real impact.
Thank you for joining me on this journey, and best of luck with your future endeavors in the exciting world of DevOps!