Overview
SonarQube is an automatic code review tool that detects bugs, vulnerabilities, and code smells in your code. It integrates with your existing workflow to enable continuous code inspection across your project branches and pull requests.
This guide walks through installing SonarQube and running it on Kubernetes. In this example we will use Azure Kubernetes Service (AKS).
Deploying the Infrastructure
For the purposes of this tutorial, and for ease of deployment, we will use the Azure CLI to spin up the AKS cluster in Azure. For enterprise scenarios this can be fully automated using Terraform.
The following resources will be deployed:
- Azure Resource Group
- AKS cluster with one node pool running 2 Linux nodes
Create the Resource Group
az group create --name sonarqube --location northeurope
Create the AKS Cluster
az aks create --resource-group sonarqube --name sonarqube --node-count 2 --enable-addons monitoring --generate-ssh-keys
This command deploys a basic cluster using kubenet networking, with no virtual network integration and none of the advanced features such as managed identities. This is not recommended for enterprise deployments.
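If you want something closer to an enterprise setup, a rough sketch of the same command with a managed identity and Azure CNI networking might look like the following. Note that <subnet-resource-id> is a placeholder for your own subnet's resource ID, and you should check az aks create --help for the full list of options:
az aks create --resource-group sonarqube --name sonarqube --node-count 2 --enable-managed-identity --network-plugin azure --vnet-subnet-id <subnet-resource-id> --generate-ssh-keys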
Install kubectl
If you haven't already got it, install the kubectl command line tool locally so you can administer the cluster:
az aks install-cli
Make sure you have the Azure CLI installed before running this.
Next, authenticate to the cluster, which will store the Kubernetes configuration file locally on your machine:
az aks get-credentials --resource-group sonarqube --name sonarqube --admin
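To confirm the credentials were merged correctly, you can query the cluster endpoints (a quick sanity check, not part of the SonarQube setup itself):
kubectl cluster-info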
Now that we have the basic infrastructure configured we can proceed with the SonarQube installation.
Installing SonarQube using Helm 3
This guide covers installing SonarQube Developer Edition, but the same method works for the Community and Enterprise editions.
We will use Helm to perform the installation. Helm is a package manager for Kubernetes and makes the deployment much faster. We will use the Helm chart provided by SonarSource, which can be found on GitHub.
Install Helm
In order to deploy the chart we first need to install Helm on our local developer machine. The command below installs Helm on Windows using Chocolatey (the Windows package manager).
choco install kubernetes-helm
If you want to install Helm on macOS or another platform, refer to the official Helm installation guide.
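For example, on macOS the Homebrew equivalent of the Chocolatey command above is:
brew install helm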
Apply Node Taints
SonarQube recommends binding the application to a specific node and reserving that node exclusively for SonarQube. This greatly increases the stability of the service.
In order to achieve this we need to apply a node taint, which tells the Kubernetes scheduler not to place pods on a given node unless they explicitly tolerate the taint.
First, get the nodes allocated to our node pool by running:
kubectl get nodes
Next, apply the taint to one of the listed nodes:
kubectl taint nodes aks-nodepool1-22992764-vmss000001 sonarqube=true:NoSchedule
This ensures no pods will be scheduled on this node unless they carry a matching toleration.
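To double-check that the taint was applied, you can inspect the node's spec (using the same node name as above):
kubectl get node aks-nodepool1-22992764-vmss000001 -o jsonpath="{.spec.taints}"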
In the next steps we need to modify the values file for the Helm deployment. The values.yaml file allows us to provide our own customizations and pass them to Helm at deployment time. The default values.yaml is provided by SonarSource and can be found here.
Create a copy of this file in your local directory. To let the SonarQube deployment tolerate the previously created taint, uncomment or add the following section of the values.yaml:
tolerations:
  - key: "sonarqube"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
Apply Node Labels
With one node now reserved for SonarQube, we need to label that node so the kube-scheduler can select it during pod assignment.
Label the node that we previously applied a taint to:
kubectl label node aks-nodepool1-22992764-vmss000001 sonarqube=true
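You can confirm the label is in place by filtering the node list on it:
kubectl get nodes -l sonarqube=true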
To only let SonarQube be scheduled on nodes with this specific label, uncomment or add the following section to the values.yaml:
nodeSelector:
  sonarqube: "true"
By combining node selection with taints and tolerations, SonarQube can run alone on one specific node, independently of the rest of the software in your Kubernetes cluster. This results in better stability and performance for SonarQube.
Change the service to type Load Balancer
In order to access the application publicly, we need to change the service deployed in Kubernetes to type LoadBalancer, which exposes the service externally by creating an Azure load balancer with a public IP. This is fine for testing, but for enterprise scenarios you should run an internal service and expose it through an Application Gateway Ingress Controller or NGINX ingress.
Modify the SonarQube service to type LoadBalancer in the values.yaml file:
service:
  type: LoadBalancer
  externalPort: 9000
  internalPort: 9000
  labels:
  annotations: {}
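As an aside, if you would rather keep the endpoint off the public internet, Azure can provision an internal load balancer instead via a service annotation. A minimal sketch of the same block with that annotation added:
service:
  type: LoadBalancer
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
  externalPort: 9000
  internalPort: 9000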
Installing the Helm Chart
SonarSource has provided a good set of defaults for the Helm chart, but there are a ton of customizations that can be made by modifying the values.yaml file. For example, you can configure persistent volumes and ingress controllers, or bring your own SQL database. For this scenario we are going to run PostgreSQL in AKS, which is not recommended for enterprise deployments because databases are stateful by nature.
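If you did want to point the chart at an external database instead, the values.yaml exposes keys for that. A minimal sketch, assuming the chart's jdbcOverwrite section (key names can differ between chart versions, so verify against your chart's values.yaml; the host, user, and password below are placeholders):
postgresql:
  # Disable the bundled in-cluster PostgreSQL
  enabled: false
jdbcOverwrite:
  enable: true
  jdbcUrl: "jdbc:postgresql://<your-db-host>:5432/sonarqube"
  jdbcUsername: "<your-db-user>"
  jdbcPassword: "<your-db-password>"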
helm repo add sonarqube https://SonarSource.github.io/helm-chart-sonarqube
helm repo update
kubectl create namespace sonarqube
helm upgrade -f .\values.yaml --install -n sonarqube sonarqube sonarqube/sonarqube
The above commands add the SonarSource Helm repository, create the sonarqube namespace in the Kubernetes cluster, and install SonarQube using the Helm chart.
Verifying the deployment
Verify the pods are up and running with the following command:
kubectl get pods -n sonarqube
Initially the pods may show as Pending, since pulling the Docker images can take some time. They should be ready within 5 minutes.
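If you want to follow the rollout live, the same command accepts a watch flag that streams status changes:
kubectl get pods -n sonarqube --watch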
Once the pods are up and running we need to get the public IP of the service.
kubectl get services -n sonarqube
You should then see the external IP of the SonarQube service. Open a browser and navigate to that IP on port 9000, e.g. http://20.107.134.0:9000/
We now have SonarQube running in AKS! You can log in using the default credentials:
- Username = admin
- Password = admin
You can modify these in the values.yaml file. You will be prompted to change the default password upon the first login.
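If you would rather set the admin password through the chart, the values.yaml has an account section for this. A sketch, assuming the SonarSource chart's key names (check your chart version's values.yaml; the new password is a placeholder):
account:
  # Current password, "admin" on a fresh install
  currentAdminPassword: admin
  # The password to change it to
  adminPassword: <new-admin-password>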
Please note that exposing the service directly is not recommended for real-world scenarios, as it is not secure; it should only be used for testing.
The next installment will cover configuring an ingress controller for SonarQube using an Application Gateway WAF, and bringing your own database.