Davide 'CoderDave' Benvegnù

Azure DevOps Environments for Kubernetes EXPLAINED

Azure Pipelines is a great tool for Continuous Integration and Continuous Deployment, and thanks to Multi-Stage Pipelines we can finally have build, test, and release expressed directly in source code.
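
Just to make that concrete, here is a rough sketch of what a multi-stage pipeline can look like; the stage, job, and step contents below are purely illustrative and not taken from this post:

# Illustrative multi-stage pipeline skeleton (names and steps are placeholders)
trigger:
  - main

stages:
  - stage: CI
    displayName: Build and test
    jobs:
      - job: build
        pool:
          vmImage: ubuntu-latest
        steps:
          - script: echo "build and run tests here"

  - stage: CD
    displayName: Release
    dependsOn: CI
    jobs:
      - job: release
        pool:
          vmImage: ubuntu-latest
        steps:
          - script: echo "deploy here"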

Recently they have also introduced the concept of "Environments", which is part of the release process.

In the previous article of this series we looked at Environments in general.

Today we will go deeper and explore the integration between Environments and Kubernetes clusters.

In fact, the Kubernetes resource is the first one made available in Azure DevOps Environments, and it's the one that provides the most features.

Let's see first what we can do with it and then how to set this up.

Video

If you are a visual learner, or simply prefer to watch and listen instead of reading, here is the video with the whole explanation, which to be fair is much more complete than this post.

If you'd rather read, well... let's just continue :)

Deployments

When you click on the environment, you have 2 separate tabs: Resources and Deployments.

The Deployments tab has the same content we've seen in the previous post about Environments in general, so I won't spend much time on it.

Just real quick, in here you can see what has been deployed and when.

What and where

For example, here I can see that this deployment came from the deploy job of the K8S_CICD pipeline, run on May 25.

Also, if I go inside, I can see what changes are included (and I have visibility down to the single file diffs):

Changes

and that all of this was originally planned in the linked work items, in this case a bug and a task:

Bug and Task

Thanks to the complete integration of all the parts of Azure DevOps, we have full traceability from work management to deployments, and everything in between.

But as we have seen in the previous article, this comes with any kind of Environment, not only with the Kubernetes ones. So let's see what is unique about this one.

Resources

Back in the Environment page, the Resources tab is where the magic happens.

In fact here I can basically explore the content of my Kubernetes cluster!

Again, we have two tabs here: Workloads and Services.

Workloads lists the Deployments with their Replica Sets:

Workloads

You can drill into the Replica Set to see its details and the Pods it is running:

Replica Sets

Note that here you can see not only the image associated with this deployment, but also its labels and selectors (if any).
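
For reference, everything you see in this view comes from a plain Kubernetes Deployment object. The exact manifest isn't shown in this post, but based on the Pod YAML further down (name webserver, image nginx:1.17.10, label app: nginx) it probably looks roughly like this sketch, which is where the image, labels, and selectors surfaced in the UI come from:

# Illustrative reconstruction of the "webserver" Deployment (not the actual manifest from this post)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver
  labels:
    app: nginx
spec:
  replicas: 1                 # the Replica Set shown in the UI is generated from this template
  selector:
    matchLabels:
      app: nginx              # the selector surfaced in the Replica Set details
  template:
    metadata:
      labels:
        app: nginx            # the label you see on the Pods
    spec:
      containers:
        - name: nginx
          image: nginx:1.17.10   # the image shown next to the deployment
          ports:
            - containerPort: 80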

Finally, drilling into a pod, you can see its full details:

Pod

But that's not all; you can go even deeper. In fact, we can see the Logs the Pod is producing:

Logs

and even the full YAML of the Pod coming directly from K8S!

YAML

If you are curious about the YAML content, this is what you get:

apiVersion: v1
kind: Pod
metadata:
  name: webserver-7f6cf4c486-4mtcj
  generateName: webserver-7f6cf4c486-
  namespace: default
  selfLink: /api/v1/namespaces/default/pods/webserver-7f6cf4c486-4mtcj
  uid: 536bafd1-9d45-44d3-b674-3e0d91345d1c
  resourceVersion: '899183'
  creationTimestamp: '2020-05-19T06:17:44Z'
  labels:
    app: nginx
    pod-template-hash: 7f6cf4c486
  annotations:
    # the azure-pipelines/* annotations below are added at deploy time by the pipeline,
    # and are what link this Pod back to the run shown in the Deployments tab
    azure-pipelines/jobName: '"Deploy"'
    azure-pipelines/org: 'https://dev.azure.com/dbtek/'
    azure-pipelines/pipeline: '"K8S_CICD"'
    azure-pipelines/pipelineId: '"65"'
    azure-pipelines/project: AKSEnvironmentDemo
    azure-pipelines/run: '20200525.1'
    azure-pipelines/runuri: 'https://dev.azure.com/dbtek/AKSEnvironmentDemo/_build/results?buildId=1058'
    cni.projectcalico.org/podIP: 10.244.1.9/32
  ownerReferences:
    - apiVersion: apps/v1
      kind: ReplicaSet
      name: webserver-7f6cf4c486
      uid: 1daf0d4e-5daf-4926-a4ee-42c735fb0071
      controller: true
      blockOwnerDeletion: true
spec:
  volumes:
    - name: default-token-mcfjv
      secret:
        secretName: default-token-mcfjv
        defaultMode: 420
  containers:
    - name: nginx
      image: 'nginx:1.17.10'
      ports:
        - containerPort: 80
          protocol: TCP
      resources: {}
      volumeMounts:
        - name: default-token-mcfjv
          readOnly: true
          mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      imagePullPolicy: IfNotPresent
  restartPolicy: Always
  terminationGracePeriodSeconds: 30
  dnsPolicy: ClusterFirst
  serviceAccountName: default
  serviceAccount: default
  nodeName: aks-agentpool-17828392-vmss000002
  securityContext: {}
  schedulerName: default-scheduler
  tolerations:
    - key: node.kubernetes.io/not-ready
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 300
    - key: node.kubernetes.io/unreachable
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 300
  priority: 0
  enableServiceLinks: true
status:
  phase: Running
  conditions:
    - type: Initialized
      status: 'True'
      lastProbeTime: null
      lastTransitionTime: '2020-05-19T06:17:44Z'
    - type: Ready
      status: 'True'
      lastProbeTime: null
      lastTransitionTime: '2020-05-19T06:17:56Z'
    - type: ContainersReady
      status: 'True'
      lastProbeTime: null
      lastTransitionTime: '2020-05-19T06:17:56Z'
    - type: PodScheduled
      status: 'True'
      lastProbeTime: null
      lastTransitionTime: '2020-05-19T06:17:44Z'
  hostIP: 10.240.0.6
  podIP: 10.244.1.9
  podIPs:
    - ip: 10.244.1.9
  startTime: '2020-05-19T06:17:44Z'
  containerStatuses:
    - name: nginx
      state:
        running:
          startedAt: '2020-05-19T06:17:56Z'
      lastState: {}
      ready: true
      restartCount: 0
      image: 'nginx:1.17.10'
      imageID: >-
        docker-pullable://nginx@sha256:30dfa439718a17baafefadf16c5e7c9d0a1cde97b4fd84f63b69e13513be7097
      containerID: >-
        docker://6c8ebc8692033d4a0e92fe247fa4970ed77d664876cf42a7b8f5bad012eb2c68
      started: true
  qosClass: BestEffort

I love it: it's so useful, and having everything in one tool allows you to focus much more!

Now that we have explored what we can do with it, let's see how to create a new Environment for Kubernetes in Azure Pipelines.

Create a Kubernetes environment

First of all, needless to say, you need to have a Kubernetes cluster. I always use AKS in Azure because it is a managed service and I don't have to pay for the master nodes ;)

Using AKS also makes linking the cluster with Azure DevOps much easier.

So let's go to the Environments section under Pipelines and click "New Environment".

Let's give the environment a name and select "Kubernetes" as the resource type.

Here you can choose whether to use AKS or any other Kubernetes cluster. If you go for a generic Kubernetes cluster, you'll need to enter all the parameters manually and set up your cluster so that it is reachable from Azure DevOps.
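
To give a rough idea of what that setup involves: for the generic provider, Azure DevOps typically asks for the cluster's API server URL and the secret of a service account it can authenticate with. A minimal sketch of such a service account, scoped to a single namespace (the names azdo-deployer and app-ns are made up for illustration):

# Sketch only: a service account for the generic Kubernetes provider
apiVersion: v1
kind: ServiceAccount
metadata:
  name: azdo-deployer
  namespace: app-ns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: azdo-deployer-edit
  namespace: app-ns
subjects:
  - kind: ServiceAccount
    name: azdo-deployer
    namespace: app-ns
roleRef:
  kind: ClusterRole
  name: edit                 # built-in role; tighten this to your needs
  apiGroup: rbac.authorization.k8s.io

You would then hand the API server URL and that service account's secret over to the Azure DevOps wizard.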

But if you select AKS, you'll be prompted with the selections directly.

Just pick the Azure Subscription and the cluster, and select the namespace you want to link. In fact, each resource maps to a specific Namespace in your cluster.

If you already have an environment linked to your default namespace, you can create a new one.

For this post I decided to call the new Environment Secondary and to create a new namespace called app2namespace.

This of course works only if the account you use to access Azure DevOps has the proper permissions on your Azure subscription.
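
By the way, if you'd rather create the namespace yourself ahead of time and then pick it as an existing one in the wizard, it's just a plain Namespace object; a minimal sketch:

# Optional: create the namespace up front instead of letting the wizard do it
apiVersion: v1
kind: Namespace
metadata:
  name: app2namespace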

New Environment

And that's it! Of course, since we've just created it, it is completely empty.

Now all we have to do is use this environment in a Deployment job of a pipeline to get all the goodness we've seen before.

Let's use it in a Pipeline

As mentioned, you need to have a pipeline which uses a Deployment job, not a "normal" one.

Then add the environment to it. The format to use is "EnvironmentName dot Namespace". In my case, this is Secondary.app2namespace.

This is the snippet:

- stage: CD
  displayName: CD Stage
  dependsOn: CI
  jobs:

  - deployment: deploy
    displayName: Deploy
    environment: Secondary.app2namespace
    strategy: 
      runOnce:
        deploy:
          steps: 

          - task: KubernetesManifest@0
            inputs:
              action: 'deploy'
              manifests: '$(Pipeline.Workspace)/YAML Files/k8s/App2.yaml'

The beauty of this, looking at the task which actually performs the deployment, is that we do not need to specify any service connection, credentials, or anything else.

The reason for this is that the Pipelines engine will take everything it needs from the Environment directly.
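
Just for comparison, if you were using a regular job instead of an environment-backed deployment job, you would have to wire up the connection on the task yourself, roughly like this (the service connection name here is a placeholder, not something from this post):

- task: KubernetesManifest@0
  inputs:
    action: 'deploy'
    kubernetesServiceConnection: 'my-k8s-service-connection'   # placeholder
    namespace: 'app2namespace'
    manifests: '$(Pipeline.Workspace)/YAML Files/k8s/App2.yaml'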

If you then make a change to your code, it will kick off the whole CI/CD process and bring together all the components: code, work items, build, and deployment.

Conclusion

Cool right? I mean, there are cooler things than this but they are all in the outside world... like, the real world... you know what I mean :D

Alright, this is how to use Azure DevOps Environments for Kubernetes, and how amazing and useful it is to work this way.

Examples

Take a look at my video on YouTube here to see how to create, manage, and use the Environments for Kubernetes in Azure DevOps.

Like, share and follow me 🚀 for more content:

📽 YouTube
Buy me a coffee
💖 Patreon
🌐 CoderDave.io Website
👕 Merch
👦🏻 Facebook page
🐱‍💻 GitHub
👲🏻 Twitter
👴🏻 LinkedIn
🔉 Podcast


Top comments (3)

Paul @ Purple Piranha

Great article. If only I'd found this earlier!

I am struggling with approvals and wondered whether you have them working?

Say I have an environment called 'Development' and within it, a namespace 'myproject-dev'. I've set an approver up for the 'Development' environment.

In my yaml:

- stage: DeployDev
  displayName: 'Deploy: Dev'
  dependsOn: Build
  jobs:
  - deployment: DeployDevJob
    displayName: Deploy to dev job
    pool:
      name: $(poolName)
    environment: Development.myproject-dev
    strategy:
      runOnce:
        deploy:
          steps:
          - task: KubernetesManifest@0
            displayName: Deploy to Kubernetes cluster
            inputs:
              action: deploy
              manifests: $(Pipeline.Workspace)/manifests/deployment.yaml
              containers: mycontainerreg.azurecr.io/myproject/website:$(tag)

When run, it gets to the DeployDev stage and fails.

If I change my yaml to have:

environment: Development

When we get to the DeployDev stage, it waits for approval, but then the KubernetesManifest task fails, as it doesn't have the service connection.

Any thoughts would be really helpful.

Paul @ Purple Piranha

I deleted the environments in Azure DevOps and then recreated them, identically to before, and it now works fine. Really strange, as I'd recreated them several times already.

Davide 'CoderDave' Benvegnù

Hey. That's an interesting behavior... I didn't experience anything like that.

When you said the pipeline was failing when getting to the DeployDev stage, did you get any error message?

I wonder if it had something to do with the order in which you created the environment/namespaces/approval, maybe there is/was a bug there... I've never experienced it tho