Pulumi Dev for Pulumi

Originally published at pulumi.com

Top 5 Things an Azure Developer Needs to Know: Kubernetes Infrastructure

History lesson time! In 2011, microservices debuted as an architectural style suited for the cloud. In 2013, Docker simplified building containers. Combining containers and microservices sparked a change in how applications were built and distributed in the cloud. As performance, scaling, and reliability became increasing concerns, container orchestration platforms became widely available. Kubernetes became the dominant container orchestration platform through community and corporate support, and some have suggested its rise was inevitable. Every major cloud service provider, including Azure, offers a version of Kubernetes.

Kubernetes streamlines container deployment and management, making applications scalable and accessible. This article demonstrates configuring and deploying Kubernetes on Azure.

A Kubernetes Review

If you’re not familiar with Kubernetes concepts and terminology, the Getting Started with Kubernetes series can help get you up to speed.

Azure Kubernetes Service

Azure Kubernetes Service (AKS) is a hosted Kubernetes service. Azure manages the Kubernetes master nodes, and you are responsible for managing the agent or worker nodes. You only pay for the worker nodes in your cluster that make up your application.

Kubernetes nodes are the worker machines, which can be either physical or virtual. AKS nodes run on Azure virtual machines (VMs), and you can add storage, upgrade cluster components, or even run multiple node pools with mixed operating systems.

You can create an AKS cluster with

  • the Azure CLI,
  • the Azure portal,
  • PowerShell, and
  • templates, such as Azure Resource Manager (ARM) templates.

AKS features

Configuring and deploying Kubernetes can be complex. AKS provides many features to simplify the process.

Create an AKS Cluster with the Azure Portal

We’ll use the Azure Portal to illustrate the steps to configure and deploy an AKS Cluster.

Step 1: Create a Kubernetes Service.

Open the Azure Portal and select Create Resource.
Select Containers > Kubernetes Service.
Select Create.

Create Kubernetes Service

Step 2: Create an AKS cluster.

For this example, we will configure a few settings in the Basics tab and use the defaults for everything else.

  • In Subscription, select an Azure Subscription.
  • Create or select a Resource Group.
  • In Cluster details, set the Kubernetes cluster name.
  • Change the Region if needed.

Use the default values for Primary node pool, and select Next: node pools.

Create AKS Cluster

Step 3: Configure node pools.

A node pool is a logical grouping of nodes. Each pool can use a different virtual machine size, Kubernetes version, and other settings, so you can dedicate pools to different purposes, such as separating workloads, e.g., one node pool for production and another for dev or test.

For this example, use the default values for node pools.

Node Pools

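To make this concrete, here is a minimal sketch of how two pools could be expressed with the Pulumi Azure Native provider used later in this article; the pool names, counts, and VM sizes are illustrative and not part of the walkthrough above.

    from pulumi_azure_native import containerservice

    # Illustrative sketch only: a System pool for cluster components and a
    # User pool for application workloads (names, counts, and sizes are
    # hypothetical). These profiles would be passed to a ManagedCluster.
    agent_pool_profiles = [
        containerservice.ManagedClusterAgentPoolProfileArgs(
            name="systempool",
            mode="System",            # runs critical system pods
            count=3,
            vm_size="Standard_DS2_v2",
            os_type="Linux",
        ),
        containerservice.ManagedClusterAgentPoolProfileArgs(
            name="userpool",
            mode="User",              # runs application workloads
            count=2,
            vm_size="Standard_DS3_v2",
            os_type="Linux",
        ),
    ]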
Step 4: Configure authentication.

You can authenticate, authorize, secure, and control access to Kubernetes clusters with Kubernetes role-based access control (RBAC) and Azure Active Directory integration.

To configure Authentication for this example, set the Authentication method to System-assigned managed identity. You can set Role-based access control (RBAC) to Enabled, but it isn't required for this example. Use the default value for the Node pool OS disk encryption type.

Authentication

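For reference, the same authentication choice can be expressed in the Pulumi code used later in this article. The fragment below is a minimal, illustrative sketch showing only the identity-related arguments of a ManagedCluster; all other required arguments are omitted.

    from pulumi_azure_native import containerservice

    # Illustrative fragment only: the ManagedCluster arguments that mirror the
    # portal's Authentication settings (other required arguments are omitted).
    auth_settings = dict(
        identity=containerservice.ManagedClusterIdentityArgs(
            type="SystemAssigned",   # system-assigned managed identity
        ),
        enable_rbac=True,            # Kubernetes role-based access control
    )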
Step 5: Configure networking

AKS can use either kubenet or Azure CNI networking. Kubenet is the default configuration for AKS cluster creation.

With kubenet, Azure creates and configures the virtual network. However, only the nodes receive a routable IP address and pods use a NAT to communicate with resources outside the AKS cluster. This approach reduces the number of IP addresses you need to reserve in your network space for pods to use.

Azure CNI assigns an IP address to every pod, which makes pods directly accessible. The IP addresses must be unique and planned in advance, and each node must be configured with the maximum number of pods so that enough IP addresses are reserved per node. If you don't account for the number of pods, the network can run out of IP addresses to allocate, and you may need to rebuild the cluster with a larger subnet.

For this example, we'll use kubenet for its simplicity, but in production Azure CNI may be the better choice for many applications.

Networking

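In the Pulumi code later in this article, the same choice would be made through the cluster's network profile. The fragment below is a minimal sketch, not a complete cluster definition; switching the plugin to "azure" would also require planning a subnet sized for one IP address per pod.

    from pulumi_azure_native import containerservice

    # Illustrative fragment only: "kubenet" matches the default used in this
    # walkthrough; "azure" selects Azure CNI instead.
    network_profile = containerservice.ContainerServiceNetworkProfileArgs(
        network_plugin="kubenet",   # or "azure" for Azure CNI
    )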
Step 6: Complete the deployment

Select Review + create to deploy. Deployment takes several minutes to complete. On completion, you can verify the Deployment details or select Connect to cluster.

Deployment Complete

Step 7: Connect to the cluster

You can manage your cluster with kubectl, the Kubernetes command-line tool. You can install kubectl on Linux, macOS, and Windows.

Copy and paste the commands to connect and authenticate to your cluster using the Azure CLI.

Connect

After connecting and authenticating to your cluster, you can use kubectl to query your cluster.

kubectl get nodes
NAME                                STATUS   ROLES   AGE     VERSION
aks-agentpool-19694923-vmss000000   Ready    agent   9m35s   v1.20.7
aks-agentpool-19694923-vmss000001   Ready    agent   9m39s   v1.20.7
aks-agentpool-19694923-vmss000002   Ready    agent   9m47s   v1.20.7
kubectl get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   11m

Your AKS cluster is deployed and running.

Deploying an AKS cluster with code

Now that we have reviewed the process for creating an AKS cluster using the Azure Portal, we can repeat the process using code.

Step 1: Create a Resource group and a Service principal for the cluster. Note that we used a System-assigned managed identity in the Azure Portal example.

Example in Python:

    # Imports for the Pulumi providers used in this example
    import base64
    import pulumi
    from pulumi_azure_native import containerservice, resources
    import pulumi_azuread as azuread
    import pulumi_random as random
    import pulumi_tls as tls

    # Create new resource group
    resource_group = resources.ResourceGroup("azure-native-py-aks")

    # Create an AD service principal
    ad_app = azuread.Application("aks", display_name="aks")
    ad_sp = azuread.ServicePrincipal("aksSp", application_id=ad_app.application_id)

    # Generate random password
    password = random.RandomPassword("password", length=20, special=True)

    # Create the Service Principal Password
    ad_sp_password = azuread.ServicePrincipalPassword("aksSpPassword",
                                                    service_principal_id=ad_sp.id,
                                                    value=password.result,
                                                    end_date="2099-01-01T00:00:00Z")

    # Generate an SSH key
    ssh_key = tls.PrivateKey("ssh-key", algorithm="RSA", rsa_bits=4096)

Step 2: Configure the AKS cluster

We set the configuration options in ManagedClusterAgentPoolProfileArgs:

  • Count: The number of virtual machines in the pool.
  • MaxPods: The maximum number of pods that can run on a node.
  • Mode: The type of node pool, which can be System or User.
  • Name: The name of the node pool.
  • OsType: The operating system type.
  • Type: Choose between a VirtualMachineScaleSet and an AvailabilitySet.
  • VmSize: The virtual machine size used by the nodes in the pool.

The remaining parameters set the Kubernetes version, enable RBAC, and configure the Linux profile for the ContainerService.

Example in Python:

    # Create cluster, reading an optional name from the stack configuration
    config = pulumi.Config()
    managed_cluster_name = config.get("managedClusterName")
    if managed_cluster_name is None:
        managed_cluster_name = "azure-native-aks"

    managed_cluster = containerservice.ManagedCluster(
        managed_cluster_name,
        resource_group_name=resource_group.name,
        agent_pool_profiles=[{
            "count": 3,
            "max_pods": 110,
            "mode": "System",
            "name": "agentpool",
            "node_labels": {},
            "os_disk_size_gb": 30,
            "os_type": "Linux",
            "type": "VirtualMachineScaleSets",
            "vm_size": "Standard_DS2_v2",
        }],
        enable_rbac=True,
        kubernetes_version="1.18.14",
        linux_profile={
            "admin_username": "testuser",
            "ssh": {
                "public_keys": [{
                    "key_data": ssh_key.public_key_openssh,
                }],
            },
        },
        dns_prefix=resource_group.name,
        node_resource_group=f"MC_azure-native-go_{managed_cluster_name}_westus",
        service_principal_profile={
            "client_id": ad_app.application_id,
            "secret": ad_sp_password.value
        })

Step 3: Export the kubeconfig file

A kubeconfig file organizes information about clusters and allows kubectl to connect to the cluster.

Example in Python:

    # Retrieve the cluster's user credentials, then export the kubeconfig
    creds = pulumi.Output.all(resource_group.name, managed_cluster.name).apply(
        lambda args: containerservice.list_managed_cluster_user_credentials(
            resource_group_name=args[0],
            resource_name=args[1]))
    encoded = creds.kubeconfigs[0].value
    kubeconfig = encoded.apply(
        lambda enc: base64.b64decode(enc).decode())
    pulumi.export("kubeconfig", kubeconfig)
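After running pulumi up, you can write the exported value to a file, for example with pulumi stack output kubeconfig > kubeconfig, and point kubectl at it as shown in the next step.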

Step 4: Connect to the AKS cluster

We can use the kubeconfig file to connect to the cluster. It's common practice to copy the kubeconfig file to ~/.kube/config, the default location where kubectl looks for it. However, you can keep the file in any directory and point to it with the --kubeconfig flag, e.g.,

kubectl --kubeconfig /path/to/kubeconfig_file get pods

As this example shows, deploying an AKS cluster comes down to setting a handful of parameters, which can be done in the portal or with code. The advantage of code is that clusters can be created on demand without stepping through the Azure Portal. This is convenient when you have different environments, such as a dev/test environment and a production environment, as the sketch below illustrates.

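As a simple illustration of that convenience, stack configuration can drive per-environment differences from the same program. The sketch below assumes a hypothetical nodeCount setting that is not part of the example above; its value could then be passed as the count in agent_pool_profiles so a dev stack builds a smaller cluster than production.

    import pulumi

    # Hypothetical per-stack setting: a "dev" stack might set nodeCount to 1
    # and a "prod" stack to 3 (set with: pulumi config set nodeCount <n>).
    config = pulumi.Config()
    node_count = config.get_int("nodeCount") or 3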
The complete code for deploying AKS is available on GitHub for TypeScript, Python, Go, and C#.

Summary

Azure Kubernetes Service lets you deploy a Kubernetes cluster quickly and efficiently. An AKS deployment only configures the worker nodes, since the master nodes are managed by Azure. The Azure Portal provides a simplified interface for configuration, but if you want fine-grained control over the deployment, infrastructure as code is an option. This is particularly true for production deployments, where the configuration requires tuning for the application.

In the next article in this series, we’ll deploy an application on an AKS cluster using different methods.
