Ian Knighton
Deploying a Microservice on Azure Kubernetes (with Let's Encrypt)

(I also wrote this on my blog. It would be cool if you checked it out. Even though there's nothing there right now.)

I recently had to struggle through this at work and I wanted to share my documentation in case it can help someone in the future. This is a very, very dry post, but it should cover most of the basics.

The Problem

One of our developers just created a new microservice in golang and we needed to deploy it into a Kubernetes cluster in order to take advantage of the scaling capabilities. The service needs to be accessible through the web and be protected by an SSL cert. At this point, the application can already be run using docker-compose and is completely ready to be deployed.

The Solution

Anywhere you see <something>, substitute your own value for that variable. You'll need to keep track of these values throughout the process, as many are used repeatedly.
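If you prefer, you can capture these values as shell variables up front so later commands can reference them. The names and values below are purely hypothetical examples; substitute your own:

```shell
# Hypothetical example values; substitute your own throughout.
RESOURCE_GROUP="my-aks-rg"
LOCATION="westus2"
CLUSTER_NAME="my-aks-cluster"
REGISTRY_NAME="myregistry"
URL="api.example.com"
EMAIL_ADDRESS="admin@example.com"

# The full ACR login server is derived from the registry name.
ACR_LOGIN_SERVER="${REGISTRY_NAME}.azurecr.io"
echo "$ACR_LOGIN_SERVER"
```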

Dependencies

The following tools will need to be installed and functional on your machine:

- Azure CLI (az)
- kubectl
- helm
- Docker (including docker-compose)
- kompose

Process:

Create a Resource Group

To create a resource group:

az group create --name <resourceGroup> --location <location>

Create a Cluster

az aks create \
    --resource-group <resourceGroup> \
    --name <clusterName> \
    --node-count 1 \
    --enable-addons monitoring \
    --generate-ssh-keys

This command will take a few minutes (sometimes ~10) to run and at the end will return a JSON-formatted output with the settings for the cluster. Copy and save this information.
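If you want to confirm the cluster finished provisioning before moving on, a quick check (using the same placeholders) is a sketch like:

```shell
# Should print "Succeeded" once the cluster is ready.
az aks show \
    --resource-group <resourceGroup> \
    --name <clusterName> \
    --query "provisioningState" \
    --output tsv
```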

Connect to the Cluster

az aks get-credentials --resource-group <resourceGroup> --name <clusterName>

Once this is complete, verify your connection.

kubectl get nodes

That should return output listing the nodes with a status of Ready:

NAME                       STATUS    ROLES     AGE       VERSION
aks-nodepool1-8675309-0    Ready     agent     2d        v1.9.11

Initialize Helm/Tiller

Create a Service Account

Create a file called helm-rbac.yaml in your working directory with the following YAML.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

Create the account with kubectl.

kubectl apply -f helm-rbac.yaml

Configure Helm

Initialize tiller on the cluster for helm to connect.

helm init --service-account tiller
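Before moving on, it's worth confirming Tiller actually came up. A quick sanity check, assuming the standard labels that helm init applies:

```shell
# The tiller-deploy pod should be Running, and helm should report
# matching client and server versions once Tiller is reachable.
kubectl get pods --namespace kube-system -l app=helm,name=tiller
helm version
```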

Create an Ingress Controller

helm install stable/nginx-ingress --namespace kube-system --set controller.replicaCount=2

This process creates a public IP address for the ingress controller; we'll need it going forward.

To find the public IP address:

kubectl get service -l app=nginx-ingress --namespace kube-system

From the output of that command, you will need the EXTERNAL-IP of the LoadBalancer service.

NAME                                              TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)                      AGE
invincible-toucan-nginx-ingress-controller        LoadBalancer   10.0.186.72   192.167.15.243   80:30947/TCP,443:32654/TCP   2d
invincible-toucan-nginx-ingress-default-backend   ClusterIP      10.0.173.78   <none>           80/TCP 
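If you'd rather script it, the external IP can be pulled out with a jsonpath query (the label selector matches the chart's default labels; your release name will differ). Remember that the DNS record for your <url> needs to point at this IP before Let's Encrypt's http01 validation can succeed:

```shell
# Grab the external IP of the LoadBalancer service created by the chart.
kubectl get service -l app=nginx-ingress --namespace kube-system \
    -o jsonpath='{.items[?(@.spec.type=="LoadBalancer")].status.loadBalancer.ingress[0].ip}'
```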

Install Cert-Manager

(Quick edit: It appears that there may be an issue with Cert-Manager version 0.6. This documentation was written against version 0.5.2, so I updated this command to specify the version.)

helm install stable/cert-manager \
    --version 0.5.2 \
    --namespace kube-system \
    --set ingressShim.defaultIssuerName=letsencrypt-prod \
    --set ingressShim.defaultIssuerKind=ClusterIssuer

Create a Cluster Issuer

Create a file in your working directory called cluster-issuer.yaml.

apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: <emailAddress>
    privateKeySecretRef:
      name: letsencrypt-prod
    http01: {}

Use kubectl to create the cluster issuer.

kubectl apply -f cluster-issuer.yaml
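To sanity-check that the issuer registered an account with Let's Encrypt, you can describe it and look at the conditions and events (exact output shape varies by cert-manager version):

```shell
# Look for a Ready/ACMEAccountRegistered condition in the output.
kubectl describe clusterissuer letsencrypt-prod
```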

Create a Certificate Object

Create a file in your working directory called certificate.yaml and add the following information.

apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: tls-secret
spec:
  secretName: tls-secret
  dnsNames:
  - <url>
  acme:
    config:
    - http01:
        ingressClass: nginx
      domains:
      - <url>
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer

Use kubectl to apply this certificate.

kubectl apply -f certificate.yaml
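The certificate won't be issued instantly. To watch its progress, describe the Certificate resource (the Events section at the bottom is the most useful part) and check for the secret, which only appears once issuance succeeds:

```shell
# The Certificate resource tracks issuance; the secret appears on success.
kubectl describe certificate tls-secret
kubectl get secret tls-secret
```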

Create Container Registry

az acr create --resource-group <resourceGroup> --name <registryName> --sku Basic

Verify you are able to log in to the registry. The credentials can be found on the "Access Keys" blade in the Azure Portal.

az acr login --name <registryName>

Grant Access from Cluster to Registry

Create a file called grant-access.sh in your working directory with the following information:

#!/bin/bash

AKS_RESOURCE_GROUP=<resourceGroup>
AKS_CLUSTER_NAME=<clusterName>
ACR_RESOURCE_GROUP=<resourceGroup>
ACR_NAME=<registryName>

# Get the id of the service principal configured for AKS
CLIENT_ID=$(az aks show --resource-group $AKS_RESOURCE_GROUP --name $AKS_CLUSTER_NAME --query "servicePrincipalProfile.clientId" --output tsv)

# Get the ACR registry resource id
ACR_ID=$(az acr show --name $ACR_NAME --resource-group $ACR_RESOURCE_GROUP --query "id" --output tsv)

# Create role assignment
az role assignment create --assignee $CLIENT_ID --role acrpull --scope $ACR_ID

Save and run the script; the cluster should now be able to pull images from the registry.

Build and Push Docker Image to Registry

Change directories to the repo and build the container.

docker build -t <imageName>:<imageTag> .

Once the build has been validated, tag the image and push it to the registry. Note that the tag must include the registry's full login server, which is <registryName>.azurecr.io.

docker tag <imageName>:<imageTag> <registryName>.azurecr.io/<imageName>:<imageTag>
docker push <registryName>.azurecr.io/<imageName>:<imageTag>
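As a concrete (hypothetical) example, if the registry is named myregistry and the service image is myservice, the full sequence looks like this:

```shell
# Build locally, then retag with the registry's login server and push.
docker build -t myservice:v1 .
docker tag myservice:v1 myregistry.azurecr.io/myservice:v1
docker push myregistry.azurecr.io/myservice:v1
```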

Deploy Images to Kubernetes Cluster

The docker-compose.yml file already exists in the repo, so it only needs to be validated.

docker-compose up

Assuming everything worked, convert to Kubernetes deployments/services using kompose.

kompose -f docker-compose.yml up

Verify deployments and pods were created and are running.

kubectl get deployments

NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
<service>    1         1         1            1           4h
kubectl get pods

NAME                         READY     STATUS    RESTARTS   AGE
<service>-85b87ddc6-bfm7j    1/1       Running   0          4h

Create an Ingress Route

Create a file called ingress-route.yaml and add the following:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts:
    - <url>
    secretName: tls-secret
  rules:
  - host: <url>
    http:
      paths:
      - path: /
        backend:
          serviceName: <service-name>
          servicePort: 3000

Apply the change with kubectl.

kubectl apply -f ingress-route.yaml

Testing

Validate everything has worked by navigating in a browser to your URL. In my experience thus far, it can take around 20 minutes for the changes to propagate out through the internet. This may cause some weird behaviors.

One indicator is if you see a pod running (kubectl get pods) for cm-acme-http-solver that doesn't normally show up. That means it's still working on gathering a certificate from LetsEncrypt.
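Two quick checks from the command line (assuming curl is available and DNS has propagated):

```shell
# Watch for the temporary ACME solver pod; it disappears once the cert is issued.
kubectl get pods | grep cm-acme-http-solver

# Inspect the served certificate; the issuer line should mention Let's Encrypt.
curl -vI https://<url> 2>&1 | grep -i issuer
```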


Top comments (5)

Stefan Gowlar

When I attempt to apply the cluster issuer I am getting "Internal error occurred: failed calling admission webhook "clusterissuers.admission.certmanager.k8s.io": the server is currently unable to handle the request". Any ideas?

Ian Knighton

I don't know off the top of my head, but it looks like this may be something that's popped up recently with cert-manager dealing with ports. There seems to be a fix mentioned here, but I haven't had a chance to test anything with it.

Stefan Gowlar

I am using AKS and can't find a solution to the above :/

Ian Knighton • Edited

I'm kind of winging it, but if you want to go on a trip with me...

  1. Launch the Kubernetes Web Dashboard: az aks browse --resource-group <resourcegroup> --name <clustername>
  2. On the sidebar, find Discovery and Load Balancing > Services
  3. Click on the service called Kubernetes

I believe this should give you the option to edit the YAML. I THINK you should just need to add the port there. Something similar to:

"ports": [
      {
        "name": "https",
        "protocol": "TCP",
        "port": 443,
        "targetPort": 443
      },
      {
        "name": "webhook",
        "protocol": "TCP",
        "port": 6443,
        "targetPort": 6443
      }
    ]

Maybe? This is just a hunch.

You may also be able to accomplish this through kubectl, but I'm not sure.

I'm working on updating my post to specify which version of Cert-Manager to use, since apparently this is an issue after version 0.6.0

Dinesh Rathee

LetsEncrypt revoked around 3 million certs last night due to a bug that they found. Are you impacted by this? Check out:

- DevTo: dev.to/dineshrathee12/letsencrypt-...
- GitHub: github.com/dineshrathee12/Let-s-En...
- LetsEncryptCommunity: community.letsencrypt.org/t/letsen...