Eleni Grosdouli
Install ArgoCD on RKE2 with Nginx Ingress Controller

When you start working with ArgoCD, and with Kubernetes in general, it is not always clear which configuration to use to install ArgoCD on an RKE2 cluster with the integrated Nginx Ingress Controller. ArgoCD is a Kubernetes continuous delivery tool based on GitOps principles. It can be used as a standalone installation or as part of a CI/CD workflow.

This blog post provides a step-by-step approach to installing ArgoCD as a standalone deployment, creating an Ingress Kubernetes resource, and accessing the ArgoCD UI locally.

Lab Setup

+--------------+--------------------+----------------------+
| Cluster Name | Type               | Version              |
+--------------+--------------------+----------------------+
| cluster04    | Management Cluster | RKE2 v1.26.11+rke2r1 |
+--------------+--------------------+----------------------+
+------------+---------+
| Deployment | Version |
+------------+---------+
| ArgoCD     | v2.9.3  |
| Rancher    | v2.7.9  |
+------------+---------+

Step 1: Install ArgoCD

According to the official documentation, there are two ways to install ArgoCD on a cluster: via the Helm chart or via the manifest files. In our case, we will follow the official "Getting Started" guide found here and use the manifest approach.

$ kubectl create namespace argocd
$ kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

The commands above create the "argocd" Kubernetes namespace and deploy the latest stable manifest. If you would like to install a specific version of the manifest, have a look here.
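If you prefer to pin the installation to a specific release instead of tracking "stable", the manifest can be referenced by its Git tag. A minimal sketch, assuming the v2.9.3 release used in this lab is the tag you want:

$ kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.9.3/manifests/install.yaml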

Validate

Let's validate that all the ArgoCD Kubernetes resources are in a "Running" state.

$ kubectl get all -n argocd
NAME                                                    READY   STATUS    RESTARTS   AGE
pod/argocd-application-controller-0                     1/1     Running   0          15m
pod/argocd-applicationset-controller-5877955b59-2j8fj   1/1     Running   0          15m
pod/argocd-dex-server-6c87968c75-rdnck                  1/1     Running   0          15m
pod/argocd-notifications-controller-64bb8dcf46-6tgnd    1/1     Running   0          15m
pod/argocd-redis-7d8d46cc7f-j5mgj                       1/1     Running   0          15m
pod/argocd-repo-server-665d6b7b59-5qmhs                 1/1     Running   0          15m
pod/argocd-server-7bccc77dd8-v5j2s                      1/1     Running   0          2m52s

NAME                                              TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/argocd-applicationset-controller          ClusterIP      10.43.110.123   <none>        7000/TCP,8080/TCP            15m
service/argocd-dex-server                         ClusterIP      10.43.62.176    <none>        5556/TCP,5557/TCP,5558/TCP   15m
service/argocd-metrics                            ClusterIP      10.43.76.103    <none>        8082/TCP                     15m
service/argocd-notifications-controller-metrics   ClusterIP      10.43.129.111   <none>        9001/TCP                     15m
service/argocd-redis                              ClusterIP      10.43.34.24     <none>        6379/TCP                     15m
service/argocd-repo-server                        ClusterIP      10.43.71.49     <none>        8081/TCP,8084/TCP            15m
service/argocd-server                             LoadBalancer   10.43.4.100     x.x.x.x     80:31274/TCP,443:31258/TCP     15m
service/argocd-server-metrics                     ClusterIP      10.43.219.7     <none>        8083/TCP                     15m

NAME                                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/argocd-applicationset-controller   1/1     1            1           15m
deployment.apps/argocd-dex-server                  1/1     1            1           15m
deployment.apps/argocd-notifications-controller    1/1     1            1           15m
deployment.apps/argocd-redis                       1/1     1            1           15m
deployment.apps/argocd-repo-server                 1/1     1            1           15m
deployment.apps/argocd-server                      1/1     1            1           15m

NAME                                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/argocd-applicationset-controller-5877955b59   1         1         1       15m
replicaset.apps/argocd-dex-server-6c87968c75                  1         1         1       15m
replicaset.apps/argocd-notifications-controller-64bb8dcf46    1         1         1       15m
replicaset.apps/argocd-redis-7d8d46cc7f                       1         1         1       15m
replicaset.apps/argocd-repo-server-665d6b7b59                 1         1         1       15m
replicaset.apps/argocd-server-5986f74c99                      0         0         0       15m
replicaset.apps/argocd-server-7bccc77dd8                      1         1         1       2m52s

NAME                                             READY   AGE
statefulset.apps/argocd-application-controller   1/1     15m
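Instead of eyeballing the output, you can also let kubectl block until all Pods report Ready. A small sketch, assuming a five-minute timeout is acceptable for your environment:

$ kubectl wait --for=condition=Ready pods --all -n argocd --timeout=300s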

Step 2: Create an Ingress

An Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. An SSL-Passthrough Ingress example can be found in the official documentation here.

Let's have a look at the most important configurations in the example file.

  1 apiVersion: networking.k8s.io/v1
  2 kind: Ingress
  3 metadata:
  4   name: argocd-server-ingress
  5   namespace: argocd
  6   annotations:
  7     nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
  8     nginx.ingress.kubernetes.io/ssl-passthrough: "true"
  9 spec:
 10   ingressClassName: nginx
 11   rules:
 12   - host: argocd-cluster04.{YOUR DOMAIN}
 13     http:
 14       paths:
 15       - path: /
 16         pathType: Prefix
 17         backend:
 18           service:
 19             name: argocd-server
 20             port:
 21               name: http
 22   tls:
 23   - hosts:
 24     - argocd-cluster04.{YOUR DOMAIN}
 25     secretName: argocd-server-tls # as expected by argocd-server

Line 4: The Ingress name is defined as "argocd-server-ingress". The name can be anything you want.

Line 5: The ArgoCD resources in Step 1 were created in the "argocd" namespace. If this is the case for your deployment, keep the Ingress resource in the same namespace.

Line 6–8: The annotations are used to expose the ArgoCD API server through a single Ingress rule and hostname.

  • The "nginx.ingress.kubernetes.io/ssl-passthrough" annotation, is used to terminate SSL/TLS traffic at the ArgoCD API server instead of the Nginx Ingress Controller

  • The "nginx.ingress.kubernetes.io/force-ssl-redirect: "true"" annotation tells the Nginx Ingress Controller to automatically redirect HTTP requests to HTTPS

For the "ssl-passthrough" annotation to be functional, we need to add the argument "--enable-ssl-passthrough" to the Nginx Ingress Controller Daemonset. The Daemonset name on an RKE2 installation is "rke2-ingress-nginx-controller".

Update the Nginx Ingress Controller Daemonset

First Option: Edit the Daemonset

$ kubectl edit daemonset rke2-ingress-nginx-controller -n kube-system

  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: rke2-ingress-nginx
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: rke2-ingress-nginx
        app.kubernetes.io/part-of: rke2-ingress-nginx
        app.kubernetes.io/version: 1.9.3
        helm.sh/chart: rke2-ingress-nginx-4.8.200
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - --election-id=rke2-ingress-nginx-leader
        - --controller-class=k8s.io/ingress-nginx
        - --ingress-class=nginx
        - --configmap=$(POD_NAMESPACE)/rke2-ingress-nginx-controller
        - --validating-webhook=:8443
        - --validating-webhook-certificate=/usr/local/certificates/cert
        - --validating-webhook-key=/usr/local/certificates/key
        - --watch-ingress-without-class=true

Add the argument "--enable-ssl-passthrough" at the end of the argument list. The output should look like the one below.

Output

template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: rke2-ingress-nginx
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: rke2-ingress-nginx
        app.kubernetes.io/part-of: rke2-ingress-nginx
        app.kubernetes.io/version: 1.9.3
        helm.sh/chart: rke2-ingress-nginx-4.8.200
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - --election-id=rke2-ingress-nginx-leader
        - --controller-class=k8s.io/ingress-nginx
        - --ingress-class=nginx
        - --configmap=$(POD_NAMESPACE)/rke2-ingress-nginx-controller
        - --validating-webhook=:8443
        - --validating-webhook-certificate=/usr/local/certificates/cert
        - --validating-webhook-key=/usr/local/certificates/key
        - --watch-ingress-without-class=true
        - --enable-ssl-passthrough

Second Option: Patch the Daemonset

$ kubectl patch daemonset rke2-ingress-nginx-controller -n kube-system --type='json' -p '[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--enable-ssl-passthrough"}]'

  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: rke2-ingress-nginx
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: rke2-ingress-nginx
        app.kubernetes.io/part-of: rke2-ingress-nginx
        app.kubernetes.io/version: 1.9.3
        helm.sh/chart: rke2-ingress-nginx-4.8.200
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - --election-id=rke2-ingress-nginx-leader
        - --controller-class=k8s.io/ingress-nginx
        - --ingress-class=nginx
        - --configmap=$(POD_NAMESPACE)/rke2-ingress-nginx-controller
        - --validating-webhook=:8443
        - --validating-webhook-certificate=/usr/local/certificates/cert
        - --validating-webhook-key=/usr/local/certificates/key
        - --watch-ingress-without-class=true
        - --enable-ssl-passthrough
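Whichever option you choose, the Daemonset Pods are restarted to pick up the new argument. A quick sketch to confirm that the rollout has finished before moving on:

$ kubectl rollout status daemonset rke2-ingress-nginx-controller -n kube-system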

Line 12: Set the FQDN of your application. It can be any name, as long as your Domain Name System (DNS) can resolve it. Of course, you will need to own a domain for that purpose.

Line 22–25: The "argocd-server-tls" secret holds a new self-signed certificate generated with the OpenSSL utility, as shown in the sketch below.
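Below is a minimal sketch of how such a certificate and the "argocd-server-tls" secret could be created, assuming OpenSSL is installed and the Ingress manifest shown earlier is saved locally as "argocd-server-ingress.yaml" (a hypothetical filename):

$ openssl req -x509 -nodes -newkey rsa:4096 -days 365 \
    -keyout tls.key -out tls.crt \
    -subj "/CN=argocd-cluster04.{YOUR DOMAIN}"   # replace with your own FQDN
$ kubectl create secret tls argocd-server-tls -n argocd --cert=tls.crt --key=tls.key
$ kubectl apply -f argocd-server-ingress.yaml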

Step 3: Validate the ArgoCD Deployment

In the first two steps, we installed all the Kubernetes resources needed for ArgoCD to be functional and created an Ingress resource that works with the Nginx Ingress Controller to allow SSL passthrough. Now comes the big moment: checking whether the deployment actually works.

Validate


$ kubectl get pods,svc,secret -n argocd
NAME                                                    READY   STATUS    RESTARTS   AGE
pod/argocd-application-controller-0                     1/1     Running   0          14m
pod/argocd-applicationset-controller-5877955b59-2j8fj   1/1     Running   0          14m
pod/argocd-dex-server-6c87968c75-rdnck                  1/1     Running   0          14m
pod/argocd-notifications-controller-64bb8dcf46-6tgnd    1/1     Running   0          14m
pod/argocd-redis-7d8d46cc7f-j5mgj                       1/1     Running   0          14m
pod/argocd-repo-server-665d6b7b59-5qmhs                 1/1     Running   0          14m
pod/argocd-server-7bccc77dd8-v5j2s                      1/1     Running   0          103s

NAME                                              TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/argocd-applicationset-controller          ClusterIP      10.43.110.123   <none>        7000/TCP,8080/TCP            14m
service/argocd-dex-server                         ClusterIP      10.43.62.176    <none>        5556/TCP,5557/TCP,5558/TCP   14m
service/argocd-metrics                            ClusterIP      10.43.76.103    <none>        8082/TCP                     14m
service/argocd-notifications-controller-metrics   ClusterIP      10.43.129.111   <none>        9001/TCP                     14m
service/argocd-redis                              ClusterIP      10.43.34.24     <none>        6379/TCP                     14m
service/argocd-repo-server                        ClusterIP      10.43.71.49     <none>        8081/TCP,8084/TCP            14m
service/argocd-server                             LoadBalancer   10.43.4.100     x.x.x.x     80:31274/TCP,443:31258/TCP     14m
service/argocd-server-metrics                     ClusterIP      10.43.219.7     <none>        8083/TCP                     14m

NAME                                 TYPE                DATA   AGE
secret/argocd-initial-admin-secret   Opaque              1      13m
secret/argocd-notifications-secret   Opaque              0      14m
secret/argocd-secret                 Opaque              3      14m
secret/argocd-server-tls             kubernetes.io/tls   2      105s
$ kubectl get ingress -n argocd
NAME                    CLASS   HOSTS                                     ADDRESS                                                                     PORTS     AGE
argocd-server-ingress   nginx   argocd-cluster04.{YOUR DOMAIN}   cluster04-controller-1,cluster04-controller-2,cluster04-worker-1,cluster04-worker-2  80, 443   4m57s

Access the ArgoCD UI

As long as your DNS is correctly set up, you can resolve the FQDN defined in the Ingress configuration. Keep in mind that the ArgoCD deployment is exposed via the HTTPS protocol over port 443.

If you do not control the DNS deployment and you want to perform local testing, for Linux and macOS-based systems, modify the "/etc/hosts" file and add the IP address of a Kubernetes worker node followed by the FQDN, as in the example below. For Windows-based systems, you can modify the "C:\Windows\System32\Drivers\etc\hosts" file.
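A hypothetical "/etc/hosts" entry could look like the following, assuming 192.168.1.21 is the IP address of one of your worker nodes:

192.168.1.21   argocd-cluster04.{YOUR DOMAIN}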

URL: https://argocd-cluster04.{YOUR DOMAIN}

Note: If you use a self-signed certificate, your browser will warn about an untrusted connection to the server. As this is our test environment, you can skip the verification and proceed to the ArgoCD login page.
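To log in, use the "admin" username together with the auto-generated password stored in the "argocd-initial-admin-secret" secret listed earlier. It can be retrieved as follows:

$ kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d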

ArgoCD UI Login
