
M.M.Monirul Islam


AWS EKS Setup with eksctl & Argo CD installation, configuration & deploy app with ArgoCD & Kustomize

Setting up a production Kubernetes service on AWS EKS

The easiest way to create an EKS cluster on AWS is to use eksctl. It is also recommended to run the eksctl CLI from a bastion server in AWS rather than from a laptop.

Create AWS User

First, create an admin user in AWS IAM that can be used programmatically, and obtain the user's AWS Access Key ID and AWS Secret Access Key.
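If the AWS CLI is already configured somewhere with sufficient permissions, a rough sketch of the same steps from the command line could look like the following (the user name eks-admin is just an example, and attaching AdministratorAccess is for simplicity; narrow it in practice):

# Create the IAM user for programmatic access
$ aws iam create-user --user-name eks-admin

# Attach an admin policy (use a least-privilege policy where possible)
$ aws iam attach-user-policy --user-name eks-admin \
    --policy-arn arn:aws:iam::aws:policy/AdministratorAccess

# Generate the Access Key ID / Secret Access Key pair
$ aws iam create-access-key --user-name eks-admin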

Create Bastion Server

Let's create a bastion server in AWS. A t3.small instance type is sufficient.
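As a sketch, the bastion can also be launched with the AWS CLI instead of the console; every ID below (AMI, key pair, subnet, security group) is a placeholder to be replaced with your own values:

# Launch a small EC2 instance to act as the bastion
$ aws ec2 run-instances \
    --image-id ami-xxxxxxxxxxxxxxxxx \
    --instance-type t3.small \
    --key-name <my-key-pair> \
    --subnet-id subnet-xxxxxxxx \
    --security-group-ids sg-xxxxxxxx \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=eks-bastion}]'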

Install kubectl

Install kubectl for the desired Kubernetes version.



$ curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.21.2/2021-07-05/bin/linux/amd64/kubectl
$ chmod +x ./kubectl
$ mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$PATH:$HOME/bin
$ echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc
$ kubectl version --short --client



Reference link: https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html

Install aws cli

To use eksctl, we need to set the credentials of the AWS user.



$ apt install unzip
$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
$ unzip awscliv2.zip
$ ./aws/install



Reference link to install the aws cli:
https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html

AWS cli configuration settings

Now configure the AWS CLI as follows:



$ aws configure
AWS Access Key ID [None]: ~~~
AWS Secret Access Key [None]: ~~~
Default region name [None]: ap-southeast-1
Default output format [None]: json



Reference link: https://docs.aws.amazon.com/cli/latest/userguide/getting-started-quickstart.html

Install eksctl cli



$ curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
$ mv /tmp/eksctl /usr/local/bin
$ eksctl version
0.98.0



Reference link: https://docs.aws.amazon.com/eks/latest/userguide/eksctl.html

Create EKS Cluster

Create the EKS cluster with the eksctl CLI.



$ eksctl create cluster \
--version 1.21 \
--name eks-monirul-cluster \
--vpc-nat-mode HighlyAvailable \
--node-private-networking \
--region ap-southeast-1 \
--node-type t3.medium \
--nodes 2 \
--with-oidc \
--ssh-access \
--ssh-public-key monirul \
--managed



Here,
version: the Kubernetes version to use

vpc-nat-mode: all outbound traffic from the cluster goes out through the NAT gateway. The default is Single, so only one NAT gateway is created. In a development environment that may not matter, but in production we must use the HighlyAvailable option so that one is created per subnet.

node-private-networking: without this option, the node group is created in a public subnet. Use this option so that nodes are created in a private subnet for security.

node-type: the instance type of the nodes to be created

nodes: the number of nodes to be created

Check if the cluster has been successfully created



$ kubectl get nodes -o wide



EKS security-related settings

Up to this point, a Kubernetes cluster has been created in a secure, highly available way suitable for production use. But for security, we need to do one more thing: the cluster endpoint is currently allowed to be public.
https://aws.amazon.com/blogs/containers/de-mystifying-cluster-networking-for-amazon-eks-worker-nodes/
https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html

To restrict this, enable private access and limit the public CIDR block so that kubectl commands can only be issued from the bastion server. Enter the public IPv4 address of the bastion server.



$ eksctl utils update-cluster-endpoints --cluster=eks-monirul-cluster --private-access=true --public-access=true --approve
$ eksctl utils set-public-access-cidrs --cluster=eks-monirul-cluster 1.1.1.1/32 --approve



Here, 1.1.1.1/32 is the public IP address of the bastion server.

In fact, the entire EKS cluster, from creation through the security-related settings above, can be created at once by generating a YAML config file with eksctl's --dry-run option and specifying all the options there.
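For example, a sketch of that workflow (the file name cluster.yaml is arbitrary):

# Write the generated ClusterConfig to a file instead of creating resources
$ eksctl create cluster \
  --version 1.21 \
  --name eks-monirul-cluster \
  --vpc-nat-mode HighlyAvailable \
  --node-private-networking \
  --region ap-southeast-1 \
  --node-type t3.medium \
  --nodes 2 \
  --with-oidc \
  --ssh-access \
  --ssh-public-key monirul \
  --managed \
  --dry-run > cluster.yaml

# Review/edit cluster.yaml (e.g. the cluster endpoint access settings), then create the cluster from it
$ eksctl create cluster -f cluster.yaml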

Now, if we install Istio in the EKS cluster created this way and deploy a service, it can be used immediately.

Argo CD installation, setup & deploy a simple app with kustomize

Argo CD monitors changes to the Kubernetes manifests managed with GitOps and keeps what is deployed in the actual cluster in sync with them.

Argo CD installation

https://argo-cd.readthedocs.io/en/stable/getting_started/
Install Argo CD on the Kubernetes cluster.

In production, install the HA version of Argo CD.



$ kubectl create namespace argocd
$ kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/ha/install.yaml



Install Argo CD CLI

Install the Argo CD CLI binary:



$ curl -sSL -o /usr/local/bin/argocd https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
$ chmod +x /usr/local/bin/argocd



Argo CD service exposure

Argo CD does not expose its server to the outside by default. Change the service type to LoadBalancer as shown below to expose it externally.



$ kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'



Change admin password

Argo CD stores the initial password of the admin account as a Kubernetes secret. Retrieve the password as below.



$ kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo
jaIyQ3MMuLnl6h0l




Log in to Argo CD using the Argo CD CLI. First, get the address of the created Load Balancer.



$ kubectl get svc argocd-server -n argocd



Then log in. The username is admin.



$ argocd login <ARGOCD_SERVER_DOMAIN/URL>



Update the password of the admin user after the first login.



$ argocd account update-password



Argo CD polls the Git repository once every 3 minutes to check for differences from the actual Kubernetes cluster. Therefore, if we are unlucky with the timing of a deployment, we may have to wait up to 3 minutes before Argo CD deploys the changed image. To eliminate this polling delay, we can configure a webhook from the Git repository to Argo CD. Here is the link:
https://argo-cd.readthedocs.io/en/stable/operator-manual/webhook/
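A rough sketch of the setup, assuming a GitHub repository: add a webhook in the repository settings whose payload URL points at Argo CD's /api/webhook endpoint, and optionally store the shared webhook secret in the argocd-secret Secret so Argo CD can verify the payloads. The secret value below is a placeholder.

# GitHub repo -> Settings -> Webhooks -> Add webhook
#   Payload URL : https://<ARGOCD_SERVER_DOMAIN>/api/webhook
#   Content type: application/json

# Store the matching webhook secret on the Argo CD side (optional but recommended)
$ kubectl -n argocd patch secret argocd-secret \
    -p '{"stringData": {"webhook.github.secret": "<shared-webhook-secret>"}}'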

Also, Argo CD's security group usually allows inbound access only from specific IPs so that only internal developers or operators can reach it. In that case, if we have created a webhook as above, we must also allow GitHub's webhook source IP ranges inbound to Argo CD's load balancer.

The link below contains information about GitHub's IP addresses:
https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/about-githubs-ip-addresses
We can check the IP ranges that actually need to be allowed inbound at the URL below; use the list under the hooks key.
https://api.github.com/meta
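For example, the current hook source ranges can be listed like this (assuming jq is installed):

# Print GitHub's webhook source CIDR ranges
$ curl -s https://api.github.com/meta | jq -r '.hooks[]'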

If we configure Argo CD Notifications to send notifications to Slack, the development team can receive a notification when a deployment is not successful.
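A minimal sketch, assuming the notifications engine bundled with recent Argo CD versions, its default trigger/template catalog, and a Slack bot token stored under the key slack-token in the argocd-notifications-secret Secret:

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
  namespace: argocd
data:
  # Enable the Slack service; $slack-token is resolved from argocd-notifications-secret
  service.slack: |
    token: $slack-token

Then subscribe an Application to a trigger with an annotation such as notifications.argoproj.io/subscribe.on-sync-failed.slack: devops-alerts (the channel name is just an example).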

There is also a way to set up projects by using Argo CD's ApplicationSet. If we used the App of Apps pattern in the past, ApplicationSets are a more advanced version of that concept.
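A minimal ApplicationSet sketch using the list generator; the repository URL, paths, and environment names are placeholders:

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: nginx-environments
  namespace: argocd
spec:
  generators:
  # One Application is generated per element in this list
  - list:
      elements:
      - env: dev
      - env: prod
  template:
    metadata:
      name: 'nginx-{{env}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/<org>/<repo>.git
        targetRevision: HEAD
        path: 'overlays/{{env}}'
      destination:
        server: https://kubernetes.default.svc
        # Target namespace must exist or be created via sync options
        namespace: 'nginx-{{env}}'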

https://opengitops.dev/
https://github.com/open-gitops/documents

Currently, Kustomize is used for Kubernetes configuration management, and the Kustomize manifests are deployed from Argo CD. If we are using branches per environment with Kustomize or Helm, it might be helpful to read "Stop Using Branches for Deploying to Different GitOps Environments".

Kustomize YAML:

kustomization.yaml



apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namePrefix: kustomize-monirul-

resources:
- nginx-deployment.yaml
- nginx-svc.yaml


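Before committing, the rendered output can be checked locally, for example:

# Render the manifests locally to verify the namePrefix and resources
$ kubectl kustomize .
# or, with the standalone kustomize binary
$ kustomize build .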

Deployment definition file: nginx-deployment.yaml



apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx



Service definition file: nginx-svc.yaml



apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: ClusterIP




Argo CD configuration to deploy the simple app:
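The configuration created in the UI can also be expressed declaratively as an Argo CD Application manifest; a rough sketch, with the repository URL and path as placeholders:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: kustomize-nginx
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<org>/<repo>.git
    targetRevision: HEAD
    path: kustomize/nginx        # directory containing kustomization.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true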


Screenshot of Argo CD:

