Nishok Vishnu Ganesan

EKS Cluster Autoscaler and Testing

To install the Kubernetes Cluster Autoscaler using the terraform-aws-eks-cluster-autoscaler module from the DNXLabs GitHub repository, follow these steps:

  1. Clone the Terraform EKS Cluster Autoscaler Repository: Clone the repository to your local machine:
   git clone https://github.com/DNXLabs/terraform-aws-eks-cluster-autoscaler.git
  2. Navigate to the Repository: Move into the cloned repository directory:
   cd terraform-aws-eks-cluster-autoscaler
  3. Update Variables:
    Edit the variables.tf file to set the required variables, such as cluster_name and region, to match your environment.

  4. Initialize Terraform:
    Initialize Terraform in the repository directory:

   terraform init
  5. Review and Apply: Review the changes that Terraform will make and then apply them:
   terraform apply

Confirm the changes when prompted.

  6. Configure Kubernetes Autoscaler:
    After Terraform applies the changes, configure your Kubernetes cluster to use the autoscaler. This may involve deploying the necessary Kubernetes resources (such as a Deployment or DaemonSet).

  7. Verify Installation:
    Check the pods running in the kube-system namespace to ensure that the Cluster Autoscaler pods are up and running:

   kubectl get pods -n kube-system | grep cluster-autoscaler
  8. Testing and Monitoring: Test the behavior of the Cluster Autoscaler by deploying workloads that require additional nodes. Monitor the scaling activities and verify that nodes are added or removed as needed.

Note that the exact steps might vary based on your cluster configuration, Terraform version, and changes made to the repository over time. Always consult the repository's README for the most up-to-date instructions, and test any changes or new deployments in a non-production environment before applying them to production.
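Conceptually, the Cluster Autoscaler watches for pods that cannot be scheduled and adds nodes to the node group until they fit. The toy Python sketch below is not the real autoscaler (which is written in Go and simulates scheduling per node group, respecting taints, labels, and allocatable headroom); it just illustrates the core capacity check that decides a scale-up:

```python
import math

def nodes_needed(pending_pods, node_cpu, node_mem_mi):
    """Toy estimate of extra nodes required to fit pending pod requests.

    Sums requests against per-node capacity; the real Cluster Autoscaler
    does a full scheduling simulation rather than this simple bin count.
    """
    total_cpu = sum(p["cpu"] for p in pending_pods)
    total_mem = sum(p["mem_mi"] for p in pending_pods)
    return max(math.ceil(total_cpu / node_cpu),
               math.ceil(total_mem / node_mem_mi))

# One pending pod requesting 4 CPU / 8000Mi, with hypothetical nodes
# sized at 4 vCPU / 16384Mi each: one extra node is enough.
pods = [{"cpu": 4, "mem_mi": 8000}]
print(nodes_needed(pods, node_cpu=4, node_mem_mi=16384))  # 1
```

Deploying three such pods at once would, by the same logic, call for three extra nodes, since each pod's CPU request consumes a whole node in this example.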

Stress Testing:
The following YAML is a Kubernetes Deployment manifest that deploys an Nginx container with specific resource limits and requests, along with a nodeSelector. The large requests are sized to exceed the spare capacity of existing nodes, leaving the pod Pending so that the Cluster Autoscaler has to add a node.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        resources:
          limits:
            cpu: 4
            memory: 8000Mi
          requests:
            cpu: 4
            memory: 8000Mi
      nodeSelector:
        customLabel: application  # Replace 'customLabel' with a label relevant to your nodes

In the nodeSelector section, replace customLabel: application with a label that is actually applied to your nodes (you can add one with kubectl label nodes <node-name> customLabel=application); the pod will only be scheduled on nodes that carry this label.

After making this adjustment, apply the manifest with kubectl apply -f filename.yaml, replacing filename.yaml with the actual name of the file containing this manifest, and assuming you have the Kubernetes CLI (kubectl) installed and configured to access your cluster.
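Why does this manifest trigger a scale-up at all? The kube-scheduler only places a pod on a node whose remaining allocatable resources cover the pod's requests, and a 4-vCPU node typically has slightly less than 4 CPU allocatable after kube-reserved/system-reserved overhead. The simplified Python predicate below (the real scheduler checks many more conditions, e.g. taints and the nodeSelector itself, and the node sizes here are illustrative assumptions) shows the fit check that fails and leaves the stress pod Pending:

```python
def fits(node_alloc, pods_on_node, pod_request):
    """Simplified scheduler predicate: does the pod's request fit in the
    node's remaining allocatable CPU and memory?"""
    used_cpu = sum(p["cpu"] for p in pods_on_node)
    used_mem = sum(p["mem_mi"] for p in pods_on_node)
    return (node_alloc["cpu"] - used_cpu >= pod_request["cpu"] and
            node_alloc["mem_mi"] - used_mem >= pod_request["mem_mi"])

# Hypothetical 4-vCPU node: allocatable is below 4 CPU after reserved
# overhead, so the 4-CPU stress pod cannot be scheduled on it.
node = {"cpu": 3.92, "mem_mi": 15000}
stress_pod = {"cpu": 4, "mem_mi": 8000}
print(fits(node, [], stress_pod))  # False -> pod stays Pending
```

An unschedulable pod like this is exactly what the Cluster Autoscaler reacts to: it provisions a node large enough for the request (here, something bigger than 4 vCPU), after which the pod schedules and the test succeeds.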
