Hello, in this article I want to write about how to use Amazon EKS on AWS Fargate. Before getting started, I want to share my earlier article about Amazon EKS on Amazon EC2 at this link. But what is AWS Fargate? AWS Fargate is a serverless compute engine for containers from AWS that can be used with both Amazon ECS and Amazon EKS.
Before creating the EKS cluster, install several tools:
- eksctl is a command line tool for creating and managing EKS clusters. Install eksctl with these commands:
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
eksctl version
- kubectl is the command line tool for working with Kubernetes clusters and their worker nodes. Install kubectl with these commands:
curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/1.24.7/2022-10-31/bin/linux/amd64/kubectl
chmod +x ./kubectl
mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$PATH:$HOME/bin
kubectl version --short --client
- AWS CLI is the command line interface for AWS services. Install the AWS CLI with these commands:
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
aws --version
After these tools are installed, create an EKS cluster with AWS Fargate. NOTE: eksctl uses CloudFormation under the hood to create the cluster resources, such as the VPC, NAT gateway, and many more.
eksctl create cluster --name eksbudionosan --region us-west-2 --without-nodegroup --version 1.24 --fargate
In the AWS console, click the name of the EKS cluster that was just created, then click Compute. When an EKS cluster is created with Fargate, a Fargate profile is also created.
A Fargate profile defines which pods run on Fargate in the EKS cluster. If you want to create another Fargate profile, you can do so with the following command (a config-file sketch follows it):
eksctl create fargateprofile \
--cluster eksbudionosan \
--name my-fargate-profile \
--namespace my-kubernetes-namespace \
--labels key=value
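As a side note, eksctl also accepts a cluster config file, so the same Fargate profile can be described in YAML. This is only a sketch that reuses the cluster name, region, namespace, and labels from the command above; it is not taken from my repository:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: eksbudionosan
  region: us-west-2
fargateProfiles:
  # Pods in this namespace with these labels are scheduled on Fargate.
  - name: my-fargate-profile
    selectors:
      - namespace: my-kubernetes-namespace
        labels:
          key: value
If you save this as a file, it should be possible to pass it to eksctl create fargateprofile with the --config-file (-f) flag instead of the individual flags.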
Creating an EKS cluster also automatically creates a VPC, subnets, security groups, a NAT gateway, and more. Click VPC in the console for details.
The VPC for the EKS cluster is named eksctl-youreksclustername-cluster/VPC.
If you want to read more about what a VPC is, you can see the link.
The security groups for EKS include the control plane security group, the shared node security group, the load balancer security group, and more.
After the EKS cluster is ready to use, associate an IAM OIDC (Identity and Access Management OpenID Connect) provider so that IAM roles can be used for service accounts in the EKS cluster.
eksctl utils associate-iam-oidc-provider --cluster eksbudionosan --region us-west-2 --approve
The AWS Load Balancer Controller manages Elastic Load Balancers for an EKS cluster:
- An AWS Application Load Balancer (ALB) is used when you create a Kubernetes Ingress.
An Ingress routes HTTP and HTTPS traffic from clients to services in the Kubernetes cluster.
To use an Ingress, you must have an ingress controller; for this tutorial, I use the AWS Load Balancer Controller as the ingress controller.
- An AWS Network Load Balancer (NLB) is used when you create a Kubernetes Service of type LoadBalancer.
For this tutorial, I use an AWS Application Load Balancer (ALB). First, create an IAM policy for the AWS Load Balancer Controller:
curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.4.4/docs/install/iam_policy.json
aws iam create-policy \
--policy-name AWSLoadBalancerControllerIAMPolicy \
--policy-document file://iam_policy.json
Then create an IAM role for a Kubernetes service account named aws-load-balancer-controller in the kube-system namespace (replace <YOUR_AWS_ACCOUNT> with your AWS account ID):
eksctl create iamserviceaccount \
--cluster=eksbudionosan \
--namespace=kube-system \
--name=aws-load-balancer-controller \
--attach-policy-arn=arn:aws:iam::<YOUR_AWS_ACCOUNT>:policy/AWSLoadBalancerControllerIAMPolicy \
--override-existing-serviceaccounts \
--approve
Next, install Helm. Helm is a package manager for Kubernetes:
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
Install the AWS Load Balancer Controller using the Helm that was just installed. When installing the AWS Load Balancer Controller on AWS Fargate, also set the region and vpcId values (replace YOUR_VPC_EKS with the VPC ID of your EKS cluster):
helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
-n kube-system \
--set clusterName=eksbudionosan \
--set serviceAccount.create=false \
--set serviceAccount.name=aws-load-balancer-controller \
--set region=us-west-2 \
--set vpcId=YOUR_VPC_EKS
After installing the AWS Load Balancer Controller, check whether it is actually running:
kubectl get deployment -n kube-system aws-load-balancer-controller
Create a deployment in the EKS cluster using my private Amazon ECR image:
kubectl create deployment eksbudionosan --image=<YOUR_ECR_IMAGE>
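For reference, the one-line command above is roughly what the following Deployment manifest describes. This is only an illustration of what kubectl create deployment generates (a single replica and an app: eksbudionosan label); <YOUR_ECR_IMAGE> stays a placeholder:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: eksbudionosan
  labels:
    app: eksbudionosan
spec:
  replicas: 1
  selector:
    matchLabels:
      app: eksbudionosan
  template:
    metadata:
      labels:
        # The service selector must match this pod label.
        app: eksbudionosan
    spec:
      containers:
        - name: eksbudionosan
          image: <YOUR_ECR_IMAGE>   # your private Amazon ECR image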
For the service, I share my service YAML file for the EKS cluster in my GitHub:
learnaws/service.yaml at main · budionosan/learnaws (github.com)
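If you only want the shape of the file, here is a minimal sketch of such a service. It is not a copy of my file from GitHub: the port 80 and the app: eksbudionosan selector are assumptions, and the selector must match the labels on your deployment's pods:
apiVersion: v1
kind: Service
metadata:
  name: eksbudionosan
spec:
  type: NodePort              # NodePort or ClusterIP both work with IP targets
  selector:
    app: eksbudionosan        # adjust to match your pod labels
  ports:
    - protocol: TCP
      port: 80                # assumed application port
      targetPort: 80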
Also for the ingress, I share my ingress YAML file for the EKS cluster in my GitHub:
learnaws/ingress.yaml at main · budionosan/learnaws (github.com)
When creating the ingress, a few values are set: the load balancer name, the scheme (internet-facing or internal), the target type (ip or instance; on Fargate only ip works), and the ingress class name alb for the Application Load Balancer.
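Here is a minimal sketch of such an ingress with those values filled in. The load balancer name is a placeholder, the service name and port are assumed to match the service sketch above, and the real file is in the GitHub repository:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: eksbudionosan
  annotations:
    alb.ingress.kubernetes.io/load-balancer-name: eksbudionosan-alb   # placeholder name
    alb.ingress.kubernetes.io/scheme: internet-facing                 # or internal
    alb.ingress.kubernetes.io/target-type: ip                         # Fargate pods only support ip targets
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: eksbudionosan      # the service created above
                port:
                  number: 80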
After creating the service and ingress YAML files, apply both files to create a load balancer in AWS:
kubectl apply -f service.yaml
kubectl apply -f ingress.yaml
Check Amazon EC2 in the console, scroll down and click Load Balancers. You will see two load balancers with their URLs: one of the classic type (deprecated) and one of the application type.
Click the application load balancer name for details. This load balancer is associated with the VPC that was created along with the EKS cluster.
Still in Amazon EC2, check Target Groups. The target group is associated with the load balancer. Click the target group name for details.
Back on the load balancer, copy the DNS name into a new tab. My web application is running and ready to use, which means the load balancer is working.
Thank you very much for reading this tutorial :)
Top comments (1)
I was getting a 503 bad gateway with your service file and ingress file. Instead of the selector key app, use app.kubernetes.io/name.