Fargate, a groundbreaking technology, streamlines container orchestration by providing on-demand, right-sized compute capacity. You escape the complexities of manually provisioning, configuring, and scaling virtual machines. It's the go-to choice when workload requirements are uncertain and rapid deployment is paramount, saving valuable time in capacity planning.
However, EKS Fargate poses its own challenges, especially when exposing K8s services as LoadBalancers. Is it worth it? Today, we unravel this mystery and provide a smooth path for its implementation.
Challenges with LoadBalancer Type in EKS Fargate:
In standard EKS, exposing services as LoadBalancers is straightforward. You define your service manifest with type: LoadBalancer, and the magic happens. But in EKS Fargate, you might notice your LoadBalancer stuck in a "pending" status.
Example:
apiVersion: v1
kind: Service
metadata:
  name: nlb-sample-service
  namespace: test-1
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: LoadBalancer
  selector:
    app: nginx
Upon applying this in Fargate, you'll witness a perpetually pending external LoadBalancer.
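For example, assuming the manifest above is saved as nlb-sample-service.yaml (the file name here is just an example), applying it looks like this:
kubectl apply -f nlb-sample-service.yaml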
~ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nlb-sample-service LoadBalancer 172.20.39.142 <pending> 80:30843/TCP 3m14s
Checking the service description reveals an "Ensuring LoadBalancer" event.
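For example, using the service name and namespace from the manifest above:
kubectl describe svc nlb-sample-service -n test-1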
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 3m55s service-controller Ensuring load balancer
How to Make It Run Smoothly?
Steps to Deploy K8s Service as LoadBalancer Type in EKS Fargate:
1. Deploy the AWS Load Balancer Controller to Your Amazon EKS Cluster:
Before you use any service of type LoadBalancer, you need to deploy the AWS Load Balancer Controller to your Fargate cluster.
- Download an IAM policy that allows the AWS Load Balancer Controller to make calls to AWS APIs on your behalf, using the following command.
A. For AWS GovCloud (US-East) or AWS GovCloud (US-West) AWS Regions
curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.5.4/docs/install/iam_policy_us-gov.json
B. All other AWS Regions
curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.5.4/docs/install/iam_policy.json
- Create an IAM policy using the policy downloaded in the previous step. If you downloaded iam_policy_us-gov.json, change iam_policy.json to iam_policy_us-gov.json before running the command.
aws iam create-policy \
--policy-name AWSLoadBalancerControllerIAMPolicy \
--policy-document file://iam_policy.json
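The create-policy command prints the ARN of the new policy, and you'll need that ARN (built from your account ID) in the next step. As a small sketch, assuming the AWS CLI is configured for the right account, you can look up the account ID and construct the ARN like this:
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
echo "arn:aws:iam::${ACCOUNT_ID}:policy/AWSLoadBalancerControllerIAMPolicy"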
- Create a service account named aws-load-balancer-controller in the kube-system namespace for the AWS Load Balancer Controller. Use the following command:
eksctl create iamserviceaccount \
--cluster=YOUR_CLUSTER_NAME \
--namespace=kube-system \
--name=aws-load-balancer-controller \
--attach-policy-arn=arn:aws:iam::<AWS_ACCOUNT_ID>:policy/AWSLoadBalancerControllerIAMPolicy \
--override-existing-serviceaccounts \
--approve
The output should be something like the following.
2024-03-06 16:27:17 [ℹ] 1 iamserviceaccount (kube-system/aws-load-balancer-controller) was included (based on the include/exclude rules)
2024-03-06 16:27:17 [!] metadata of serviceaccounts that exist in Kubernetes will be updated, as --override-existing-serviceaccounts was set
2024-03-06 16:27:17 [ℹ] 1 task: {
2 sequential sub-tasks: {
.......
2024-03-06 16:27:50 [ℹ] created serviceaccount "kube-system/aws-load-balancer-controller"
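Note that eksctl implements this through IAM Roles for Service Accounts (IRSA), which requires an IAM OIDC provider to be associated with the cluster. If the command above fails because no OIDC provider exists yet, one can be associated along these lines (cluster name and region are placeholders):
eksctl utils associate-iam-oidc-provider \
  --cluster=YOUR_CLUSTER_NAME \
  --region=YOUR_REGION \
  --approve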
- Install the AWS Load Balancer Controller with Helm using the following command.
helm repo add eks https://aws.github.io/eks-charts
helm upgrade --install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=Your-Cluster-Name \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller \
  --set region=Your-Region \
  --set vpcId=Your-VPC-ID
Note that we have to set region and vpcId here. Why? The Amazon EC2 instance metadata service (IMDS) isn't available to Pods that are deployed to Fargate nodes, so if you don't specify the region and VPC ID, the controller pod won't be able to discover them from instance metadata.
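If you're not sure which VPC your cluster uses, it can be read from the cluster itself, for example:
aws eks describe-cluster \
  --name YOUR_CLUSTER_NAME \
  --query "cluster.resourcesVpcConfig.vpcId" \
  --output text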
- Verify that the controller is installed.
$ kubectl get deployment -n kube-system aws-load-balancer-controller
NAME READY UP-TO-DATE AVAILABLE AGE
aws-load-balancer-controller 2/2 2 2 84s
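If the deployment doesn't become ready, the controller logs usually point to the cause (for example, a missing IAM permission):
kubectl logs -n kube-system deployment/aws-load-balancer-controller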
2. Ready to Deploy Your Service:
Now that the controller is in place, reapply your LoadBalancer service.
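If the service from the earlier attempt is still stuck in pending, the simplest approach is to delete it and apply the same manifest again (again assuming the example file name used above):
kubectl delete -f nlb-sample-service.yaml
kubectl apply -f nlb-sample-service.yaml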
~ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nlb-sample-service LoadBalancer 172.20.176.78 k8s-test1-nlbsampl-xxxx.onaws.com 80:31406/TCP 95m
Now you can seamlessly use the service of type LoadBalancer in EKS Fargate.
Things to Consider When Exposing Your Service as a LoadBalancer:
Network Load Balancers (NLBs) and Application Load Balancers (ALBs) can be used with Fargate only with IP targets, since there are no EC2 instances to register as targets.
Once you deploy the AWS Load Balancer Controller in your cluster, its load balancer class becomes the default for new services of type LoadBalancer, so the controller takes over reconciling them.
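If you prefer to make that explicit rather than rely on the default, the load balancer class can also be set in the service spec itself; a minimal sketch, assuming the class name used by the AWS Load Balancer Controller:
spec:
  type: LoadBalancer
  loadBalancerClass: service.k8s.aws/nlb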
Conclusion:
EKS Fargate offers incredible simplicity and flexibility. With the AWS LoadBalancer Controller, hurdles in exposing K8s services as LoadBalancers are conquered. Seamless integration of this essential feature enriches your container orchestration experience.
For further insights or any questions, connect with me on: