Roy Ra for AWS Community Builders


Deploying a simple application to EKS on Fargate

In this post, I'll describe how to deploy a server application on EKS using Fargate. I'll also go through all the problems I encountered while deploying it. This is my first time using EKS!

Step (1) - Configuring credentials and environment

  • Below are the required tools that we need to successfully deploy EKS in this post.

    • AWS CLI v2: Install following this documentation.
    • kubectl: Install version 1.22.6, since we will be deploying Kubernetes version 1.21.
  curl -o kubectl https://s3.us-west-2.amazonaws.com/amazon-eks/1.22.6/2022-03-09/bin/darwin/amd64/kubectl
  chmod +x ./kubectl
  mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$HOME/bin:$PATH
  echo 'export PATH=$PATH:$HOME/bin' >> ~/.bash_profile
  kubectl version --short --client
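    • eksctl: the third tool, which we will use in step (3) to create the cluster. Below is a hedged install sketch for macOS/Linux; the download URL follows the official eksctl release instructions at the time of writing, so double-check the eksctl documentation for the current command.
  # Download the latest eksctl release and put the binary on the PATH
  curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
  sudo mv /tmp/eksctl /usr/local/bin
  eksctl version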

After installing all 3 tools above, create an IAM user with programmatic access enabled to use from your local terminal. In my case, I created a temporary IAM user and attached the AdministratorAccess policy to it.
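For reference, creating that temporary user from the CLI could look like the sketch below; the user name demo-eks-admin is just a placeholder.

aws iam create-user --user-name demo-eks-admin
aws iam attach-user-policy --user-name demo-eks-admin --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
aws iam create-access-key --user-name demo-eks-admin
# Register the returned access key ID and secret access key locally
aws configure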

We will also use KMS, which provides an easy way to configure and manage encryption keys used by various AWS services. Let's create a customer managed key (CMK) and an alias pointing to it.

aws kms create-alias --alias-name alias/demo-eks --target-key-id $(aws kms create-key --query KeyMetadata.Arn --output text)
  • We will use the created CMK's ARN later on, so let's store it as an environment variable.
export MASTER_ARN=$(aws kms describe-key --key-id alias/demo-eks --query KeyMetadata.Arn --output text)
echo $MASTER_ARN > master_arn.txt

Step (2) - Configuring VPC

  • Now we're going to create a new VPC, which will be used by the EKS cluster that we will create later on.

  • eks-vpc-3az.yaml is the CloudFormation template that will be used to create the resources below.

    • VPC.
    • 3 public subnets.
    • 3 private subnets.
    • 1 internet gateway.
    • 3 NAT gateways.
    • 3 public route tables and 3 private route tables, one for each subnet.
    • Security groups for the EKS cluster.
  • Let's create this VPC using the command below.

  aws cloudformation deploy \
    --stack-name "demo-eks-vpc" \
    --template-file "eks-vpc-3az.yaml" \
    --capabilities CAPABILITY_NAMED_IAM
  • After about 5 minutes, we can see that a brand new VPC has been created in the AWS Management Console.

  • Navigating to the CloudFormation page and selecting the "demo-eks-vpc" stack, we can see the outputs shown in the image below in the Outputs tab.

CloudFormation stack outputs
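The same outputs can also be read from the CLI:

aws cloudformation describe-stacks --stack-name demo-eks-vpc --query "Stacks[0].Outputs" --output table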


Step (3) - Provisioning EKS Cluster using eksctl

Setting environment variables

  • The ID of the VPC that we created in step (2) and its subnet IDs will be used frequently in this step, so let's run the script below to store them as environment variables.
#VPC ID
export vpc_ID=$(aws ec2 describe-vpcs --filters Name=tag:Name,Values=demo-eks-vpc | jq -r '.Vpcs[].VpcId')
echo $vpc_ID

#Subnet ID, CIDR, Subnet Name
aws ec2 describe-subnets --filter Name=vpc-id,Values=$vpc_ID | jq -r '.Subnets[]|.SubnetId+" "+.CidrBlock+" "+(.Tags[]|select(.Key=="Name").Value)'
echo $vpc_ID > vpc_subnet.txt
aws ec2 describe-subnets --filter Name=vpc-id,Values=$vpc_ID | jq -r '.Subnets[]|.SubnetId+" "+.CidrBlock+" "+(.Tags[]|select(.Key=="Name").Value)' >> vpc_subnet.txt
cat vpc_subnet.txt

# Store VPC ID, Subnet IDs as environment variables.
export PublicSubnet01=$(aws ec2 describe-subnets --filter Name=vpc-id,Values=$vpc_ID | jq -r '.Subnets[]|.SubnetId+" "+.CidrBlock+" "+(.Tags[]|select(.Key=="Name").Value)' | awk '/demo-eks-vpc-PublicSubnet01/{print $1}')
export PublicSubnet02=$(aws ec2 describe-subnets --filter Name=vpc-id,Values=$vpc_ID | jq -r '.Subnets[]|.SubnetId+" "+.CidrBlock+" "+(.Tags[]|select(.Key=="Name").Value)' | awk '/demo-eks-vpc-PublicSubnet02/{print $1}')
export PublicSubnet03=$(aws ec2 describe-subnets --filter Name=vpc-id,Values=$vpc_ID | jq -r '.Subnets[]|.SubnetId+" "+.CidrBlock+" "+(.Tags[]|select(.Key=="Name").Value)' | awk '/demo-eks-vpc-PublicSubnet03/{print $1}')
export PrivateSubnet01=$(aws ec2 describe-subnets --filter Name=vpc-id,Values=$vpc_ID | jq -r '.Subnets[]|.SubnetId+" "+.CidrBlock+" "+(.Tags[]|select(.Key=="Name").Value)' | awk '/demo-eks-vpc-PrivateSubnet01/{print $1}')
export PrivateSubnet02=$(aws ec2 describe-subnets --filter Name=vpc-id,Values=$vpc_ID | jq -r '.Subnets[]|.SubnetId+" "+.CidrBlock+" "+(.Tags[]|select(.Key=="Name").Value)' | awk '/demo-eks-vpc-PrivateSubnet02/{print $1}')
export PrivateSubnet03=$(aws ec2 describe-subnets --filter Name=vpc-id,Values=$vpc_ID | jq -r '.Subnets[]|.SubnetId+" "+.CidrBlock+" "+(.Tags[]|select(.Key=="Name").Value)' | awk '/demo-eks-vpc-PrivateSubnet03/{print $1}')
echo "export vpc_ID=${vpc_ID}" | tee -a ~/.bash_profile
echo "export PublicSubnet01=${PublicSubnet01}" | tee -a ~/.bash_profile
echo "export PublicSubnet02=${PublicSubnet02}" | tee -a ~/.bash_profile
echo "export PublicSubnet03=${PublicSubnet03}" | tee -a ~/.bash_profile
echo "export PrivateSubnet01=${PrivateSubnet01}" | tee -a ~/.bash_profile
echo "export PrivateSubnet02=${PrivateSubnet02}" | tee -a ~/.bash_profile
echo "export PrivateSubnet03=${PrivateSubnet03}" | tee -a ~/.bash_profile
source ~/.bash_profile
  • After running all the commands above, the VPC ID and subnet IDs will be stored in environment variables. There will also be a file called vpc_subnet.txt, which will look like this:
vpc-0e88a2ed7a32c0336
subnet-02b5356084f4355cb 10.11.16.0/20 demo-eks-vpc-PublicSubnet02
subnet-0ea280f1567234a3b 10.11.253.0/24 demo-eks-vpc-TGWSubnet03
subnet-0a79d22a3acf610bf 10.11.32.0/20 demo-eks-vpc-PublicSubnet03
subnet-0c96f4c64e724524b 10.11.251.0/24 demo-eks-vpc-TGWSubnet01
subnet-0d5d255e8542cf405 10.11.64.0/20 demo-eks-vpc-PrivateSubnet02
subnet-0c3c37774542ac95c 10.11.252.0/24 demo-eks-vpc-TGWSubnet02
subnet-07f97eaa984b5ced2 10.11.0.0/20 demo-eks-vpc-PublicSubnet01
subnet-0ebb353a91c36ab17 10.11.48.0/20 demo-eks-vpc-PrivateSubnet01
subnet-0e3c3abe52dbe07e8 10.11.80.0/20 demo-eks-vpc-PrivateSubnet03
  • Let's also store the AWS region code.
export AWS_REGION=ap-northeast-2 # in my case

Creating EKS Cluster

  • The cluster's name and Kubernetes version will be used by the yaml files, so let's store them as environment variables.
export ekscluster_name="demo-eks"
export eks_version="1.21"
  • To create the EKS cluster, we will use eks-cluster-3az.yaml.

  • We will configure nodes to run on Fargate instead of EC2, and according to this documentation, Fargate pods must be provisioned in private subnets.

  • Let's run the command below to generate eks-cluster-3az.yaml with the environment variables filled in with actual values.

cat << EOF > eks-cluster-3az.yaml
# A simple example of ClusterConfig object:
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: ${ekscluster_name}
  region: ${AWS_REGION}
  version: "${eks_version}"

vpc:
  id: ${vpc_ID}
  subnets:
    private:
      PrivateSubnet01:
        az: ${AWS_REGION}a
        id: ${PrivateSubnet01}
      PrivateSubnet02:
        az: ${AWS_REGION}b
        id: ${PrivateSubnet02}
      PrivateSubnet03:
        az: ${AWS_REGION}c
        id: ${PrivateSubnet03}

secretsEncryption:
  keyARN: ${MASTER_ARN}

fargateProfiles:
  - name: demo-dev-fp
    selectors:
      - namespace: demo-dev
      - namespace: kube-system
    subnets:
      - ${PrivateSubnet01}
      - ${PrivateSubnet02}
      - ${PrivateSubnet03}

cloudWatch:
  clusterLogging:
    enableTypes:
      ["api", "audit", "authenticator", "controllerManager", "scheduler"]

EOF

A Fargate profile (fargateProfiles) is needed to run resources on Fargate. Any resource that matches the criteria defined in fargateProfiles.selectors will run on Fargate.
In the example above, we used demo-dev, the namespace that application-related resources will use, and kube-system.
kube-system is included so that additional resources such as the aws-load-balancer-controller, which we will install later, can also run on Fargate.
If we leave out the kube-system namespace, we will get an error like the one below when installing the aws-load-balancer-controller.

ingress 0/2 nodes are available: 2 node(s) had taint {eks.amazonaws.com/compute-type: fargate}, that the pod didn't tolerate.
  • Now let's run the command below to create the cluster!
eksctl create cluster --config-file=eks-cluster-3az.yaml
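Cluster creation takes a while. Once it finishes, the Fargate profile defined above can be checked with eksctl:

eksctl get fargateprofile --cluster demo-eks -o yaml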

Connecting to EKS cluster

  • When we run the kubectl get svc command, we will get an output like the one below, which means that the EKS cluster is ready!
# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   172.20.0.1   <none>        443/TCP   43m

The values of CLUSTER-IP and AGE might be different, but don't worry.
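eksctl writes the kubeconfig for you, but if kubectl cannot reach the cluster at this point, the kubeconfig can be regenerated manually:

aws eks update-kubeconfig --name demo-eks --region $AWS_REGION
kubectl get svc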


Step (4) - Deploying the application!

  • First, let's create a Kubernetes namespace, where we will create resources for our application.

When we created the Fargate profile above, we put namespace: demo-dev in the fargateProfiles.selectors field.

kubectl create ns demo-dev

Installing AWS Load Balancer Controller add-on

  • The AWS Load Balancer Controller manages AWS Elastic Load Balancers (ELB) used by a Kubernetes cluster. This controller:

    • Provisions a new AWS ALB when a Kubernetes Ingress is created.
    • Provisions a new AWS NLB when a Kubernetes Service of type LoadBalancer is created.
  • Before installing the AWS Load Balancer Controller, we must set up an OIDC provider first.

  • Configuring an OIDC identity provider (IdP) for the EKS cluster allows you to use AWS IAM roles for Kubernetes service accounts, which requires an IAM OIDC provider in the cluster. Let's run the command below to associate one with the cluster.

eksctl utils associate-iam-oidc-provider --cluster demo-eks --approve
# 2022-08-23 16:04:39 [ℹ]  will create IAM Open ID Connect provider for cluster "demo-eks" in "ap-northeast-2"
# 2022-08-23 16:04:40 [✔]  created IAM Open ID Connect provider for cluster "demo-eks" in "ap-northeast-2"

Now we're ready to install the AWS Load Balancer Controller add-on! Install it by following the steps described in this documentation.
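For reference, the installation typically boils down to the steps below. This is a sketch, not the authoritative procedure: the controller version in the policy URL and the Helm values are assumptions, so follow the linked documentation for the exact, current steps. Note that on Fargate the region and vpcId values have to be passed explicitly, because the controller cannot read them from EC2 instance metadata.

# 1) IAM policy for the controller (the version in the URL is an assumption; use the one from the docs)
curl -o iam_policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.4.4/docs/install/iam_policy.json
aws iam create-policy --policy-name AWSLoadBalancerControllerIAMPolicy --policy-document file://iam_policy.json

# 2) Service account for the controller, bound to the policy via IRSA
eksctl create iamserviceaccount \
  --cluster demo-eks \
  --namespace kube-system \
  --name aws-load-balancer-controller \
  --attach-policy-arn arn:aws:iam::$(aws sts get-caller-identity --query Account --output text):policy/AWSLoadBalancerControllerIAMPolicy \
  --approve

# 3) Install the controller with Helm; pass region and vpcId explicitly on Fargate
helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=demo-eks \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller \
  --set region=$AWS_REGION \
  --set vpcId=$vpc_ID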

Deploying application

  • In my case, I was using an application that listens on port 8080 by default. I didn't want to change this default port number, so I created a simple deployment.yaml for the Kubernetes Deployment (a minimal sketch follows) and a yaml file for the Kubernetes Service, shown right after it.
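The original deployment.yaml is not central to this post, so here is only a hedged sketch of what it might look like; the container image and replica count are placeholders, and the app: demo label is what the Service and SecurityGroupPolicy later in this post select on.

cat << EOF > deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
  namespace: demo-dev
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: demo
          image: <YOUR_IMAGE_URI> # placeholder for your application image
          ports:
            - containerPort: 8080
EOF
kubectl apply -f deployment.yaml

And here is the Service manifest: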
apiVersion: v1
kind: Service
metadata:
  name: demo-dev-svc
  namespace: demo-dev
spec:
  selector:
    app: demo
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  type: NodePort
  • As you can see, the type of the Service is NodePort, and it should be NodePort when using Fargate on EKS.

  • Also, the Service will receive requests on port number 80, and forward them to the pods' port number 8080.

  • Lastly, let's create a Kubernetes Ingress, which will eventually create an AWS ALB with the help of AWS Load Balancer Controller add-on.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-dev-ingress
  namespace: demo-dev
  annotations:
    alb.ingress.kubernetes.io/load-balancer-name: demo-dev-lb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/subnets: ${PUBLIC_SUBNET_IDS}
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
    alb.ingress.kubernetes.io/certificate-arn: ${ACM_CERT_ARN}
    alb.ingress.kubernetes.io/security-groups: ${SECURITY_GROUP_FOR_ALB}
    alb.ingress.kubernetes.io/healthcheck-port: "8080"
    alb.ingress.kubernetes.io/healthcheck-path: /actuator/health
    alb.ingress.kubernetes.io/success-codes: "200"
spec:
  ingressClassName: alb
  rules:
    - host: alb-url.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-dev-svc
                port:
                  number: 80
  • In the spec, we make the ALB forward all requests to the demo-dev-svc Kubernetes Service.

  • You can find out more about the annotations in this document.

  • Note that alb.ingress.kubernetes.io/target-type should be set to ip when using Fargate.

  • Also, the ALB provisioned for this Ingress should receive traffic from the internet, so it should be placed in public subnets (alb.ingress.kubernetes.io/subnets: ${PUBLIC_SUBNET_IDS}) and be internet-facing (alb.ingress.kubernetes.io/scheme: internet-facing).

  • Now when we apply the new Ingress to Kubernetes, a new ALB will be created with the appropriate listener rules configured (a sketch of applying and verifying it follows this list).

  • But when we look at the ALB health checks on the target groups, they will fail with a 504 Gateway Timeout.
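For reference, assuming the Ingress manifest above is saved as ingress.yaml, applying it and checking that the controller provisioned the ALB looks roughly like this (the ADDRESS value is a made-up placeholder):

kubectl apply -f ingress.yaml
kubectl get ingress -n demo-dev
# NAME               CLASS   HOSTS         ADDRESS                                                    PORTS   AGE
# demo-dev-ingress   alb     alb-url.com   demo-dev-lb-1234567890.ap-northeast-2.elb.amazonaws.com   80      2m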

Resolving issue: 504 Gateway Timeout

  • This is because the default security group attached to the Fargate pods does not allow inbound requests on port 8080.

  • So we need to attach custom security groups to the Fargate pods, which is described in this documentation.

  • According to the documentation above, the new security group must:

    • Allow inbound requests on port 53 (TCP) from the EKS cluster's security group.
    • Allow inbound requests on port 53 (UDP) from the EKS cluster's security group.
  • Let's create a new security group (a CLI sketch follows), and apply the yaml file below.
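Here is a hedged sketch of creating that security group with the AWS CLI. The group name demo-dev-pod-sg and the variable names CLUSTER_SG and POD_SG are assumptions, and ${SECURITY_GROUP_FOR_ALB} is the same ALB security group placeholder used in the Ingress annotations; the port 8080 rule is what lets the ALB health checks reach the pods.

# The security group EKS created for the cluster
export CLUSTER_SG=$(aws eks describe-cluster --name demo-eks --query "cluster.resourcesVpcConfig.clusterSecurityGroupId" --output text)

# A new security group for the Fargate pods
export POD_SG=$(aws ec2 create-security-group --group-name demo-dev-pod-sg --description "Security group for demo-dev Fargate pods" --vpc-id $vpc_ID --query GroupId --output text)

# DNS (TCP/UDP 53) from the cluster security group, as the documentation requires
aws ec2 authorize-security-group-ingress --group-id $POD_SG --protocol tcp --port 53 --source-group $CLUSTER_SG
aws ec2 authorize-security-group-ingress --group-id $POD_SG --protocol udp --port 53 --source-group $CLUSTER_SG

# The application port, so the ALB health checks on port 8080 can reach the pods
aws ec2 authorize-security-group-ingress --group-id $POD_SG --protocol tcp --port 8080 --source-group ${SECURITY_GROUP_FOR_ALB}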

apiVersion: vpcresources.k8s.aws/v1beta1
kind: SecurityGroupPolicy
metadata:
  name: dev-sgp
  namespace: demo-dev
spec:
  podSelector:
    matchLabels:
      app: demo
  securityGroups:
    groupIds:
      - ${NEW_CREATED_SECURITY_GROUP_ID}
      - ${EKS_CLUSTER_SECURITY_GROUP_ID}
  • After applying the yaml file, we must restart all pods, since the new security group will not affect pods that are already running.
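One way to do that restart (a sketch, assuming the Deployment from the earlier sketch is named demo):

kubectl rollout restart deployment demo -n demo-dev
kubectl get pods -n demo-dev -w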

  • Now when we check the ALB health checks, we can see that the health checks on the target groups succeed and all targets are healthy.

Wrapping up

  • That's it! In this post I didn't want to focus on Kubernetes fundamentals, but rather to share how I troubleshot the issues I ran into during the deployment process.

  • Also, there are tons of documents to read when we run into issues deploying Kubernetes resources. I wanted to narrow them down to the ones we really have to read, which is why I embedded so many links to documentation in this post.

Extra

  • When we log in to the AWS Management Console with the root account and visit the EKS service page, we will see a warning like the one below.
Your current user or role does not have access to Kubernetes objects on this EKS cluster. This may be due to the current user or role not having Kubernetes RBAC permissions to describe cluster resources or not having an entry in the cluster’s auth config map.
  • If you want to view the resources in your EKS cluster, run the command below.
kubectl edit configmap aws-auth -n kube-system
  • And add the fields below at the same depth as mapRoles.
mapUsers: |
  - userarn: arn:aws:iam::[account_id]:root
    groups:
    - system:masters
  • After saving and quitting, you will be able to view all resources within your EKS cluster.

Oldest comments (4)

wuduhren

Hello I wanted to thank you for this post on the error part "ingress 0/2 nodes are available: 2 node(s) had taint". Helped me to debug a 2 weeks problem for our team, thank you!

Roy Ra

My pleasure!

Gangadhar_matta

how u resolved that error?

Gangadhar_matta

how u resolved it ?