Rishav Paul

Deploying an application to Amazon Elastic Kubernetes Service

So you woke up one morning and decided to learn Kubernetes. You dabbled a bit and discovered that it requires a lot of theoretical knowledge to get started, but you’re more of a doer.

I'm in the same boat as you. Unlike other blogs or tutorials where you only see the finished product working perfectly, this series is intended to document my real-life journey building microservices on EKS.

Over this series of blog posts, I plan on building a personal finance app; we'll see how far we get. I've named the app Finansy, which is Russian for finance.

Note on using AWS

Make sure to enable MFA on your account and set up billing alerts in AWS. This is a good YouTube video about billing.

Create a Kubernetes Cluster

We will use eksctl for managing our EKS clusters. This is the most popular interface I could find.

eksctl is a simple CLI tool for creating and managing clusters on EKS - Amazon's managed Kubernetes service for EC2. It is written in Go, uses CloudFormation, was created by Weaveworks and it welcomes contributions from the community.

Note that because eksctl uses CloudFormation, I'd highly recommend NOT using the AWS console to modify the resources it creates.

eksctl create cluster \
  --name finansy \
  --version 1.30 \
  --nodes 1 \
  --node-type t2.micro \
  --region us-west-2

It takes about 15 minutes for the cluster to initialize. Note that t2.micro is free-tier eligible; my goal is to keep costs down as much as possible.

We will discover later that t2.micro is not the right instance type. Use c5.large from the start to avoid the extra work.
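If you'd rather skip the node-group swap described later in this post, here is a minimal sketch of the same command with c5.large instead (all other flags unchanged):

eksctl create cluster \
  --name finansy \
  --version 1.30 \
  --nodes 1 \
  --node-type c5.large \
  --region us-west-2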

Configure the terminal to connect to the EKS cluster

Now that the cluster is created, we want to connect to it from the CLI.

aws eks update-kubeconfig --region us-west-2 --name finansy

This will enable us to use kubectl to interact with our Kubernetes cluster's control plane.

You might be wondering why we need another CLI when we already have eksctl. eksctl manages the EKS cluster itself (the control plane and node groups), but you can't configure Kubernetes resources with it. We need kubectl for that.

Note: here are the commands to inspect and switch the current kubectl context, in case you are connected to clusters both locally (minikube) and remotely (EKS).

kubectl config current-context
kubectl config get-contexts
kubectl config use-context <your-eks-context-name>
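To confirm that kubectl is now talking to the EKS cluster, a couple of standard verification commands (nothing EKS-specific here):

# Show the control plane endpoint kubectl is connected to
kubectl cluster-info

# List the worker nodes registered with the cluster
kubectl get nodes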

Create a Java service

We'll deploy a Java-based API built with Spring Boot to EKS. Here is a blog with the process in detail. Make sure you can access the API on port 8080.

I will share details about the web service in a separate blog post soon.

Generating a JAR and copying it to an infra-related project directory

Add the following build section to your pom.xml, asking Maven to generate a JAR file. Note that the copy configuration puts the JAR into a separate folder used for managing the k8s infra, but that's purely optional.

  <build>
    <plugins>
      <plugin>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-maven-plugin</artifactId>
        <executions>
          <execution>
            <goals>
              <goal>repackage</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-antrun-plugin</artifactId>
        <version>1.8</version>
        <executions>
            <execution>
                <phase>package</phase>
                <goals>
                    <goal>run</goal>
                </goals>
                <configuration>
                    <target>
                        <copy file="${project.build.directory}/${project.artifactId}.jar"
                              tofile="app/${project.artifactId}.jar"/>
                    </target>
                </configuration>
            </execution>
        </executions>
      </plugin>
    </plugins>

    <finalName>${project.artifactId}</finalName>
  </build>
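With this build configuration in place, packaging the project should produce the JAR and copy it into app/. Assuming the artifactId is portfolio-service, the usual Maven invocation is:

mvn clean package

The copied file should show up as app/portfolio-service.jar, which is the path the Dockerfile below expects.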

Create a Dockerfile

Filename: Dockerfile

# Use an official OpenJDK 21 runtime as a parent image
FROM openjdk:21

# Set the working directory in the container
WORKDIR /app

# Copy the executable JAR file to the container
COPY app/portfolio-service.jar /app/portfolio-service.jar

CMD ["java", "-jar", "/app/portfolio-service.jar"]

Note that we are simply building a Docker image from the JAR generated by Maven. You may want to run this container locally and make sure that the service works as expected.

docker build -t finansy/portfolio-service:1.1 .
docker run -d -p 8080:8080 --name portfolio-service finansy/portfolio-service:1.1

Run curl http://localhost:8080/<path> and ensure you get the expected response.

Upload Image to Amazon ECR

What is Amazon Elastic Container Registry (ECR)?

Push container images to Amazon ECR without installing or scaling infrastructure, and pull images using any management tool.

We would like to push our image to ECR so that EKS can pull it.

  • Go to the Amazon ECR console
  • Create a repository
  • Click the View push commands button in the repository details
  • Follow the instructions to authenticate Docker with the ECR registry
  • Push the Docker image (a sketch of these commands follows this list)
  • Copy the image URI and use it for the image field in the k8s.yaml we create below
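For reference, the push commands shown in the console typically look like the sketch below; replace <aws-account-id> with your account ID, and note that the repository name finansy/portfolio-service and the :latest tag match what the k8s.yaml further down expects.

# Authenticate Docker with the ECR registry in us-west-2
aws ecr get-login-password --region us-west-2 | \
  docker login --username AWS --password-stdin <aws-account-id>.dkr.ecr.us-west-2.amazonaws.com

# Tag the locally built image with the ECR repository URI and push it
docker tag finansy/portfolio-service:1.1 <aws-account-id>.dkr.ecr.us-west-2.amazonaws.com/finansy/portfolio-service:latest
docker push <aws-account-id>.dkr.ecr.us-west-2.amazonaws.com/finansy/portfolio-service:latest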

Kubernetes Concepts in Brief

Here’s a brief overview of Kubernetes concepts that will be useful for the rest of the post.

1. Kubernetes Pod

A Pod is the smallest and simplest Kubernetes object. It represents a single instance of a running process in your cluster. A Pod can encapsulate one or more containers (e.g., Docker containers) that share the same network namespace, IP address, and storage volumes. Containers in a Pod are typically tightly coupled and need to share resources, such as storage or networking.

Key Characteristics:

  • Single IP Address: Each Pod has a unique IP address.
  • Shared Storage: Containers in the same Pod share storage volumes.
  • Lifecycle: Pods are designed to be ephemeral. They can be created and destroyed dynamically based on the needs of the application.
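For illustration only (below we will manage Pods through a Deployment rather than creating them directly), a minimal Pod manifest for our container would look roughly like this:

apiVersion: v1
kind: Pod
metadata:
  name: portfolio-service-pod
  labels:
    app: portfolio-service
spec:
  containers:
    - name: portfolio-service
      image: <aws-account-id>.dkr.ecr.us-west-2.amazonaws.com/finansy/portfolio-service:latest
      ports:
        - containerPort: 8080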

2. Kubernetes Deployment

A Deployment is a higher-level abstraction that manages the deployment and scaling of Pods. It ensures that a specified number of Pods are running and updates them in a controlled manner.

Key Functions:

  • Scaling: Easily scale the number of Pods up or down.
  • Rolling Updates: Perform rolling updates to update Pods without downtime.
  • Rollback: Revert to a previous version if an update fails.

Typical Use: You use Deployments to manage stateless applications and ensure that the desired state of the application is maintained.
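As a quick sketch of what that looks like in practice (using portfolio-service-deployment, the Deployment name we create later in this post):

# Scale the Deployment up or down
kubectl scale deployment portfolio-service-deployment --replicas=5

# Watch a rolling update make progress
kubectl rollout status deployment/portfolio-service-deployment

# Roll back to the previous revision if an update misbehaves
kubectl rollout undo deployment/portfolio-service-deployment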

3. Kubernetes Service

A Service is an abstraction that defines a logical set of Pods and a policy by which to access them. It provides a stable endpoint (IP address and DNS name) for Pods, which can change over time.

Key Functions:

  • Service Discovery: Provides a stable DNS name for a set of Pods.
  • Load Balancing: Distributes incoming traffic among the Pods that are part of the Service.
  • Port Forwarding: Maps a port on the Service to a port on the Pods.

Types:

  • ClusterIP: Exposes the Service on a cluster-internal IP. Only reachable within the cluster.
  • NodePort: Exposes the Service on each node’s IP at a static port. Allows external access.
  • LoadBalancer: Provisioned by a cloud provider to expose the Service externally using a cloud-based load balancer.
  • Headless Service: Does not allocate a cluster IP. Useful for stateful applications.
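A handy side effect of the Service abstraction: you can test any Service from your own machine without exposing it externally by port-forwarding through kubectl. A sketch using the Service name and API path we use later in this post:

# Forward local port 8080 to port 80 of the Service
kubectl port-forward svc/portfolio-service 8080:80

# In another terminal
curl http://localhost:8080/api/v1/user-assets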

4. Kubernetes Load Balancer

A Load Balancer is not a Kubernetes resource itself but is often used in conjunction with Kubernetes Services. When you create a Service of type LoadBalancer, Kubernetes provisions a cloud-based load balancer (if supported by your cloud provider) to manage external traffic.

Key Functions:

  • External Access: Provides a single point of access to the Services from outside the Kubernetes cluster.
  • Traffic Distribution: Distributes incoming traffic across multiple Pods to balance the load.

Relationship Between Load Balancer and Service

  • Load Balancer: Is a resource provided by cloud providers (e.g., AWS ELB, GCP Load Balancer) that handles incoming traffic from outside the cluster and distributes it to the appropriate endpoints.
  • Service (Type: LoadBalancer): When you create a Kubernetes Service of type LoadBalancer, Kubernetes interacts with the cloud provider to create and configure the load balancer. The Service then uses this load balancer to route external traffic to the Pods that are part of the Service.

In summary, while the Service provides a stable internal interface and may manage traffic within the cluster, the Load Balancer (when used with a LoadBalancer Service type) provides an external interface and handles the distribution of traffic from outside the cluster to the appropriate Pods.
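Once the LoadBalancer Service below is created, you can inspect the provisioned load balancer (its hostname, target port, and endpoints) with a standard describe command:

kubectl describe svc portfolio-service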

Create K8S configuration

Create a new k8s.yaml file.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: portfolio-service-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: portfolio-service
  template:
    metadata:
      labels:
        app: portfolio-service
    spec:
      containers:
        - name: portfolio-service
          image: <aws-account-id>.dkr.ecr.us-west-2.amazonaws.com/finansy/portfolio-service:latest
          ports:
            - containerPort: 8080

---
apiVersion: v1
kind: Service
metadata:
  name: portfolio-service
spec:
  selector:
    app: portfolio-service
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  type: LoadBalancer

This is a good time to revisit the Deployment and LoadBalancer Kubernetes concepts covered above.

Update the cluster with our Service and Deployment configuration.

➜  portfolio-service kubectl apply -f k8s.yaml 
deployment.apps/portfolio-service-deployment created
service/portfolio-service created

Get Service Endpoint

➜  portfolio-service kubectl get svc
NAME                TYPE           CLUSTER-IP      EXTERNAL-IP                                                               PORT(S)        AGE
kubernetes          ClusterIP      10.100.0.1      <none>                                                                    443/TCP        26m
portfolio-service   LoadBalancer   10.100.177.55   a39b306d52a8b44fbbedcf670da443f6-1356185046.us-west-2.elb.amazonaws.com   80:31295/TCP   4m14s


Get Pod Status

➜  portfolio-service kubectl get pods
NAME                                            READY   STATUS    RESTARTS   AGE
portfolio-service-deployment-6b6b4b6c4c-6dm8w   0/1     Pending   0          5m6s
portfolio-service-deployment-6b6b4b6c4c-jtpnq   0/1     Pending   0          5m6s
portfolio-service-deployment-6b6b4b6c4c-r9xps   0/1     Pending   0          5m6s

They should be in the Running state, but they're stuck in Pending. Let's look at our EKS page in AWS.

Why are the pods in the Pending state?

It seems we exceeded the maximum number of IPs (and therefore pods) per node.
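You can see the scheduler's complaint yourself by describing one of the Pending pods (using a pod name from the output above) and reading the Events section at the bottom:

kubectl describe pod portfolio-service-deployment-6b6b4b6c4c-6dm8w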

Found this interesting piece of information on Stack Overflow.

The formula for defining the maximum number of Pods per EC2 Node instance is as follows:

N * (M-1) + 2

Where:

N is the number of Elastic Network Interfaces (ENI) of the instance type

M is the number of IP addresses per ENI

So for a t2.micro (which, like the t3.micro discussed in the original answer, has 2 ENIs with 2 IPs each), the number of pods that can be deployed is:

2 * (2-1) + 2 = 4 pods, and that capacity is already used up by pods in the kube-system namespace

Here you can find the calculated max number of pods for each instance type
https://github.com/aws/amazon-vpc-cni-k8s/blob/master/misc/eni-max-pods.txt

Identifying the right instance type for our use case

We probably want to be able to run 15-20 pods per instance.

Looking at the available instance types, c5.large and m5.large are both suitable, as they provide enough IPs to support 15-20 pods and are relatively affordable. Between the two, c5.large is slightly cheaper.

c5.large: up to 29 pods, approximately $0.085 per hour
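Applying the same formula to c5.large, which has 3 ENIs with 10 IPv4 addresses each (worth double-checking against the eni-max-pods.txt file linked above):

3 * (10 - 1) + 2 = 29 pods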

We'll have to go ahead and update the nodes in our cluster.

Creating a new node group in the cluster

➜  portfolio-service eksctl create nodegroup \
  --cluster finansy \
  --name c5LargeNg \
  --node-type c5.large \
  --nodes 1 \
  --nodes-min 1 \
  --nodes-max 1


2024-09-02 12:02:22 [ℹ]  will use version 1.30 for new nodegroup(s) based on control plane version
2024-09-02 12:02:24 [ℹ]  nodegroup "c5LargeNg" will use "" [AmazonLinux2/1.30]
2024-09-02 12:02:24 [ℹ]  2 existing nodegroup(s) (new-node-group,ng-9e92b41d) will be excluded
2024-09-02 12:02:24 [ℹ]  1 nodegroup (c5LargeNg) was included (based on the include/exclude rules)
2024-09-02 12:02:24 [ℹ]  will create a CloudFormation stack for each of 1 managed nodegroups in cluster "finansy"
2024-09-02 12:02:24 [ℹ]
2 sequential tasks: { fix cluster compatibility, 1 task: { 1 task: { create managed nodegroup "c5LargeNg" } }
}
2024-09-02 12:02:24 [ℹ]  checking cluster stack for missing resources
2024-09-02 12:02:25 [ℹ]  cluster stack has all required resources
2024-09-02 12:02:25 [ℹ]  building managed nodegroup stack "eksctl-finansy-nodegroup-c5LargeNg"
2024-09-02 12:02:25 [ℹ]  deploying stack "eksctl-finansy-nodegroup-c5LargeNg"
2024-09-02 12:02:25 [ℹ]  waiting for CloudFormation stack "eksctl-finansy-nodegroup-c5LargeNg"
2024-09-02 12:02:55 [ℹ]  waiting for CloudFormation stack "eksctl-finansy-nodegroup-c5LargeNg"
2024-09-02 12:03:52 [ℹ]  waiting for CloudFormation stack "eksctl-finansy-nodegroup-c5LargeNg"
2024-09-02 12:05:17 [ℹ]  waiting for CloudFormation stack "eksctl-finansy-nodegroup-c5LargeNg"
2024-09-02 12:05:17 [ℹ]  no tasks
2024-09-02 12:05:17 [✔]  created 0 nodegroup(s) in cluster "finansy"
2024-09-02 12:05:18 [ℹ]  nodegroup "c5LargeNg" has 1 node(s)
2024-09-02 12:05:18 [ℹ]  node "ip-192-168-3-155.us-west-2.compute.internal" is ready
2024-09-02 12:05:18 [ℹ]  waiting for at least 1 node(s) to become ready in "c5LargeNg"
2024-09-02 12:05:18 [ℹ]  nodegroup "c5LargeNg" has 1 node(s)
2024-09-02 12:05:18 [ℹ]  node "ip-192-168-3-155.us-west-2.compute.internal" is ready
2024-09-02 12:05:18 [✔]  created 1 managed nodegroup(s) in cluster "finansy"
2024-09-02 12:05:18 [ℹ]  checking security group configuration for all nodegroups
2024-09-02 12:05:18 [ℹ]  all nodegroups have up-to-date cloudformation templates

Delete the old node group

➜  ~ eksctl delete nodegroup --cluster finansy --name ng-9e92b41d
2024-09-02 12:06:03 [ℹ]  1 nodegroup (ng-9e92b41d) was included (based on the include/exclude rules)
2024-09-02 12:06:03 [ℹ]  will drain 1 nodegroup(s) in cluster "finansy"
2024-09-02 12:06:03 [ℹ]  starting parallel draining, max in-flight of 1
2024-09-02 12:06:03 [ℹ]  cordon node "ip-192-168-88-252.us-west-2.compute.internal"
2024-09-02 12:06:30 [✔]  drained all nodes: [ip-192-168-88-252.us-west-2.compute.internal]
2024-09-02 12:06:30 [✖]  failed to acquire semaphore while waiting for all routines to finish: context canceled
2024-09-02 12:06:30 [ℹ]  will delete 1 nodegroups from cluster "finansy"
2024-09-02 12:06:30 [ℹ]  1 task: { 1 task: { delete nodegroup "ng-9e92b41d" [async] } }
2024-09-02 12:06:31 [ℹ]  will delete stack "eksctl-finansy-nodegroup-ng-9e92b41d"
2024-09-02 12:06:31 [✔]  deleted 1 nodegroup(s) from cluster "finansy"
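After the old node group is drained and deleted, the previously Pending pods should get rescheduled onto the c5.large node. Worth a quick check before digging into logs:

# The c5.large node should be the only one listed
kubectl get nodes

# The portfolio-service pods should now be Running
kubectl get pods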

Check the Pod logs

➜  ~ kubectl logs portfolio-service-deployment-6b6b4b6c4c-cslnw

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/

 :: Spring Boot ::                (v3.3.2)

2024-09-02T19:08:16.805Z  INFO 1 --- [Finansy Portfolio Service] [           main] f.p.s.FinansyPortfolioServiceApplication : Starting FinansyPortfolioServiceApplication using Java 21 with PID 1 (/app/portfolio-service.jar started by root in /app)
2024-09-02T19:08:16.828Z  INFO 1 --- [Finansy Portfolio Service] [           main] f.p.s.FinansyPortfolioServiceApplication : No active profile set, falling back to 1 default profile: "default"
2024-09-02T19:08:20.090Z  INFO 1 --- [Finansy Portfolio Service] [           main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data R2DBC repositories in DEFAULT mode.
2024-09-02T19:08:20.940Z  INFO 1 --- [Finansy Portfolio Service] [           main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 828 ms. Found 1 R2DBC repository interface.
2024-09-02T19:08:24.691Z DEBUG 1 --- [Finansy Portfolio Service] [           main] s.w.r.r.m.a.RequestMappingHandlerMapping : 5 mappings in 'requestMappingHandlerMapping'
2024-09-02T19:08:24.728Z DEBUG 1 --- [Finansy Portfolio Service] [           main] o.s.w.r.handler.SimpleUrlHandlerMapping  : Patterns [/webjars/**, /**] in 'resourceHandlerMapping'
2024-09-02T19:08:24.824Z DEBUG 1 --- [Finansy Portfolio Service] [           main] o.s.w.r.r.m.a.ControllerMethodResolver   : ControllerAdvice beans: 0 @ModelAttribute, 0 @InitBinder, 1 @ExceptionHandler
2024-09-02T19:08:24.907Z DEBUG 1 --- [Finansy Portfolio Service] [           main] o.s.w.s.adapter.HttpWebHandlerAdapter    : enableLoggingRequestDetails='false': form data and headers will be masked to prevent unsafe logging of potentially sensitive data
2024-09-02T19:08:26.568Z  INFO 1 --- [Finansy Portfolio Service] [           main] o.s.b.web.embedded.netty.NettyWebServer  : Netty started on port 8080 (http)
2024-09-02T19:08:26.615Z  INFO 1 --- [Finansy Portfolio Service] [           main] f.p.s.FinansyPortfolioServiceApplication : Started FinansyPortfolioServiceApplication in 11.949 seconds (process running for 13.818)

Get Service Endpoint Again

➜  ~ kubectl get svc

NAME                TYPE           CLUSTER-IP      EXTERNAL-IP                                                               PORT(S)        AGE
kubernetes          ClusterIP      10.100.0.1      <none>                                                                    443/TCP        88m
portfolio-service   LoadBalancer   10.100.177.55   a39b306d52a8b44fbbedcf670da443f6-1356185046.us-west-2.elb.amazonaws.com   80:31295/TCP   66m

Query the endpoint

➜  ~  curl http://a39b306d52a8b44fbbedcf670da443f6-1356185046.us-west-2.elb.amazonaws.com/api/v1/user-assets
[]%

The server is up and running!

Delete the cluster

Once you're done experimenting, delete the cluster to avoid incurring any additional costs.

eksctl delete cluster --name finansy
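To double-check that nothing is left behind, you can list the remaining clusters (and keep an eye on the CloudFormation console, since that's where eksctl does its work):

eksctl get cluster --region us-west-2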
