Michael Levan

Kubernetes, EKS, and AWS ECS - Is It All The Same Thing?

The orchestration wars between Kubernetes, Mesos, and Docker Swarm were a real thing. The whole idea behind orchestration, and choosing how to manage containerized workloads, was at the top of everyone’s mind. With how fast containerization was being adopted, it was a topic every engineer truly needed to consider.

In the end, Kubernetes came out on top (which doesn’t mean the other solutions weren’t good because they very much are).

However, that doesn’t mean the war is officially over.

In this blog post, you’ll learn from a theoretical and hands-on perspective what orchestration is, what Elastic Container Service (ECS) is, and how to implement both AWS EKS and ECS.

What’s Orchestration?

When most engineers or technical leaders think about orchestration, what do they usually think about? Kubernetes.

With that, it’s important to take a step back and actually think about what orchestration is.

Orchestration gives you the ability to:

  • Scale containerized workloads.
  • Automatically schedule where containerized workloads should run.
  • Self-heal workloads when a container goes down.

Essentially, orchestration manages the “infrastructure” side of containers for you so you don’t have to.

Before orchestration, you could use containers, but they couldn’t do much beyond running one workload and exiting; there was no guarantee that a container would keep running for a long time. Containers were very good for testing an application or running a short-lived workload and then shutting down, but that was about it. Once orchestration came along, you could have long-running containers running workloads. If a container went down, a new container would get created automatically.

Thinking about the above, you must ask yourself - is Kubernetes the only way to “orchestrate” containers?

Before wrapping up this section, let’s not forget that orchestration also means orchestrating workloads that aren’t containerized. For example, with HashiCorp Nomad, you can orchestrate workloads that are containerized and that aren’t containerized.
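
As a quick illustration, here’s a minimal Nomad job sketch (all names are hypothetical) that uses the exec driver to run a plain host binary, no container image involved:

job "batch-report" {
  datacenters = ["dc1"]

  group "report" {
    task "generate" {
      # The "exec" driver runs a raw binary on the host instead of a
      # container image, so the workload itself isn't containerized.
      driver = "exec"

      config {
        command = "/usr/local/bin/generate-report"
        args    = ["--output", "/tmp/report.txt"]
      }

      resources {
        cpu    = 500 # MHz
        memory = 256 # MB
      }
    }
  }
}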

What’s ECS?

Elastic Container Service (ECS) is a way to automatically scale and optimize containerized workloads. If you want something simpler than taking on the beast that is Kubernetes, ECS is a great middle ground with very similar functionality, capabilities, and outcomes.

ECS scales based on load and performance, and it ensures that containers are always up and running. You can go either the Serverless route or the EC2 route, both of which you’ll learn about coming up.
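
To make the scaling piece concrete, here’s a hedged Terraform sketch of ECS service auto scaling: register a service as a scalable target and track average CPU. The cluster and service names are just placeholders:

resource "aws_appautoscaling_target" "ecs" {
  service_namespace  = "ecs"
  resource_id        = "service/levanecscluster/nginxservice"
  scalable_dimension = "ecs:service:DesiredCount"
  min_capacity       = 2
  max_capacity       = 10
}

resource "aws_appautoscaling_policy" "cpu" {
  name               = "cpu-target-tracking"
  policy_type        = "TargetTrackingScaling"
  service_namespace  = aws_appautoscaling_target.ecs.service_namespace
  resource_id        = aws_appautoscaling_target.ecs.resource_id
  scalable_dimension = aws_appautoscaling_target.ecs.scalable_dimension

  target_tracking_scaling_policy_configuration {
    # Add or remove tasks to keep average CPU at roughly 60%.
    target_value = 60

    predefined_metric_specification {
      predefined_metric_type = "ECSServiceAverageCPUUtilization"
    }
  }
}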

When creating an ECS cluster, you have three options:

  • Network only
  • Linux-based
  • Windows-based

The Network Only option deploys ECS using an AWS Fargate profile, which is essentially “Serverless”. It works for both Linux and Windows depending on your workload. What this means is that there are no EC2 instances running Linux or Windows that you have to manage.

Linux-based ECS runs the workloads on Linux EC2 instances; Windows-based ECS runs them on Windows Server EC2 instances. Luckily, for both of these options auto-scaling is available, so you don’t have to worry about scaling the EC2 instances up or down yourself.
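
If you go the Fargate route, that choice shows up in Terraform as a capacity provider on the cluster. A minimal sketch, assuming a cluster named levanecscluster like the one created later in this post:

resource "aws_ecs_cluster_capacity_providers" "serverless" {
  cluster_name       = "levanecscluster"
  capacity_providers = ["FARGATE"]

  # Send all tasks to Fargate by default, so no EC2 instances are needed.
  default_capacity_provider_strategy {
    capacity_provider = "FARGATE"
    weight            = 100
  }
}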

The two biggest differences are that Fargate can be a bit more expensive depending on your workload, and Fargate does not currently support daemon-style scheduling (the ECS equivalent of a Kubernetes DaemonSet, which runs a copy of a workload on each worker node).

What’s EKS?

Before jumping into EKS, let’s answer the question - What is Kubernetes?

Well, the answer could fill several books on its own, so let’s keep it brief. As mentioned in the What’s Orchestration? section, Kubernetes is a way to schedule, self-heal, and manage containerized workloads. Yes, way more goes into it, but that’s the overall gist.

EKS is simply a way to run Kubernetes as a Managed Service. When you bootstrap Kubernetes with, for example, Kubeadm on-prem, there’s a lot that goes into it: the overall networking, infrastructure, operating systems, licenses, and management. With EKS, a lot of that is abstracted away from you. You don’t have to worry about managing the Control Plane. Instead, you just worry about the worker nodes, which can run as EC2 instances or on Fargate.
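
For the Fargate flavor of worker nodes, a hedged Terraform sketch looks like the following; the profile name is made up, and the pod execution role (aws_iam_role.fargate_pod_execution) is a hypothetical role you’d create separately:

resource "aws_eks_fargate_profile" "serverless_nodes" {
  cluster_name         = "k8squickstart-cluster"
  fargate_profile_name = "serverless-nodes"

  # Hypothetical IAM role that lets Fargate pull images and run Pods.
  pod_execution_role_arn = aws_iam_role.fargate_pod_execution.arn

  # Fargate profiles require private subnets.
  subnet_ids = [var.subnet_id_1, var.subnet_id_2]

  selector {
    namespace = "default"
  }
}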

EKS is a great solution if you’re in AWS already and want to run workloads in Kubernetes without having a Kubernetes cluster on-prem.

As with all cloud-based Managed Kubernetes Services, you do lose direct access to the Control Plane. Although that may not be a big deal for some, other organizations may want that control. It’s rare, but something to keep in mind.

The Difference?

As you read through the sections above, you probably thought to yourself that ECS and EKS, along with Kubernetes running in every other cloud, appear to be doing the same thing.

The answer is yes. Functionally, orchestration is orchestration and at the end of the day, it’s doing one job - orchestrating containers. However, there are differences.

Let’s think about three key differences:

  • How you can run workloads
  • Where you can run workloads
  • Overall support

First, let’s talk about how you can run workloads. Although ECS is great for orchestration, at the time of writing it only supports the Docker runtime. While this is fine for almost all organizations, it’s still a hurdle that you’ll have to think about. If you want to use something like Podman, you won’t be able to. Because ECS is a specific AWS service, you’re limited to its capabilities.

Where you can run these workloads also matters a lot. ECS is dedicated to AWS. That means if you decide you don’t want to use it anymore, you have to think about moving to another solution. If you run Kubernetes in EKS and want to, for example, switch to AKS, it’s not a huge deal: some Infrastructure-as-Code refactoring, and perhaps a change to where you’re pulling the container image from, but that’s about it. The good news is that if you use ECS, the workloads are already containerized; you just have to rebuild for Kubernetes. It’s not like you’re going from applications running on bare metal to Kubernetes. Utilizing ECS, you’re halfway there. ECS is also a better fit for smaller teams, because Kubernetes is a beast in itself and may not be needed right away.

Last but not least is the overall support. Kubernetes has an extremely large ecosystem, with a lot of organizations and engineers using it. Because of that, finding an answer to a Kubernetes problem is probably going to be easier than finding an answer to an ECS problem. That said, ECS is very popular in the AWS space, so finding solutions won’t be impossible; it’s just not as widely used as Kubernetes overall.

Configuring ECS and EKS

Now that you’ve gone through the theory, let’s take a look at how to configure ECS and EKS.

There’s a pro and a con to configuring EKS and ECS. The pro is that there are multiple ways to configure them. The con is that there are multiple ways to configure them (see what I did there?).

Because of that, let’s go the Terraform route.

If you don’t use Terraform, that’s okay. Do a quick Google search. Something like “how to configure EKS with X tool”. For example - “how to configure EKS with the AWS CLI”.

There’s a ton of information out there about different configuration methods.

ECS

First, specify the ECS cluster that you want to create.

resource "aws_ecs_cluster" "levancluster" {
  name = "levanecscluster"

  setting {
    name  = "containerInsights"
    value = "enabled"
  }
}

Next, specify the task definition. A task definition describes the containerized workload that you want to run in ECS. This example goes the Fargate (Serverless) route.

resource "aws_ecs_task_definition" "nginxapp" {
  family                   = "nginxapptask"
  cpu                      = 1024
  memory                   = 2048

  container_definitions = <<DEFINITION
[
  {
    "image": "nginx:latest",
    "name": "nginx",
    "networkMode": "awsvpc",
    "portMappings": [
      {
        "containerPort": 80,
        "hostPort": 80
      }
    ]
  }
]
DEFINITION
}

The last step is to specify an ECS Service. An ECS Service lets you specify how many copies of your task definition (the containerized workload) you want to run in the ECS cluster, and it ties the task definition to the cluster you created.

resource "aws_ecs_service" "ecsservice" {
  name            = "nginxservice"
  cluster         = "levanecscluster"
  task_definition = aws_ecs_task_definition.nginxapp.arn
  desired_count   = 2

  depends_on = [
    aws_ecs_task_definition.nginxapp
  ]
}
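
With these three resources in place, a standard terraform init and terraform apply is enough to create the cluster, register the task definition, and launch the service.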

EKS

First, configure the Terraform backend config and provider.

terraform {
  backend "s3" {
    bucket = "name_of_bucket"
    key    = "eks-terraform.tfstate"
    region = "us-east-1"
  }
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

Next, create the IAM Role for EKS to have access to the appropriate resources.


resource "aws_iam_role" "eks-iam-role" {
  name = "eks-iam-role"

  path = "/"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "eks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
}

Attach the IAM policies for EKS and the container registry to the IAM role.


resource "aws_iam_role_policy_attachment" "AmazonEKSClusterPolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = aws_iam_role.eks-iam-role.name
}
resource "aws_iam_role_policy_attachment" "AmazonEC2ContainerRegistryReadOnly-EKS" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.eks-iam-role.name
}

## Create the EKS cluster
resource "aws_eks_cluster" "k8squickstart-eks" {
  name = "k8squickstart-cluster"
  role_arn = aws_iam_role.eks-iam-role.arn

  #enabled_cluster_log_types = ["api", "audit", "scheduler", "controllerManager"]

  vpc_config {
    subnet_ids = [var.subnet_id_1, var.subnet_id_2]
  }

  depends_on = [
    aws_iam_role.eks-iam-role,
  ]
}

Next, create the IAM role for the Worker Nodes.


resource "aws_iam_role" "workernodes" {
  name = "eks-node-group-example"

  assume_role_policy = jsonencode({
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Principal = {
        Service = "ec2.amazonaws.com"
      }
    }]
    Version = "2012-10-17"
  })
}

Attach the appropriate policies to the worker nodes. You have to do this because the policies needed for EKS are different from the policies needed for the worker nodes, which are EC2 instances.

resource "aws_iam_role_policy_attachment" "AmazonEKSWorkerNodePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = aws_iam_role.workernodes.name
}

resource "aws_iam_role_policy_attachment" "AmazonEKS_CNI_Policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = aws_iam_role.workernodes.name
}

resource "aws_iam_role_policy_attachment" "EC2InstanceProfileForImageBuilderECRContainerBuilds" {
  policy_arn = "arn:aws:iam::aws:policy/EC2InstanceProfileForImageBuilderECRContainerBuilds"
  role       = aws_iam_role.workernodes.name
}

resource "aws_iam_role_policy_attachment" "AmazonEC2ContainerRegistryReadOnly" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.workernodes.name
}

resource "aws_iam_role_policy_attachment" "CloudWatchAgentServerPolicy-eks" {
  policy_arn = "arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy"
  role       = aws_iam_role.workernodes.name
}

Lastly, create the node group resource for the worker nodes.

resource "aws_eks_node_group" "worker-node-group" {
  cluster_name    = aws_eks_cluster.k8squickstart-eks.name
  node_group_name = "k8squickstart-workernodes"
  node_role_arn   = aws_iam_role.workernodes.arn
  subnet_ids      = [var.subnet_id_1, var.subnet_id_2]
  instance_types = ["t3.xlarge"]

  scaling_config {
    desired_size = var.desired_size
    max_size     = var.max_size
    min_size     = var.min_size
  }

  depends_on = [
    aws_iam_role_policy_attachment.AmazonEKSWorkerNodePolicy,
    aws_iam_role_policy_attachment.AmazonEKS_CNI_Policy,
    #aws_iam_role_policy_attachment.AmazonEC2ContainerRegistryReadOnly,
  ]
}
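
Note that this configuration references a few input variables (the subnet IDs and node group sizing) that need to be declared somewhere, such as a variables.tf. A minimal sketch follows; the subnet IDs are account-specific, and the sizing defaults are only examples.

variable "subnet_id_1" {
  type = string
}

variable "subnet_id_2" {
  type = string
}

variable "desired_size" {
  type    = number
  default = 2
}

variable "max_size" {
  type    = number
  default = 3
}

variable "min_size" {
  type    = number
  default = 1
}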

Conclusion

At the end of the day, Kubernetes/EKS and ECS are both great options. You can’t really go wrong either way from an orchestration perspective. They both do the same thing 95% of the way. The biggest thing you need to think about is how large your workloads are going to become and if you’re comfortable being locked into a specific AWS service.

Top comments (3)

Joe Auty

I think another key distinction, although maybe more is possible in ECS than I give it credit for, is centralization and access control. Running Kubernetes, DevOps/SRE teams can run a single test cluster and prod cluster in each desired region, and separate access to specific projects and teams via namespaces and RBAC.

Michael Levan

RBAC would be handled the same way for ECS as EKS - AWS IAM.

In terms of single clusters (single tenancy), it can be done the same way with ECS.

Joe Auty

I stand corrected! Anyway, happy to be connected with you here!

If you're interested, I'd love your take on a project of mine, redactics.com. Just curious what you think about our approach from an engineering perspective, including how we utilize Kubernetes and Helm. No worries if you are busy or uninterested, but I can't resist asking because we're at the point where feedback is incredibly valuable.