
mariomerco for AWS Community Builders


5 Reasons Why You Should Learn EKS by Practicing

Have you ever been in the position of learning some new tool in tech, thought "this is awesome!", and then hit multiple issues when you started applying it that actually made things harder? Well, if you've been there, you definitely work in IT!

It's no secret that information technologies evolve constantly and very quickly, making things better and faster, but also sometimes a bit overwhelming. And beyond that, this scenario can happen in many other domains of knowledge and life experiences!

Now, landing closer to our topic, Kubernetes on AWS: there are really good examples, documentation, labs, and exercises out there that can get you started with new knowledge. They hand you the tools, like Lego pieces, to build your own solution, whether from scratch or from a baseline. This is great, and it works just like that for most pieces of IT.

But there are times when you really need to take a project (even a small one) and walk it through Kubernetes by hand, because in theory you can ignore things that are necessary in practice, and only practice will fill those gaps. In other words, learning to ride a bicycle from video tutorials is not the same as riding one. So here are 5 reasons why you should learn EKS by practicing.

1. Kubernetes is complicated - too many moving pieces


While Kubernetes is becoming more and more of a standard for container orchestration, it is also true that running it for production workloads is challenging. Kubernetes is already a set of multiple components as a baseline; then start counting as you deploy pods, configmaps, secrets, services, and a long etcetera (not even talking about CRDs…). So before committing something to production, you need to test not only the app itself, but also its infrastructure deployed in Kubernetes.
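
To get a feel for how quickly the pieces pile up, here is a minimal sketch (not from the original post) that uses the official `kubernetes` Python client to count a few object types in one namespace; the namespace name `demo` is just an assumption:

```python
# pip install kubernetes
from kubernetes import client, config

# Assumes a working kubeconfig pointing at your EKS cluster
config.load_kube_config()
v1 = client.CoreV1Api()

namespace = "demo"  # hypothetical namespace, adjust to your own
for kind, lister in [
    ("Pods", v1.list_namespaced_pod),
    ("ConfigMaps", v1.list_namespaced_config_map),
    ("Secrets", v1.list_namespaced_secret),
    ("Services", v1.list_namespaced_service),
]:
    # Each of these is a separate moving piece you have to test and maintain
    print(f"{kind}: {len(lister(namespace).items)}")
```

Even a "hello world" app usually lights up several of these counters at once, which is exactly why testing the infrastructure matters as much as testing the app.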

2. Networking needs attention


As you start deploying your apps (especially if they talk to each other), you will immediately face DNS, IPs, load balancing, and so on. Although Kubernetes handles most of this with simple concepts, your underlying infrastructure requires well-established networking.
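
As a small illustration of what that looks like in practice, the hedged sketch below uses the `kubernetes` Python client to look at a Service's cluster IP and the pod IPs behind it; the service and namespace names are hypothetical:

```python
# pip install kubernetes
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Hypothetical names, adjust to your own app
name, namespace = "orders-api", "demo"

# A Service gets a stable virtual IP and a DNS name like
# <service>.<namespace>.svc.cluster.local, resolvable from inside the cluster
svc = v1.read_namespaced_service(name, namespace)
print("ClusterIP:", svc.spec.cluster_ip)

# The Endpoints object lists the pod IPs the Service load-balances across
eps = v1.read_namespaced_endpoints(name, namespace)
for subset in eps.subsets or []:
    for addr in subset.addresses or []:
        print("Pod IP:", addr.ip)
```

Every one of those pod IPs comes out of your VPC subnets on EKS, so IP planning and subnet sizing are part of the exercise too.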

3. IAM permissions are very granular


If developers have wide access to the AWS account, they'll code their apps using the AWS SDK probably without worrying about permissions… until it reaches Kubernetes! The containers will try to request access to the AWS API and, if the IAM role wrapping the application is not set up with the proper permissions, they will simply fail.

There are multiple solutions for this, like Kube2IAM, KIAM, and IAM Roles for Service Accounts; since we are on AWS and EKS (running on EC2 instances), the latter is my go-to 😎.
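
To make that concrete, here is a hedged sketch of what a container configured with IAM Roles for Service Accounts could check at runtime; the bucket name and the exact permission being probed are assumptions you would swap for whatever your app really needs:

```python
# pip install boto3
import os
import boto3
from botocore.exceptions import ClientError

# With IAM Roles for Service Accounts, EKS injects these two variables into
# the pod, and boto3 picks them up automatically via its credential chain.
print("Role ARN:  ", os.environ.get("AWS_ROLE_ARN"))
print("Token file:", os.environ.get("AWS_WEB_IDENTITY_TOKEN_FILE"))

# Hypothetical check: does the role attached to this service account
# actually allow listing the bucket the app depends on?
try:
    boto3.client("s3").list_objects_v2(Bucket="my-app-bucket", MaxKeys=1)
    print("IAM permissions look good")
except ClientError as err:
    print("Missing permission:", err.response["Error"]["Code"])
```

Running a probe like this in a test environment surfaces the AccessDenied errors long before your users do.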

4. Automation also requires testing


Automation is, in most cases, related to coding, and coding is also related to bugs. So in this case, I would replace the word "practice" with "test". The CI/CD workflows you might want to create are, in the end, code running somewhere, and they can come with issues. That's why having multiple environments (at least a TEST environment) before production is important, so you can actually test how the provisioning of your resources is going to happen.

Kubernetes also falls into this section, because it automates the orchestration of containers based on the configurations you provide. But if your configs are wrong, they can lead to a deployment issue or a misconfigured environment.
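
As a tiny example of what "testing the automation" can look like, this hedged sketch checks a manifest before a pipeline applies it; the file name and the rule itself (requiring resource limits) are assumptions you would adapt to your own policies:

```python
# pip install pyyaml
import sys
import yaml

# Hypothetical manifest path; in a pipeline this would come from the repo
with open("deployment.yaml") as f:
    docs = list(yaml.safe_load_all(f))

problems = []
for doc in docs:
    if doc and doc.get("kind") == "Deployment":
        for container in doc["spec"]["template"]["spec"]["containers"]:
            # Example rule: every container must declare resource limits,
            # otherwise a bad config can starve or overrun the node
            if "limits" not in container.get("resources", {}):
                problems.append(
                    f"{doc['metadata']['name']}/{container['name']}: no resource limits"
                )

if problems:
    print("\n".join(problems))
    sys.exit(1)  # fail the pipeline before anything reaches the cluster
print("Manifest checks passed")
```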

In summary, always TEST!

5. Watch out for the $urpri$e$ 💲💲💲


This one is simple: the more nodes you add, the more money you pay. For example, one of the main ideas of having containers, and Kubernetes on top of them, is autoscaling, and there are multiple flavors: the Cluster Autoscaler, the Horizontal Pod Autoscaler, and the Vertical Pod Autoscaler. The best way to set all of this up always depends on the type of application you are building. You'll have to understand how it behaves, what the best metric to scale on is, and so on; if this is not handled with care, it could scale up without really needing to and cost you much more. Or, even worse, it could scale down very aggressively, damaging availability and response times and directly impacting the end user! So you'd better be prepared: test scaling scenarios and develop a strategy for reviewing these settings as your business grows in end users.
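
To reason about how the Horizontal Pod Autoscaler drives replica counts (and therefore nodes and cost), here is a small sketch of its published scaling formula; the numbers are made up just to show how quickly replicas, and your bill, can move:

```python
import math

def desired_replicas(current_replicas: int, current_metric: float, target_metric: float) -> int:
    """Core HPA formula: desired = ceil(current_replicas * current_metric / target_metric)."""
    return math.ceil(current_replicas * (current_metric / target_metric))

# Hypothetical numbers: 4 pods averaging 90% CPU against a 60% target
print(desired_replicas(4, 90, 60))  # -> 6 pods (more pods, maybe more nodes, more cost)

# The same target during a short traffic dip scales you back down just as fast
print(desired_replicas(6, 20, 60))  # -> 2 pods (great for cost, risky if the dip was temporary)
```

Picking the metric and the target is the real work; the formula itself will happily amplify a bad choice in either direction.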

Sounds scary 🎃

And it may be 😅, but EKS has really been growing and maturing in the tools it provides to make learning and maintenance easier: from managed node groups, add-ons, and automatic updates, to running containers on Fargate (the serverless way) and thousands of open source integrations from the Kubernetes community, AWS, or both! So, although it sounds complex, EKS provides facilities that make it an attractive service and a productive, safe, and cost-effective solution.

PS: If you have an A Cloud Guru subscription, I invite you to my course A Practical Guide To Amazon EKS, where we cover many of these topics by practicing!

Top comments (1)

Gert Leenders

... and for these reasons I always recommend ECS in favour of EKS 😄 (except if you're already a K8s guru and very familiar with AWS, or if you have very, very specific needs). IMHO ECS is easier but less trendy...

I also hope we can stop wondering about the container orchestrator one day. All we want is just to run and scale containers... like Werner said in his post: Changing the calculus of containers in the cloud.
