I received some feedback on my previous post saying it would be interesting to see the integration with the OpenAI API in a bit more detail.
I was on my way to writing a blog post about that, but then I received a reminder that my CKAD certificate was close to expiring.
So I decided to create a sandbox environment as a study playground for my exam, and to deploy a really basic integration with the OpenAI API in it.
For the deployment of the Kubernetes cluster, I used Amazon Elastic Kubernetes Service (EKS). This AWS service gives us a complete cluster with minimal operational overhead. And to keep the cluster configuration as Infrastructure as Code, I used Terraform.
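As a rough illustration, provisioning an EKS cluster with Terraform can be sketched using the community `terraform-aws-modules/eks` module. This is a minimal sketch, not the repo's actual configuration: the cluster name, version pin, variables, and node group sizing are all placeholder assumptions.

```hcl
# Minimal sketch of an EKS cluster via the community EKS module.
# vpc_id, subnet_ids, names, and sizes are illustrative placeholders.
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  cluster_name    = "ckad-sandbox"
  cluster_version = "1.29"

  vpc_id     = var.vpc_id
  subnet_ids = var.private_subnet_ids

  # One small managed node group is enough for a study playground.
  eks_managed_node_groups = {
    default = {
      instance_types = ["t3.medium"]
      min_size       = 1
      max_size       = 2
      desired_size   = 1
    }
  }
}
```

After `terraform apply`, `aws eks update-kubeconfig` points `kubectl` at the new cluster.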
Having said that, my implementation is literally a sandbox that shouldn't be used for production or even development environments, but that doesn't mean you cannot cover those scenarios with these tools. For production workloads, I would totally recommend having a look at the Terraform Blueprints for EKS. They cover implementations that follow Kubernetes cluster best practices for networking and security, and they also include several add-ons like Argo, Calico, Kafka, and Airflow, among others.
As for the resources deployed inside Kubernetes, they are based on a simple frontend-backend scenario. Each application is a K8s Deployment with an init container and a "normal" container.
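A Deployment following that pattern can be sketched roughly like this. The image reference, labels, and the init container's task are made up for illustration; they are not taken from the repo:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      initContainers:
        # Placeholder init task: must run to completion before the app container starts.
        - name: init-prepare
          image: busybox:1.36
          command: ["sh", "-c", "echo preparing && sleep 2"]
      containers:
        - name: backend
          # Hypothetical ECR image reference.
          image: <aws_account_id>.dkr.ecr.<region>.amazonaws.com/backend:latest
          ports:
            - containerPort: 8000
```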
The frontend is a really basic React app that sends a prompt to the backend. The backend is built with Python and FastAPI, exposing a single HTTP POST method that prepares and forwards the request to the OpenAI API. Both applications are then containerized with Docker and pushed to ECR.
Other K8s resources shown are a Job and a CronJob, accessing the backend and frontend respectively.
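A CronJob that periodically hits one of the in-cluster Services could look roughly like this sketch; the schedule, Service name, and port are illustrative assumptions:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: frontend-check
spec:
  schedule: "*/5 * * * *"   # every 5 minutes (illustrative)
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: curl
              image: curlimages/curl:8.5.0
              # Hypothetical in-cluster Service "frontend" in the default namespace.
              args: ["-sS", "http://frontend.default.svc.cluster.local:80/"]
```

A plain Job looks almost identical, minus the `schedule` and `jobTemplate` wrapper, and runs once to completion.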
All of this is in my repository ckad-sandbox, which only has two branches: the master branch and a "fix me" branch, which I hope can be useful for people preparing for the CKAD exam.
What I really like about this exam is that it is 100% hands-on, which means we have a limited amount of time to fix the issues in the different clusters presented to us. Here you can find more information about the exam, like the curriculum and details on which topics to study.
Of course, if you are planning to do the CKA exam, then a "local" K8s cluster is the one you should install and configure, but the scope of the CKAD allows us to focus just on how the applications should be deployed and configured inside K8s. This means we can save study time by not wasting too much energy deploying the cluster, as we can rely on EKS to have this done for us.
Finally, this implementation could be used as a base for real-world scenarios with some adjustments depending on your use case, like:
- Having more than one replica, depending on the amount of traffic we expect for the frontend and backend.
- Having a much better definition of limits and quotas for our namespaces.
- Enabling Pod security contexts for our deployments.
- Choosing better persistent storage; currently it uses the local disk of the nodes, which is not at all recommended for production environments. Here we could look at EBS volumes as one of many options.
- Defining an Ingress resource, backed by a controller such as the AWS Load Balancer Controller (formerly the ALB Ingress Controller), in order to have a proper way to connect to our application from the outside.
- Enabling Calico in order to be able to use K8s Network Policies within the cluster, which are not enforced by default in EKS.
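On that last point: once a policy engine like Calico is in place, a NetworkPolicy restricting the backend to traffic from the frontend could be sketched like this. The `app` labels and port are assumptions matching a typical frontend-backend setup, not the repo's actual manifests:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend        # assumed label on the backend pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend   # only frontend pods may connect
      ports:
        - protocol: TCP
          port: 8000
```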
I hope this information will be useful on your path to learning more about Kubernetes, EKS, Terraform, and OpenAI.
And any feedback is more than welcome!