Ken Moini
All-in-one Kubernetes Host on DigitalOcean

So I'm going to kick off this self-imposed #100daysofcode tournament with a public spectacle! Now, if you know anything about me, you'd know that I love Kubernetes - OpenShift is my favorite flavor, but I also like vanilla and Kubernetes-as-a-Service. The thing with those managed services is that you get some lock-in, and some vendors are a little behind on versions. That's totally understandable with a 9-month lifecycle, so I'm not knocking anyone who wants to verify that their platform can effectively deliver new releases as a service.

Cop some Kubernetes

I've been developing a new Terraform/Ansible set of scripts to deploy a scalable Kubernetes cluster on different infrastructures outside of the Big Three (AWS, Azure, GCP). Now, that's kinda overkill for simple development purposes so today we'll be deploying a single-node, all-in-one Kubernetes host on DigitalOcean with an Ansible Playbook and Terraform script!

We could just use DigitalOcean's managed Kubernetes service, but you're limited to one cluster, and the latest version it offers as of this writing is 1.14, while upstream Kubernetes is already at 1.17.

We could use Minikube and develop locally, but deploying some things, such as SSO, really does require an FQDN that is routable from the Internet - and using public DNS is easier than having to run DNS locally.

Anywho, let's get a move on, gotta set some things up in DigitalOcean, build this here Ansible Playbook and Terraform...script (?), and get to deploying on Kubernetes!

DigitalOcean Setup

There isn't too much to do in DigitalOcean really. Outside of an account, a basic API key is all you need. So real quick...

  1. Sign up for DigitalOcean - $100 credit for new customers with that referral link there!
  2. Create a DigitalOcean API Personal Access Token - this will be used to authenticate requests to their API with your account. Save this API Key somewhere safe as you'll only be able to see it once when generated.
  3. Have a Domain pointing to DigitalOcean's DNS, with the corresponding zone. Don't need a fancy zone, just a basic one will do actually since we'll provision everything else with some API calls.

Great, we've got an account, keys, a domain and DNS, let's get to rocking the next part. You'll need a few things in order to get to automatin' and deployin'.

Developer Setup

For these purposes, we'll assume you're running on a *nix system, be that Linux, Mac OS X, or Windows Subsystem for Linux (WSL). I personally run it from WSL myself, usually out of the VSCode terminal - you can do it natively on Windows with Git Bash but it's kinda pokey and you run into issues with SSH key permissions with NTFS stores...anywho, yeah, *nix system of sorts, moving on!

Terraform

Let's start with Terraform real quick since it's probably the easiest to grab. It's a single binary made by HashiCorp, who make a bunch of great DevOps-y solutions; Terraform is used to deploy infrastructure into different providers, such as public cloud services.

You could do it all in Ansible, but I find the Terraform provider to be more mature for DigitalOcean than the Ansible modules are. Plus, one benefit of using Terraform is that it automatically creates a destruction function to tear down the environment very quickly.

One last note: in working with Terraform, you'll likely find resources and code online that don't work for some reason - that's probably because they were written pre-version 0.12. Between 0.11 and 0.12, the configuration language changed in ways that broke backward compatibility.
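For example, the interpolation-string wrapping that 0.11 required everywhere became unnecessary in 0.12, which supports first-class expressions:

```hcl
# Terraform 0.11 style - everything wrapped in an interpolation string:
#   token = "${var.do_token}"

# Terraform 0.12 style - variables and expressions used directly:
token = var.do_token
```

If you hit an older snippet that errors out, updating the interpolation syntax is usually the first thing to try.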

Download Terraform

  1. Grab the latest version of Terraform from the Downloads page
  2. Unzip the file, add the executable permission bit, and move to somewhere within your $PATH
$ wget https://releases.hashicorp.com/terraform/0.12.19/terraform_0.12.19_linux_amd64.zip
$ unzip terraform_*.zip
$ chmod +x terraform
$ sudo mv terraform /usr/local/bin/
$ terraform version

That should return the installed Terraform version, as long as /usr/local/bin is in your $PATH.
If you don't know what is available in your $PATH, just do a quick echo $PATH at the terminal prompt.

Ansible for Automation

Terraform is great for infrastructure management, and Ansible can do that too, but Ansible shines for configuration management and general automation. Don't just take it from me - Red Hat, who maintain both Ansible and OpenShift, used Ansible heavily in OpenShift version 3.x, but moved to Terraform and Ignition in OpenShift version 4.x.

Ansible is verbose, simple, and efficient automation for almost anything. It does so in an idempotent way (mostly, it's per-module support based), which means that the state listed in the Ansible Playbook will be the state enacted. I've done crazy stuff with it like fire off Ansible Jobs in Ansible Tower from an Amazon Echo response that was relayed from a Lambda function in order to restart my network core by voice. It's very versatile...

Install Ansible

So there are eight-ways-to-Sunday when it comes to installing Ansible. It can be done from Git, pip, your operating system's package manager, and more. I suggest adding the Ansible repositories for your operating system and installing via the package manager to get the latest versions with streamlined updates.

Without dragging this thing out, I'll simply guide you to the right place to get your Ansible: https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html

kubectl (totally not kube-cuddle)

You'll also need kubectl locally. It's another binary, this time from the Kubernetes project. You can find the official download and installation instructions here: https://kubernetes.io/docs/tasks/tools/install-kubectl/

BYOE (Bring Your Own Editor)

You will also likely benefit from the use of an editor or IDE - I suggest VSCode.

Automating DNS

So honestly, the DigitalOcean DNS functions in Ansible and Terraform currently aren't robust enough for most anything outside of a basic @ A record for a domain. That's not too bad, we can do some basic automation via Bash and cURL!

Go ahead and grab a copy of this file: https://gist.githubusercontent.com/kenmoini/40d8e53212afa91a5b33e84cb2c2ac3b/raw/615b6242c56fb76cb1969126005f92bff2f5528b/config_dns.sh

$ wget https://gist.githubusercontent.com/kenmoini/40d8e53212afa91a5b33e84cb2c2ac3b/raw/615b6242c56fb76cb1969126005f92bff2f5528b/config_dns.sh
$ chmod +x config_dns.sh
$ ./config_dns.sh
  • You'll need the curl and jq programs installed - the script has a function called checksForProgram which checks for them and will complain if they're missing.
  • It can be used to set all sorts of DNS Records at DigitalOcean - the helper displayed when running the script bare will detail the available arguments.
  • This script is called by the Terraform provisioner after everything is configured, so no need to call it right now manually.
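For a flavor of how such a guard works, here's a sketch of what the checksForProgram helper might look like - this body is my reconstruction for illustration, not the actual code from the Gist:

```shell
#!/bin/bash
# Hypothetical reconstruction of a checksForProgram-style helper
checksForProgram() {
  if ! command -v "$1" >/dev/null 2>&1; then
    echo "Required program '$1' not found - please install it." >&2
    return 1
  fi
}

# The real script would check for curl and jq before making API calls:
checksForProgram sh && echo "sh is available"
```

It's a handy pattern for any Bash automation that shells out to external tools: fail loudly up front rather than halfway through a run.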

Terraform this DigitalOcean

So the first thing we need is a simple DO Droplet - and well, that's really about it. There's not much infrastructure we need to get going since it's just a single-node K8s instance. DNS is handled via that handy-dandy script from just a moment ago.

First, let's create a new directory and a few files in there.

$ mkdir do-aio-k8s
$ cd do-aio-k8s
$ touch deploy_droplet.tf

NOTE: If you plan on launching this into a Git repo, make sure to create a .gitignore file to hide all those private .tfstate files and stuff that's generated...

Add the following code to the deploy_droplet.tf file

  • Historically, it's seemed that NYC3 has been one of the more available and feature-rich regions DO has available - feel free to change that.
  • You can query the API for sizes and slugs
  • The domain needs to be pointing to DO DNS already, with a basic DNS Zone in the DO Panel.
  • The stack_name will be the subdomain for the cluster, eg aiok8s.example.com
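The real file is in the Gist linked at the end of this post; as a rough sketch (the variable names and defaults here are my assumptions, not necessarily what the Gist uses), it looks something like this:

```hcl
# Hypothetical sketch of deploy_droplet.tf - see the linked Gist for the real thing
variable "region" {
  default = "nyc3" # historically one of DO's more feature-rich regions
}

variable "droplet_size" {
  default = "s-4vcpu-8gb" # query the DO API for current size slugs
}

variable "domain" {
  default = "example.com" # must already be pointing at DO DNS
}

variable "stack_name" {
  default = "aiok8s" # becomes the subdomain, e.g. aiok8s.example.com
}
```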

Pop open the deployer.tf file next, and drop this jazz in:

NOTE: Also a good idea to add that ./inventory file to the .gitignore file...

A few things are going on in here:

  • Setting a few variables - passed in at the command line when applying/destroying the resources - that carry our DigitalOcean Personal Access Token.
  • Setting what Terraform Provider we're using, in this case, DigitalOcean
  • We create a new SSH Key, store it locally to use in Ansible later, and pass it to DigitalOcean as well.
  • Within the locals block we define a local temporary variable, the ssh_fingerprint, used when deploying the Droplet
  • Terraform can use template files and generate the output to a variable. We do that with an inventory.tpl file and generate the Ansible inventory file used later.
  • We'll take that variable rendered from the template_files data block and generate a local_file resource which holds our Ansible inventory.
  • Finally, we'll generate a digitalocean_droplet resource, pulling in a bunch of those variables we defined earlier.
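Putting those bullets together, a minimal sketch of such a deployer.tf might look like the following - again, resource names, the image slug, and file paths are my assumptions; the authoritative version is in the linked Gist:

```hcl
# Hypothetical sketch of deployer.tf - see the linked Gist for the real thing
variable "do_token" {} # passed in with -var at apply/destroy time

provider "digitalocean" {
  token = var.do_token
}

# Generate an SSH key, save it locally for Ansible, and register it with DO
resource "tls_private_key" "ssh" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "local_file" "priv_key" {
  content         = tls_private_key.ssh.private_key_pem
  filename        = "./${var.stack_name}.${var.domain}/priv.pem"
  file_permission = "0600"
}

resource "digitalocean_ssh_key" "deployer" {
  name       = "${var.stack_name}-key"
  public_key = tls_private_key.ssh.public_key_openssh
}

locals {
  ssh_fingerprint = digitalocean_ssh_key.deployer.fingerprint
}

resource "digitalocean_droplet" "k8s_aio" {
  image    = "centos-7-x64" # assumption - check the Gist for the actual image
  name     = "${var.stack_name}.${var.domain}"
  region   = var.region
  size     = var.droplet_size
  ssh_keys = [local.ssh_fingerprint]
}

# Render the Ansible inventory from a template
data "template_file" "inventory" {
  template = file("./inventory.tpl")
  vars = {
    droplet_ip   = digitalocean_droplet.k8s_aio.ipv4_address
    ssh_key_path = local_file.priv_key.filename
  }
}

resource "local_file" "inventory" {
  content  = data.template_file.inventory.rendered
  filename = "./inventory"
}
```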

One of the cool things about Terraform is that you can kind of just slap things in wherever, it'll figure out the dependencies and where it doesn't you can manually specify with depends_on.

Real quick, let's make that inventory.tpl file.

Pretty simple there - we take in the generated Droplet's address and the path to the SSH key file used to deploy and connect to it. Terraform will use this to generate the ./inventory file.
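A minimal sketch of such a template, assuming the droplet_ip and ssh_key_path variable names from the template_file data block (the real file is in the linked Gist):

```
[all]
${droplet_ip} ansible_user=root ansible_ssh_private_key_file=${ssh_key_path}
```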

Now that we have all the pieces to deploy the Droplet, let's roll together a few commands to do so...

$ export DO_PAT="your-token-here"
$ terraform init
$ terraform plan -var "do_token=${DO_PAT}"
$ terraform apply -var "do_token=${DO_PAT}"
  • First, replace your-token-here with your DO Personal Access Token from earlier.
  • Initialize the Terraform environment. This will scan for providers and download the latest versions from the Terraform Registry.
  • Terraform will produce a plan of what will be done, also good for validating the steps taken.
  • Nothing is done till you actually run the terraform apply command to push your infrastructure.
  • Inversely, you can run the following to destroy the environment:
$ terraform destroy -var "do_token=${DO_PAT}"

Great, we've got a single Droplet running, the DNS is configured, we've got our infrastructure. Now let's use some Ansible to configure the host and deploy Kubernetes in a single-node fashion.

I <3 Automation

Another thing that you should know about me - I try to automate everything. I got a fancy pour-over Chemex coffee maker, and then started to think of ways to automate it - cause it takes a bit of labor and time. What if there were a device to heat water to a certain point, pour it over in a circular pattern over a held filter of ground beans that dripped into a pot...wait, damn, that's just a normal el-cheapo $25 coffee maker isn't it?

Anywho, let's create an Ansible Playbook. Ansible is a bit more linear, so it doesn't just figure out the dependencies and it doesn't create a destroy routine, you have to build that. Thankfully, like Kubernetes, it's really good at maintaining states and we don't have to worry about uninstalling Kubernetes from this system since it's just as easy to just tear it down and start over.

Let's make a new file called ansible_configure_node.yaml and drop the following code in there:

WOW! That's a spicy meatball! And it pretty much comes with all the sauce. (You'll need to edit lines 24-25 to reflect your domain and stack_name from earlier.) There's a lot going on here - I won't go into too much detail, but at a high level we're:

  • Creating a line in the /etc/hosts file of every host in the inventory (handy!)
  • Configuring kernel modules, a few system settings, disabling SWAP since K8s h8s it
  • Updating the system and installing some packages such as Docker, starting services
  • Installing Kubernetes as a single-node control plane, removing the taint preventing workloads on masters, and deploying some helpful things such as your container networking, a dashboard, and an admin-user ServiceAccount to authenticate easily with
  • Pulling the kube.conf file and replacing the internal IP with the external IP
  • Echo details on how to access the Kubernetes node
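To give a sense of the shape of those tasks, here's a heavily abridged, hypothetical sketch - the full playbook, with all the module parameters and the networking/dashboard/ServiceAccount pieces, is in the linked Gist:

```yaml
# Abridged sketch - not the actual playbook from the Gist
- name: Configure an all-in-one Kubernetes node
  hosts: all
  become: true
  tasks:
    - name: Disable swap, since K8s h8s it
      command: swapoff -a

    - name: Install Docker
      yum:
        name: docker
        state: present

    - name: Initialize a single-node control plane
      command: kubeadm init --pod-network-cidr=10.244.0.0/16

    - name: Remove the taint that prevents workloads on masters
      command: kubectl taint nodes --all node-role.kubernetes.io/master-
```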

So let's rock and roll that Playbook out the door, run the following command to configure the node we just provisioned with Terraform to operate as an AIO K8s node:

$ ansible-playbook -i inventory ./ansible_configure_node.yaml
  • The -i flag specifies the inventory file, which we generated earlier with Terraform
  • This will operate top-to-bottom to build you a car and hand you the keys. You could even string this playbook execution along as an additional local-exec call when creating the Droplet in Terraform for a single-line deployment.
  • You can also run ansible-playbook -i inventory ./ansible_configure_node.yaml --tags get_token in case you need to just access the authentication token.

That's about it, sit back and watch it make you a big ol' self-contained (heh, contained...containers...) Kubernetes node.

Accessing the Kubernetes Node & Dashboard

Terraform created an SSH key, which we stored in a directory/file called ./{ stack_name }.{ domain }/priv.pem. Use that to authenticate to the node via SSH if need be - though SSHing in is kind of an anti-pattern for accessing Kubernetes, really. Ideally, just pop open kubectl proxy and navigate to the Service you need, such as the Dashboard.

Speaking of, when you run the Ansible Playbook, it'll provide you instructions to access the Kubernetes cluster and dashboard, but for convenience, here it is as well:

# after running the ansible-playbook command...
$ export KUBECONFIG=./pulled-kube.conf # do this once, per terminal session
$ kubectl cluster-info
$ kubectl proxy

Once you have the node provisioned, export a variable called KUBECONFIG pointing at the downloaded ./pulled-kube.conf file. Do that once per terminal session, or to make it permanent you can merge it into your $HOME/.kube/config file or add the export to your .bashrc. I find it best to just use the exported session variable - that way I can still easily access other Kubernetes clusters and contexts.

The kubectl cluster-info command is kinda extra but handy to check as a quick test.

Running the kubectl proxy command will create a proxy between your computer and the Kubernetes cluster which you can then use to access private services, such as the Kubernetes Dashboard at the following address: http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

That's all, Doc!

Well, that was fun! A few scripts and we've got some Infrastructure-as-Code, Configuration-as-Code, and within about 15 minutes we've got ourselves a nice self-contained Kubernetes node! Hopefully this helps, if you have any questions or suggestions, feel free to share! (yes, I know I could have used the k8s Ansible module instead of shell but whatever, ways to skin cats and all...)

You can find all the source code for this via a Gist: https://gist.github.com/kenmoini/40d8e53212afa91a5b33e84cb2c2ac3b

Thanks for following along and check back later for more Tales from the Script!
