Building a Basic Local Kubernetes/Docker Setup

I found myself needing to test out some Kubernetes stuff locally. To do so I needed to build a cluster, but I found the resources that came up a bit lacking. I actually got stuck trying to figure out how to hit my cluster externally. So once I figured out what I needed to know, I decided to try a few different approaches to making a basic container orchestration setup. Hopefully this is helpful to someone.

The Tools

The tools we need are:

  • docker
  • kubectl (the kubernetes command line tool)
  • kind and/or minikube (for running a local cluster)
  • deno (for the sample app)

Note: You may need to reboot and check that hardware virtualization is enabled in your BIOS if you are installing docker for the first time.

I feel like a real devops engineer already.

A simple app

I built a simple app using Deno, mostly because Deno is cool and this is primarily a javascript blog. You can use node too; the app code isn't going to be the biggest focus.

const port = 8080;

const server = Deno.listen({ port });
console.log(`Started on port ${port}`);

async function handle(connection){
    const httpConnection = Deno.serveHttp(connection);
    for await(const requestEvent of httpConnection){
        requestEvent.respondWith(new Response(`Hello from Application!`, {
            status: 200,
            headers: {
                "Content-Type" : "text/plain"
            }
        }))
    }
}

for await (const connection of server){
    handle(connection);
}

Deno is super simple; we don't even need a single import for this.

You can run it with deno run --allow-all --unstable app/server.js. If you haven't used deno, --allow-all means we're allowing all permissions, which are restricted by default unlike node. Really --allow-net is better, but for a little test project --allow-all covers the bases so you don't need to fiddle with it if you use more permissions later (we will, when we read an environment variable). --unstable is because this depends on a few unstable parts of deno. Hopefully that gets taken care of soon.

Once running you should be able to hit it on port 8080.

At this point I will start making scripts for all my steps so that I can run them without remembering all the flags. I typically do this as shell scripts, which are a little weird to use on Windows. If you have git installed (and I don't know why anyone reading this wouldn't) you can add C:\Program Files\Git\bin to your path, which lets you use sh. If you are a Powershell user that works too, as does WSL. The point is to start building these steps up. I also sometimes use a package.json to run them as scripts because that's actually a really handy feature, even if I don't need node.
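
For example, a package.json like this (the script names are just my own convention) means you can type npm run start instead of remembering the flags, and you can keep adding entries as the steps build up:

{
  "scripts": {
    "start": "deno run --allow-all --unstable app/server.js"
  }
}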

Running an application in Docker

We can run the app. Now let's run the app in a container. First we need a dockerfile:

FROM denoland/deno

ADD . .

CMD ["deno", "run", "--allow-all", "--unstable", "app/server.js"]

The first line says we're inheriting from the denoland/deno image, which as you might expect contains deno (it's the official image, and its base OS is Debian Linux if you're wondering). These names refer to images on dockerhub by default. If you are using node or something else, find one of the official images there and replace the name.

The second line says we're going to copy things from the host's current working directory (".") to the current working directory inside the container (you can change the paths if necessary).

The last line is the command that is run when you start the container. You can use a string, but the array format is preferred as it doesn't go through the shell. All we're doing is the same command we used to run the server on the outside.

Now let's build our image: docker build -f docker/app.dockerfile -t my-app . where "docker/app.dockerfile" is the path to the dockerfile (it will use ./Dockerfile by default) and "my-app" is the name or tag you want to give the image. I highly recommend you tag the images, otherwise you have to look up generated ids when you want to delete them.

If docker is installed and running it will download some stuff and build the image. At this point I'd recommend the official docker extension for vscode if you are using vscode like me. Otherwise you'll need to learn some more docker commands to list images and running containers. Once we have the image we can then launch a container. A container acts like a light-weight VM (though under the hood it's an isolated process rather than a true VM). I say "light-weight" but images are still a couple hundred megabytes, so you may wish to clean up after yourself. To run it use docker run -p 8080:8080 --name my-app -it my-app

"--name my-app" is the name of the container "my-app" at the end is the name you gave the image you wish to build. "-it" means you want to get the terminal output and interact with the process, this is technically optional. -p 8080:8080 is how the host port connects to the container port. The left is the host, the right is the container, so we're just passing though port 8080.

At this point you should see the same output and be able to hit localhost:8080 in the browser and get the same result. This time it's in a container, and we can move this container anywhere we want and it will still work as long as docker is there.

Lastly, let's clean it up. If we make changes we need to remove the running container (as there can't be two with the same name). We may also want to remove the image just to save space.

docker rm -f my-app will kill the container. The -f is force, because we don't care about the state of it; if we did, we might be gentler. docker image rm my-app will remove the image.

Next you can choose one of two tools for running a local cluster. A "cluster" is basically a kubernetes universe.

Setup a cluster with Kind

First we'll start by using Kind as a cluster. You can get install instructions here: https://kind.sigs.k8s.io/docs/user/quick-start/#installation

Once set up, all we need to do is run kind create cluster --name my-app to create a new cluster.

Before we can deploy we need the image for "my-app" to be available to the cluster.

kind load docker-image my-app --name my-app

The first "my-app" is the image name. You'll need to have built the image with docker first with the name "my-app". The --name "my-app" is the cluster name which is needed to disambiguate, it corresponds to the name when creating the cluster.

Once you are done with the cluster and want to delete it you use kind delete cluster --name my-app. I suggest you do this after each test to make sure you are in a clean state.
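
Following my own advice about scripting the steps, the kind workflow so far might look like this as a shell script (the file names and paths are just my layout, adjust to taste):

#!/bin/bash
# kind-up.sh - build the app image, create a local cluster, and load the image into it
docker build -f docker/app.dockerfile -t my-app .
kind create cluster --name my-app
kind load docker-image my-app --name my-app

# and when you're done testing, tear it all down:
# kind delete cluster --name my-app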

Setup a cluster with Minikube

If you don't want kind, you can use minikube instead, which is another way to run Kubernetes clusters locally. You can find install instructions here: https://v1-18.docs.kubernetes.io/docs/tasks/tools/install-minikube/.

Starting is easy enough, it's just minikube start. Deleting is just as easy: minikube delete.

To get images onto minikube we can use the command minikube image load my-app where "my-app" is the name of your docker image. Note: searching for how to do this will bring up a bunch of other ways, but this seems to be the latest and easiest way to do it.
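
The minikube equivalent of the script above is even shorter (again, just a sketch using my file layout):

#!/bin/bash
# minikube-up.sh - build the app image, start minikube, and load the image into it
docker build -f docker/app.dockerfile -t my-app .
minikube start
minikube image load my-app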

Build a kubernetes deployment

Oh boy, here we go. At least the cluster is easy enough to set up.

Once a cluster is created there's really nothing in it. We need to create "pods". Pods are the smallest unit of infrastructure in kubernetes. They represent a co-located set of containers (or just one container in the simple case). Abstractly, a pod represents one instance of an application. To create a pod that runs our application we need to dive into some yaml.

The first sort of resource we'll set up is called a "deployment". A deployment describes a group of pods and keeps them running.

# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deploy
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: my-app
        imagePullPolicy: Never
        env:
          - name: MESSAGE
            value: Hello World
        ports:
        - containerPort: 8080

First we start with the apiVersion, which works a bit like an XML schema declaration: it says which version of the API this resource definition conforms to. Then the "kind", which is the type of resource we're creating. We can give the deployment metadata, which is a bunch of freeform tags. "spec" is the actual blueprint to build. Unlike most other resources, deployments are defined with templates, which means we're giving the blueprint for a configuration of a pod rather than the configuration of the pod itself (we could also do the latter with kind "Pod"). This is because we might make many pods from the same blueprint, like in an autoscaling environment. The "selector" is like a CSS selector: we find pods matching the labels (in this case app = "my-app") and update them if they exist.

Now to describe the pods themselves. Each pod can have metadata as well, in this case a label (the same label we select on above). The spec for the pod includes the containers in the pod; we just have one. We give it a name and the name of the image (this corresponds to the tag or "-t" argument you gave docker). "imagePullPolicy" tells it we don't want to automatically download the image. Typically a cluster up in the cloud will download images from a registry like dockerhub, but since we're using local images that would fail, so we're turning it off. Lastly is the port, which is the port that gets exposed; by saying containerPort 8080 the container will expose 8080 to the rest of the pod. As a bonus I've added some "env". These are environment variables that are given to the container and one of the simplest ways to pass data in.

Since we're adding an environment variable it makes sense to read it so I'll update the application slightly (remember to rebuild the images!).

const port = 8080;

const server = Deno.listen({ port });
console.log(`Started on port ${port}`);

async function handle(connection){
    const httpConnection = Deno.serveHttp(connection);
    for await(const requestEvent of httpConnection){
        const message = Deno.env.get("MESSAGE") ?? "";
        requestEvent.respondWith(new Response(`${message} from Application!`, {
            status: 200,
            headers: {
                "Content-Type" : "text/plain"
            }
        }))
    }
}

for await (const connection of server){
    handle(connection);
}

To actually apply this deployment we use kubectl apply -f path/to/deployment.yaml. The -f here means "use this file". It's important to note that we're describing the state the cluster should be in (one pod running our app) and kubernetes will try its best to maintain that. Also keep in mind yaml is whitespace sensitive; if you have errors it might be because things aren't lined up correctly.

At this point we have a pod but we can't really do anything with it because it's not accessible.

Build a service

Pods in kubernetes are ephemeral; they can be created and destroyed at any time with random addresses, so we need a stable way to reach a group of resources at a specified address. This is where a service comes into play. We can add the service to the same deployment.yaml, since yaml lets us declare multiple documents in one file as long as the documents are delimited with \n---\n. So like:

# Deployment spec
---
# Service spec

We'll look at the service spec now.

apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app
  ports:
  - port: 8080
    targetPort: 8080

It's a kind "Service" with a metadata name; straightforward enough. For the spec we select all pods with app=my-app. The "targetPort" is the port on the pod and "port" is the port the service is exposed on. Now, inside the cluster, port 8080 of this service points at our app pod's port 8080, so hitting the service should show the app.
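
As a side note, the service's metadata name also becomes a DNS name inside the cluster, so if you ever want to sanity-check it from a shell inside a pod (see the kubectl exec command in the debugging section below), something like this should work, assuming curl is available in the image:

curl http://my-app-service:8080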

Expose it externally

We have a service which gives us a stable address; now we need to expose it so we can access the service externally. We can do this with port forwarding.

kubectl port-forward service/my-app-service 8080:8080

service/my-app-service means it's a service with the name "my-app-service", which we gave it above. Then we say we want to expose its port 8080 (services can have multiple ports) on external port 8080.

With this you should be able to visit localhost:8080 and see the application. In fact, you should be able to see the message "Hello World from Application!" because it's reading the environment variable we gave it.

This is a very simple passthrough. For things like load balancing you need something more complicated called an ingress, which has to be configured separately. I won't be dealing with that today.

Debugging

I won't go too deep into debugging, but there are a few basic commands to know to help you unstick yourself (an example session follows the list):

  • kubectl get {resource} - where {resource} is "pods", "services", etc. This will list all of that resource in the cluster so you can see what's there (did the service actually get created?). In particular, you can see whether the pods are healthy.
  • kubectl describe {resource} {name} - where {name} is the name of the resource (you can find it from the get command). This will give a more detailed description of the resource.
  • kubectl logs {podName} -c {containerName} - You can get the generated {podName} from the get command; the container name is the name of the container as defined in the yaml configuration (the -c flag is only needed when a pod has more than one container). This will dump the console logs produced by the running process.
  • kubectl exec -it {podName} -c {containerName} -- /bin/bash - This will open a terminal in the container on the pod so you can run commands. Depending on the Linux version you might need "/bin/sh" instead. Great for running tools like curl to explore the environment.
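
For example, with the deployment above a session might look like this (the pod name has a generated suffix, so substitute whatever kubectl get pods prints for you):

# list pods to find the generated pod name (something like app-deploy-xxxxxxxxxx-xxxxx)
kubectl get pods
# get a detailed description of the deployment, including events like failed image pulls
kubectl describe deployment app-deploy
# dump the app's console output ("app" is the container name from the yaml)
kubectl logs app-deploy-xxxxxxxxxx-xxxxx -c app
# open a shell inside the container
kubectl exec -it app-deploy-xxxxxxxxxx-xxxxx -c app -- /bin/bash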

Bonus: Docker Swarm

Kubernetes is fine and all; it's probably what your enterprise-grade company will use. But sometimes, just sometimes, you want a simple cloud setup with less complexity and less yaml (note there's still some yaml). For this there's docker swarm. I'm a bit sad it didn't take off like Kubernetes because, at least as a layman in scalable cloud applications, I find it a lot more elegant, and you need two fewer binaries to run it.

By default swarm is not enabled in docker. You need to run docker swarm init.

Docker swarm is configured with a docker-compose.yaml file, which is similar to the kubernetes deployment.yaml we built (though far simpler). It used to be the case that you would use a related tool called "docker-compose" to build environments from these files, but much of that is now built into docker itself. docker-compose is still hanging around with slightly different use-cases (it can build images before deploying), but it seems mostly obsolete to me because we can do the same thing in two steps. Let's start with a compose file:

# app.docker-compose.yaml
version: "3.8"

services:
  my-app:
    image: "my-app"
    environment:
      - MESSAGE="Hello Swarm"
    ports:
      - "8080:8080"

It starts with a version, which is not dissimilar from the kubernetes apiVersion and notes which version of the spec the file conforms to. Then we list out services, which are similar to kubernetes services. Ours has one container, "my-app", using the image "my-app" that we built earlier. We can pass in environment variables just like kubernetes, and we expose container port 8080 on host port 8080. That ports line does the same thing the service and port forwarding did in kubernetes: it sets up a direct line from localhost:8080 to the container's port 8080 in one line.

To deploy it to the docker engine use the command:

docker stack deploy -c docker/app.docker-compose.yaml my-app

The -c indicates the compose file.
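
In keeping with the script-everything theme, the whole swarm flow fits in a few lines (the file path here is my layout):

#!/bin/bash
# swarm-up.sh - build the image and deploy the stack
docker build -f docker/app.dockerfile -t my-app .
docker swarm init 2>/dev/null || true   # no-op if swarm mode is already enabled
docker stack deploy -c docker/app.docker-compose.yaml my-app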

BTW if you get a message like "services.my-app Additional property {name} is not allowed" you probably misspelled something.

If all is well this will start your app on localhost:8080.

The swarm will persist between reboots. To get rid of it use docker stack rm my-app.

Debugging Swarm

I have less experience here, but the main commands you can use are (a short example follows the list):

  • docker service ls - To list services
  • docker ps - To list docker processes, that is, running containers
  • docker exec -it {containerId} /bin/bash - get a shell in the running container (use /bin/sh depending on the Linux flavor). You can look up the container id from docker ps.
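
For instance (services deployed with docker stack are named {stack}_{service}, so ours comes out as my-app_my-app):

# list the services in the swarm
docker service ls
# tail the app's console output
docker service logs my-app_my-app
# find the running container's id, then open a shell inside it
docker ps
docker exec -it {containerId} /bin/bash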

Beyond

This was just a very basic tutorial. The next steps might be adding replicas or changing the resource limits on pods/containers. These tools are ridiculously complex.
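
As a taste of where that might go, here's a sketch of what replicas and resource limits could look like in the deployment spec (the values are placeholders, not recommendations):

# deployment.yaml (excerpt)
spec:
  replicas: 3                 # run three identical pods instead of one
  template:
    spec:
      containers:
      - name: app
        image: my-app
        resources:
          limits:
            cpu: "250m"       # a quarter of a CPU core
            memory: "128Mi"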

You can find the full code here: https://github.com/ndesmic/orchestration-basics/tree/v1.0
