
Abhishek Gupta for ITNEXT


How to use Open Application Model to run applications on Kubernetes

The Open Application Model (OAM) is a specification for building cloud-native apps, with Rudr as its Kubernetes-specific implementation. In this blog, we will look at a couple of examples to reinforce the OAM and Rudr concepts covered in the previous blog.

We will start off by running a simple application on Kubernetes using Rudr components and then see an example of how to use a Rudr Trait.

The code is available on GitHub.

Pre-requisites

At its core, Rudr is a custom controller which runs as a Kubernetes Deployment. To install Rudr, you will need a Kubernetes cluster with versions 1.15.x or 1.16.x (these are the supported versions at the time of writing). Any cluster will work, but I have used Azure Kubernetes Service for the examples in this blog.

If you want to use AKS, all you need is an Azure subscription (grab a free account here!) and the Azure CLI to set up a managed Kubernetes cluster using the az aks create command.

Here is an example that spins up a single-node cluster running Kubernetes version 1.15.7:

az aks create --resource-group <AZURE_RESOURCE_GROUP> --name <AKS_CLUSTER_NAME> --kubernetes-version 1.15.7 --node-count 1 --node-vm-size Standard_B2s --node-osdisk-size 30 --generate-ssh-keys

//point kubectl to AKS
az aks get-credentials --resource-group <AZURE_RESOURCE_GROUP> --name <AKS_CLUSTER_NAME>

//confirm
kubectl get nodes

After installing Helm 3, you can proceed with the Rudr setup:

//clone the repo
git clone https://github.com/oam-dev/rudr.git

//install it using Helm
helm install rudr ./charts/rudr --wait

//confirm Rudr Deployment
kubectl get deployment rudr

//check Rudr CRDs
kubectl get crds -l app.kubernetes.io/part-of=core.oam.dev

Deploy a simple app with Rudr

We will start off with a basic example of deploying a simple application using the following Rudr objects: ComponentSchematic and ApplicationConfig.

The application is very simple - it's a containerized app that exposes an endpoint which responds with Hello World! by default or Hello <greeting> if the GREETING environment variable is set. To run this in Kubernetes, the obvious route is to use a Deployment object. Instead, we will create Rudr Custom Resource Definitions (CRDs) to represent our application, submit them to Kubernetes and let the Rudr controller/operator take care of dealing with specific Kubernetes resources.
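
If you want to get a feel for the app before bringing Rudr into the picture, you can run the container directly. This is an optional, hedged sketch - it assumes you have Docker installed locally, and the GREETING value test is just an example:

//run the container locally and try the endpoint
docker run -p 8080:8080 -e GREETING=test abhirockzz/greeter-go

//in another terminal
curl localhost:8080

//expected output
Hello test!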

Deploy Rudr CRDs

We will start by creating a ComponentSchematic. Let's introspect it:

apiVersion: core.oam.dev/v1alpha1
kind: ComponentSchematic
metadata:
  name: greeter-component
spec:
  workloadType: core.oam.dev/v1alpha1.Server
  containers:
    - name: greeter
      image: abhirockzz/greeter-go
      env:
        - name: GREETING
          fromParam: greeting
      ports:
        - protocol: TCP
          containerPort: 8080
          name: http
      resources:
        cpu:
          required: 0.1
        memory:
          required: "128"
  parameters:
    - name: greeting
      type: string
      default: abhi_tweeter

This is a ComponentSchematic called greeter-component whose workloadType is core.oam.dev/v1alpha1.Server - this determines the type of Kubernetes resource created to handle this component. It has a single container which refers to the abhirockzz/greeter-go image on Docker Hub. The parameters section defines a configurable attribute named greeting whose default value is abhi_tweeter. This parameter is referenced as an environment variable in the env attribute of the containers section.

Create the ComponentSchematic as follows:

kubectl apply -f https://raw.githubusercontent.com/abhirockzz/rudr-k8s-sample/master/deploy/component.yaml

//output

componentschematic.core.oam.dev/greeter-component created

This will just create a ComponentSchematic object in Kubernetes - you can use kubectl get components to confirm. The ComponentSchematic cannot really do much on its own. It needs another Rudr entity to work with - the ApplicationConfig. Let's see what that looks like in this case:

apiVersion: core.oam.dev/v1alpha1
kind: ApplicationConfiguration
metadata:
  name: greeter-app-config
spec:
  components:
    - componentName: greeter-component
      instanceName: greeter-app

The ApplicationConfig is what instantiates a ComponentSchematic - in this case, it refers to the greeter-component ComponentSchematic.

Create the ApplicationConfig:

kubectl apply -f https://raw.githubusercontent.com/abhirockzz/rudr-k8s-sample/master/deploy/app-config.yaml

//output
applicationconfiguration.core.oam.dev/greeter-app-config configured

To confirm:

kubectl get applicationconfiguration.core.oam.dev/greeter-app-config -o yaml

Take a closer look at the status section - you should see something similar to this:

apiVersion: core.oam.dev/v1alpha1
kind: ApplicationConfiguration
...
status:
  components:
    greeter-component:
      deployment/greeter-app: running
      service/greeter-app: created
  phase: synced

Check the Kubernetes objects

Rudr created a bunch of Kubernetes resources for us: a Deployment, a Pod, and a Service.
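
Before looking at them one by one, you can confirm the Deployment and the Service in a single command (a hedged shortcut - both are named after the instanceName, greeter-app, while the Pod name includes a generated hash):

kubectl get deployment/greeter-app service/greeter-app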

To check the Deployment

kubectl get deployment/greeter-app

NAME          READY   UP-TO-DATE   AVAILABLE   AGE
greeter-app   1/1     1            1           42s

Check the Pod

kubectl get pod -l=app.kubernetes.io/name=greeter-app-config

NAME                           READY   STATUS    RESTARTS   AGE
greeter-app-586b5d4ddc-wrqtb   1/1     Running   0          2m30s
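
To double-check that Rudr wired the greeting parameter through to the container as the GREETING environment variable, you can inspect the Pod spec. This is a hedged check - replace the Pod name with yours, and note that the output formatting depends on your kubectl version:

//inspect the environment of the container that Rudr created
kubectl get pod <pod name> -o jsonpath='{.spec.containers[0].env}'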

Finally, the Kubernetes Service resource

kubectl get service/greeter-app

NAME          TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
greeter-app   ClusterIP   10.0.135.117   <none>        8080/TCP   4m15s

Test the application

The simplest way to access the application is using port forwarding

Make sure you replace <pod name> with the name of your Pod:

kubectl port-forward pod/<pod name> 9090:8080

Forwarding from 127.0.0.1:9090 -> 8080
Forwarding from [::1]:9090 -> 8080
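
Alternatively, you can port-forward to the Service that Rudr created instead of a specific Pod (a hedged variation - it avoids having to look up the Pod name):

kubectl port-forward service/greeter-app 9090:8080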

Now you can simply curl the endpoint

curl localhost:9090

//output
Hello abhi_tweeter!

That's it! This was a very simple example of an application running in Kubernetes that was created using Rudr constructs only.

As an exercise, you can try creating the following ApplicationConfig, follow the steps outlined above to access the application, and see what the result is (there is a verification sketch after the manifest).

Note that this ApplicationConfig uses the parameterValues section to override the parameters defined in the ComponentSchematic.

apiVersion: core.oam.dev/v1alpha1
kind: ApplicationConfiguration
metadata:
  name: greeter-app-config-2
spec:
  components:
    - componentName: greeter-component
      instanceName: greeter-app-2
      parameterValues:
        - name: greeting
          value: foobar
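
If you want to verify your work for the exercise, here is a hedged sketch of the steps. It assumes you save the above manifest locally as app-config-2.yaml (the file name is just an example) and that Rudr labels the Pod after the ApplicationConfiguration name, as it did earlier:

kubectl apply -f app-config-2.yaml

//find the Pod
kubectl get pod -l=app.kubernetes.io/name=greeter-app-config-2

//port forward (replace the Pod name) and call the endpoint
kubectl port-forward pod/<pod name> 9091:8080
curl localhost:9091

//expected output, since greeting is overridden to foobar
Hello foobar!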

Using a Trait

In the previous example, the Deployment object which was created had one Pod (a single app instance). You can scale it up using the kubectl scale command, but I think you get the flow now - we will not do that! Let's make use of the Manual scaler Trait in Rudr to achieve this.
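
Rudr installs a traits CRD along with the others, so you can list the traits that are available (a hedged check - the exact set depends on the Rudr version you installed):

kubectl get traits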

We will continue to use the same ComponentSchematic and introduce a new ApplicationConfig definition to ensure that there are two replicas of our application. We will do this with the help of the manual scaler trait:

apiVersion: core.oam.dev/v1alpha1
kind: ApplicationConfiguration
metadata:
  name: greeter-app-config-3
spec:
  components:
    - componentName: greeter-component
      instanceName: scalable-greeter-app
      parameterValues:
        - name: greeting
          value: scalable
      traits:
        - name: manual-scaler
          properties:
            replicaCount: 2

The greeter-app-config-3 ApplicationConfiguration references the greeter-component ComponentSchematic. Notice the traits section where we use a manual-scaler and specify replicaCount as 2. Just to make sure we are able to differentiate this from the previous application, we override the parameter to pass in the value of greeting as scalable.

Create the ApplicationConfiguration

kubectl apply -f https://raw.githubusercontent.com/abhirockzz/rudr-k8s-sample/master/deploy/manual-scaler-trait/app-config.yaml

//output
applicationconfiguration.core.oam.dev/greeter-app-config-3 created

Wait for a few seconds and confirm that Rudr has triggered creation of the Kubernetes objects:

kubectl get applicationconfiguration.core.oam.dev/greeter-app-config-3 -o yaml

You should see a status section

status:
  components:
    greeter-component:
      deployment/scalable-greeter-app: running
      service/scalable-greeter-app: created
  phase: synced

If deployment/scalable-greeter-app is in an unavailable state, please retry after 10 seconds or so.
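
Alternatively, a hedged way to wait for the rollout to finish instead of retrying manually:

kubectl rollout status deployment/scalable-greeter-app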

Confirm the Deployment object

kubectl get deployment/scalable-greeter-app

NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
scalable-greeter-app   2/2     2            2           4m
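
To see that the manual-scaler trait was indeed translated into the Deployment's replica count (a hedged check):

kubectl get deployment scalable-greeter-app -o jsonpath='{.spec.replicas}'

//output
2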

Check the individual Pods as well

kubectl get pods -l=app.kubernetes.io/name=greeter-app-config-3

NAME                                    READY   STATUS    RESTARTS   AGE
scalable-greeter-app-6488f64cb4-mj6nw   1/1     Running   0          5m
scalable-greeter-app-6488f64cb4-rpfvp   1/1     Running   0          5m

To access the application, just run a one-off Pod with curl installed in it. Once you're inside the Pod, you can simply use curl $SCALABLE_GREETER_APP_SERVICE_HOST:$SCALABLE_GREETER_APP_SERVICE_PORT to invoke the endpoint of the application.

kubectl run curl --image=radial/busyboxplus:curl -i --tty --rm

[ root@curl-6bf6db5c4f-5hw6t:/ ]$ curl $SCALABLE_GREETER_APP_SERVICE_HOST:$SCALABLE_GREETER_APP_SERVICE_PORT
Hello scalable!

SCALABLE_GREETER_APP_SERVICE_HOST and SCALABLE_GREETER_APP_SERVICE_PORT are available as environment variables thanks to the ClusterIP Service created by Rudr.
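
Since the Service is also resolvable via cluster DNS, you could use its name directly from inside the one-off Pod instead of the environment variables (a hedged alternative, assuming everything runs in the same namespace):

curl scalable-greeter-app:8080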

The expected response is Hello scalable! since we had overridden the greeting parameter in the ApplicationConfiguration.

Clean up

You can use the az aks delete command to delete the entire AKS cluster, or delete the individual ApplicationConfiguration objects to trigger a cascading removal of all the Kubernetes resources associated with them (Deployment etc.). To remove the Rudr deployment, simply use helm delete rudr, and kubectl delete crd -l app.kubernetes.io/part-of=core.oam.dev if you also want to delete the Rudr CRDs (components, configurations etc.).
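
For reference, here is a hedged sketch of the cleanup commands (it assumes you created all three ApplicationConfigurations, including the one from the exercise):

//delete the application configurations (cascades to the Deployments, Services etc.)
kubectl delete applicationconfiguration greeter-app-config greeter-app-config-2 greeter-app-config-3

//remove the Rudr controller and its CRDs
helm delete rudr
kubectl delete crd -l app.kubernetes.io/part-of=core.oam.dev

//or simply delete the whole AKS cluster
az aks delete --resource-group <AZURE_RESOURCE_GROUP> --name <AKS_CLUSTER_NAME>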

That's all for this two-part series on the basics of the Open Application Model and Rudr, along with a hands-on example to get a feel for how to actually use it on Kubernetes. If you found this article helpful, please like and follow 🙌 Happy to get feedback via Twitter or feel free to drop a comment.
