Abhishek Gupta for ITNEXT

Building Cloud native apps: Intro to Open Application Model and Rudr

Production grade, distributed, cloud native microservices are typically built and operated by multiple co-operating teams. This is easier said than done! Given the scale and complexity, there are many ways for issues to creep in, and not always for technical reasons. Other factors include a lack of segregation of duties, overlapping roles and responsibilities, etc., making it critical to clearly distinguish the areas each team or group is responsible for - a typical example being that of application developers and operations.

This is an intro-level blog post to help familiarize you with a couple of projects in the cloud native app development area: Open Application Model and Rudr. You will get an overview of these projects and their core concepts.

The subsequent posts (in this series) will build on top of the concepts discussed in this post and cover practical examples.

Open Application Model: What and Why?

Open Application Model (OAM) adopts the "separation of concerns" principle in an attempt to solve this problem. It takes into account the following roles and responsibilities in order to define a specification for application development:

  • Application Developers: whose responsibility is to build, describe and configure applications.
  • Application Operators: work on the platform and are responsible for configuring the runtime components of one or more of these microservices.
  • Infrastructure Operators: set up and maintain the core infrastructure on top of which these applications run.

In simple words, OAM is a specification for building cloud native applications. It provides a platform-neutral way of defining these applications by abstracting away factors such as cloud providers, orchestration platforms, etc. The standard it defines can then pave the way for different implementations.

An example of one such implementation is Rudr, which is covered in this blog post.

Open Application Model: Core Concepts

Open Application Model defines the following components - think of them as abstract concepts which can have different implementations:

  • Component Schematic: It is used by a developer to describe an application or service. You can think of the Component Schematic as a "class", since it is just a blueprint of the application.
  • Application Configuration: It is a combination of components, traits and application scopes which is defined by an application operator. Think of an application configuration as the way of creating multiple "instances" of a class (Component Schematic), each with different properties.
  • Traits: An application operator can assign add-on features to components in order to satisfy cross-cutting requirements.
  • Workload: It is used to define the runtime type for a particular component.
  • Application Scopes: They provide a way to group components with common characteristics into loosely coupled applications.

Now that we have a basic idea of OAM and its terminology, let's dive into Rudr.

Hello Rudr!

Rudr is a Kubernetes-specific implementation of the OAM specification. It defines higher level primitives to provide a layer of abstraction on top of Kubernetes. Rudr implements OAM concepts (such as components, traits, etc.) using Kubernetes resources such as Deployments, Services, Ingresses, etc. Rudr allows you to define OAM entities using YAML manifests which it internally maps to Kubernetes Custom Resource Definitions (CRDs).

The controller/operator pattern is used to convert these CRDs into concrete Kubernetes resources.
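
If you want to verify that Rudr is wired into your cluster, you can list the CRDs it registers. The exact set (and names) of these CRDs can differ between Rudr versions, so treat the output of this command as version-specific:

kubectl get crds | grep oam.dev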

Rudr fulfills the OAM promise of providing a way for developers to define applications and related objects (such as application configurations, traits, etc.), and for operators to define operational capabilities.

Since the previous section covered the OAM concepts, now is a good time to dive into their nitty-gritty from a Rudr point of view.

Rudr Custom Resource Definitions (CRDs)

This section will provide an outline of the Custom Resource Definitions used by Rudr.

Component

A component describes the characteristics of a particular service or application - it is represented by the ComponentSchematic CRD. Here is an example:

apiVersion: core.oam.dev/v1alpha1
kind: ComponentSchematic
metadata:
  name: greeter-component
spec:
  workloadType: core.oam.dev/v1alpha1.Server
  containers:
    - name: greeter
      image: abhirockzz/greeter-go
      env:
        - name: GREETING
          fromParam: greeting
      ports:
        - protocol: TCP
          containerPort: 8080
          name: http
      resources:
        cpu:
          required: 0.1
        memory:
          required: "128"
  parameters:
    - name: greeting
      type: string
      default: abhi_tweeter

The sections of a ComponentSchematic CRD are:

  • metadata: Used to provide basic information such as name, labels and annotations.
  • workloadType: It is used to indicate how the developer wants to run this component. Valid options include: Server, Singleton Server, Task, Singleton Task, Worker, Singleton Worker.
  • containers: Resembles a Kubernetes container spec and is used to define the runtime configuration required to run a containerized workload for the component.
  • parameters: This is an optional section which is used to provide configuration options for the component.
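
Assuming you save the manifest above in a file (say greeter-component.yaml - the file name is just an example), registering the component works like any other Kubernetes resource. The plural resource name componentschematics is how the Rudr CRD is typically exposed, but you can confirm it with kubectl api-resources:

kubectl apply -f greeter-component.yaml
kubectl get componentschematics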

Workloads

As mentioned above, Rudr defines six core workload types. Workload types are simply a field within a component. In addition to the core workloads, extended types can be defined by either implementing one and adding it directly to Rudr (not so flexible) or using a CRD-based approach to avoid code changes to Rudr itself.

In the above ComponentSchematic, the workloadType was defined as core.oam.dev/v1alpha1.Server.
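
As an illustration, a component that just processes work in the background (and does not need to expose a network endpoint) could declare the Worker workload type instead. A minimal sketch, with a placeholder image name:

apiVersion: core.oam.dev/v1alpha1
kind: ComponentSchematic
metadata:
  name: queue-processor-component
spec:
  workloadType: core.oam.dev/v1alpha1.Worker
  containers:
    - name: queue-processor
      image: example/queue-processor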

Application Configuration

A ComponentSchematic does not do anything meaningful by itself. A Rudr-based service comes to life using an ApplicationConfiguration, which is associated with one or more components and defines how an application is to be instantiated and configured, including parameter overrides and add-on traits.

Here is an example:

apiVersion: core.oam.dev/v1alpha1
kind: ApplicationConfiguration
metadata:
  name: greeter-app-config
spec:
  components:
    - componentName: greeter-component
      instanceName: greeter-app

It has these sections:

  • metadata: used to provide basic info such as name, labels, and annotations.
  • components: used to reference one or more ComponentSchematics, override their parameters (optional; an example follows this list) and define additional traits.
  • traits: you can apply one or more functionalities to a component using this section e.g. auto scaling, ingress, etc.
  • scopes: you can group multiple components under a scope.
  • variables: common values that can be substituted into multiple other locations of the application configuration.
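
For instance, to override the greeting parameter exposed by the greeter-component from the previous section, the component entry can carry a parameterValues list. Here is a minimal sketch - the field names follow the OAM v1alpha1 specification as I understand it, so double check them against the Rudr docs:

apiVersion: core.oam.dev/v1alpha1
kind: ApplicationConfiguration
metadata:
  name: greeter-app-config
spec:
  components:
    - componentName: greeter-component
      instanceName: greeter-app
      parameterValues:
        - name: greeting
          value: hello-oam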

Trait

Traits represent features of the system that are operational concerns, as opposed to developer concerns. You can associate them with a component instance to provide additional capabilities such as auto-scaling, persistence, etc.

Here is an example of an ApplicationConfiguration that attaches the volume-mounter trait to a component:

apiVersion: core.oam.dev/v1alpha1
kind: ApplicationConfiguration
metadata:
  name: pv-example
spec:
  components:
    - componentName: rudr-pvc
      instanceName: rudr-pvc1
      traits:
        - name: volume-mounter
          properties:
            volumeName: config-data-vol
            storageClass: default

Each trait has a different set of attributes that are a part of the properties section. At the time of writing, Rudr supports the following traits, each applicable to specific workload types:

  • Manual Scaler - Server, Task
  • Autoscaler - Server, Task
  • Ingress - Server, SingletonServer
  • Volume Mounter - all core workloads

Use kubectl get traits to get a list of supported traits.
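
As an illustration, here is roughly what attaching the ingress trait to the greeter component instance could look like. The property names (hostname, path, servicePort) are assumptions for the sake of the example - verify them against the trait schema reported by your Rudr version:

apiVersion: core.oam.dev/v1alpha1
kind: ApplicationConfiguration
metadata:
  name: greeter-app-config
spec:
  components:
    - componentName: greeter-component
      instanceName: greeter-app
      traits:
        - name: ingress
          properties:
            hostname: greeter.example.com
            path: /
            servicePort: 8080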

Scope

At the time of writing, Rudr supports a Health Scope and a Network Scope. You can assign them to instances of component workloads in the ApplicationConfiguration file, for example, to periodically check the aggregate health of the components within your application.

Use kubectl get scopes to get a list of supported scopes.
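
To give a rough idea, here is a minimal sketch of a component instance being associated with a health scope from within an ApplicationConfiguration. Both the scope instance name and the applicationScopes field are assumptions for illustration - consult the Rudr scopes documentation for the exact schema:

apiVersion: core.oam.dev/v1alpha1
kind: ApplicationConfiguration
metadata:
  name: greeter-app-config
spec:
  components:
    - componentName: greeter-component
      instanceName: greeter-app
      applicationScopes:
        - my-health-scope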

If you found this interesting and want to dive in deeper, please check out the Open Application Model specification and the Rudr project documentation.

That's enough theory for now! Upcoming posts will help reinforce these concepts with the help of concrete examples. Stay tuned!
