Refactoring GitOps repository to support both real-time and reconciliation window changes

Restructuring a GitOps repository to enable multiple reconciliation types, e.g. real-time and reconciliation window changes, using the approach described in the previous part.

For some scenarios, allowing updates to be applied only during a reconciliation window is not enough.
There are cases where some application resources should be managed in real time, while others are still only allowed to change during a reconciliation window.
The example we use here is an nginx deployment to the cluster, which consists of a Deployment, a Service, and a ConfigMap manifest.
The ConfigMap, which defines the nginx.conf, should be manageable in real time. However, the Deployment and the Service should only be changed within a reconciliation window.

Hence, the problem statement changes slightly from the last part:

We want to enable two ways of applying changes to a cluster using Flux:

  • Real-time changes: Representing the default behavior of Flux when it comes to reconciling changes.
  • Reconciliation window changes: Predefined time windows in which a change can be applied to the resource by Flux.

We can still use the core approach shown here to solve our new problem. However, we need to make some adjustments to how we organize our GitOps repository to enable real-time as well as reconciliation window changes.

Even though we only demonstrate the restructuring of this GitOps repository for two reconciliation types, the approach can easily be extended to more. Just note that for each new type of reconciliation window, a corresponding set of CronJobs is needed to manage the new window, as sketched below.
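
As a refresher, such a pair of CronJobs could look roughly like the sketch below, following the suspend/resume approach from the first part. The names, schedule, image, and service account are assumptions for illustration, not the exact manifests from the sample repository:

# Hypothetical sketch: opens the reconciliation window on Monday 8 am by
# un-suspending the reconciliation-window Kustomization (apps-rw, created
# later in this post). A mirrored CronJob with schedule "0 17 * * 4" and
# '{"spec":{"suspend":true}}' closes it again on Thursday 5 pm.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: open-reconciliation-window # assumed name
  namespace: flux-system
spec:
  schedule: "0 8 * * 1" # every Monday at 8 am
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: reconciliation-window-manager # assumed RBAC setup
          restartPolicy: OnFailure
          containers:
            - name: resume
              image: bitnami/kubectl:latest # any image with kubectl works
              command:
                - kubectl
                - patch
                - kustomization
                - apps-rw
                - -n
                - flux-system
                - --type=merge
                - -p
                - '{"spec":{"suspend":false}}'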

Prerequisites:

  • IMPORTANT: If you haven't read the first part yet, go back and do so, as we will reuse its approach for enabling reconciliation windows in this blog.
  • Intermediate knowledge of Flux, Kustomize and K8s

Core Principles

Before we start restructuring the repository, it might be useful to understand why we have to do so in the first place.

As covered in the previous blog, to be able to control the reconciliation cycle differently for a group of resources, these resources need to be managed by an independent Kustomization resource.

Because of this, the goal of the following sections is:
"Restructure the GitOps repository such that its resources can be managed by one of the N Kustomization resources we will create,
where N is the number of schedules for applying changes."

As in this blog we are only interested in real-time and reconciliation window changes, N is equal to 2.

Set up

1. Set up your applications or components

Let's start with the smallest unit of grouping we have in our GitOps repository: apps

Looking at the example in this sample, under apps we have an nginx folder, which contains a Deployment, a Service, and a ConfigMap manifest.

apps
└── nginx
    ├── kustomization.yaml
    ├── deployment.yaml
    ├── service.yaml
    └── configmap.yaml

As mentioned, we now want to make sure we can change the nginx server configuration, defined in the configmap.yaml, in real time, while infrastructure changes such as the Deployment and the Service should only happen between Monday 8 am and Thursday 5 pm.

To enable this, the first step is to make sure we can split resources that can be changed in real time from resources that can only change state during a reconciliation window, from Kustomize's point of view.

Note: If you are not familiar with how Kustomize is used to manage resources, check out the official Kubernetes documentation at Overview of Kustomize

One of the ways we can achieve this is by splitting all the resources for each application we have defined under apps/ (see default GitOps folder structure for mono repos) into two versions. These versions' sole purpose is to package the resources to be managed by either the real-time or the reconciliation window Kustomization resource.

We can then split all manifest files into these two subfolders and add the respective suffixes to the subfolders:

  • Real-time changes: -rt
  • Reconciliation window changes: -rw

Original structure:

apps
└── nginx
    ├── kustomization.yaml
    ├── deployment.yaml
    ├── service.yaml
    └── configmap.yaml

Enabling real-time and reconciliation window changes:

apps
└── nginx
    ├── nginx-rt
    │   ├── kustomization.yaml
    │   └── configmap.yaml
    └── nginx-rw
        ├── kustomization.yaml
        ├── deployment.yaml
        └── service.yaml

You can see the result of this split in the sample repository here
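
For reference, the two new kustomization.yaml files could look like this minimal sketch (the actual files in the sample repository may differ slightly):

#apps/nginx/nginx-rt/kustomization.yaml
resources:
  - ./configmap.yaml

#apps/nginx/nginx-rw/kustomization.yaml
resources:
  - ./deployment.yaml
  - ./service.yaml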

2. Set up your clusters

The next step is to restructure the clusters directory. The goal is to create two independent Kustomization resources. This means we need two entry points, one for each Kustomization resource to point to.
For that, we split the previous apps folder into two subfolders, apps-rt and apps-rw,
where ./clusters/<cluster_name>/apps/apps-rt will be the entry point for the real-time Kustomization resource and ./clusters/<cluster_name>/apps/apps-rw for the reconciliation window one.

Original structure:

clusters/cluster-1
├── apps
│   └── nginx
└── infra
    └── reconciliation-windows

Enabling real-time and reconciliation window changes:

clusters/cluster-1
├── apps
│   ├── apps-rw
│   │   └── nginx
│   └── apps-rt
│       └── nginx
└── infra
    └── reconciliation-windows

Next, we need to add the kustomization.yaml files and make sure they reference the right resources.

Let's first have a look at the kustomization.yaml files in the clusters/cluster-1/apps/apps-rw and clusters/cluster-1/apps/apps-rt setup.
Both apps-rw and apps-rt will have a root kustomization.yaml which points to all applications deployed onto the cluster. In our example, this is only the nginx app.

Folder structure:

clusters/cluster-1
├── apps
│   ├── apps-rw
│   │   ├── kustomization.yaml
│   │   └── nginx
│   └── apps-rt
│       ├── kustomization.yaml
│       └── nginx
└── infra

The kustomization.yaml files:

#clusters/cluster-1/apps/apps-rw/kustomization.yaml
resources:
  - ./nginx
#clusters/cluster-1/apps/apps-rt/kustomization.yaml
resources:
  - ./nginx

Going one level deeper, both the nginx folders under clusters/cluster-1/apps/apps-rw and clusters/cluster-1/apps/apps-rt have a similar setup.
To not go over the same thing twice, we will only look at clusters/cluster-1/apps/apps-rt. To see the setup of apps-rw, you can check the sample here.

Folder structure:

clusters/cluster-1
├── apps
│   ├── apps-rw
│   └── apps-rt
│       ├── kustomization.yaml
│       └── nginx
│           ├── namespace.yaml
│           └── kustomization.yaml
└── infra

The kustomization.yaml files:

#clusters/cluster-1/apps/apps-rt/nginx/kustomization.yaml
resources:
  - ./../../../../../apps/nginx/nginx-rt
  - ./namespace.yaml

As shown above, the application resources referenced under clusters/cluster-1/apps/apps-rt are the resources we bundled up under apps/nginx/nginx-rt and should now only contain resources that can be changed in real-time.
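
The namespace.yaml referenced next to it is just a plain Namespace manifest. A minimal sketch (assuming the app runs in an nginx namespace, as in the demo below):

#clusters/cluster-1/apps/apps-rt/nginx/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: nginx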

And just like that you have separated all configurations to be managed by different Kustomization resources!

3. Set up Kustomization resources

Our GitOps repository is ready now, but how do we set up the Kustomization resources?
Let's first create a Flux Source resource.

flux create source git source \
    --url="https://github.com/<github-handle>/flux-reconciliation-windows-sample" \
    --username=<username> \
    --password=<PAT> \
    --branch=main \
    --interval=1m \
    --git-implementation=libgit2 \
    --silent

Next, we need two Kustomization resources for the apps and one for the infra components.

flux create kustomization infra \
    --path="./clusters/cluster-1/infra" \
    --source=source \
    --prune=true \
    --interval=1m
flux create kustomization apps-rt \
    --depends-on=infra \
    --path="./clusters/cluster-1/apps/apps-rt" \
    --source=source \
    --prune=true \
    --interval=1m
flux create kustomization apps-rw \
    --depends-on=apps-rt \
    --path="./clusters/cluster-1/apps/apps-rw" \
    --source=source \
    --prune=true \
    --interval=1m

Now this should give you something like this:

user@cluster:~$ flux get kustomization
NAME     REVISION      SUSPENDED  READY  MESSAGE
infra    main/7cf3aaf  False      True   Applied revision: main/7cf3aaf
apps-rt  main/7cf3aaf  False      True   Applied revision: main/7cf3aaf
apps-rw  main/7cf3aaf  False      True   Applied revision: main/7cf3aaf
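
Under the hood, the apps-rw command above creates a Kustomization resource roughly like the following sketch (the exact apiVersion and defaults depend on your Flux version):

apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: apps-rw
  namespace: flux-system
spec:
  interval: 1m0s
  dependsOn:
    - name: apps-rt
  path: ./clusters/cluster-1/apps/apps-rw
  prune: true
  sourceRef:
    kind: GitRepository
    name: source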

Demo

Now that the cluster is set up, we can upgrade the nginx version and change the nginx.conf configuration to include the nginx_status endpoint, and see how one change is visible right away while the other needs a reconciliation window to open.

1. Initial state

Before we do any changes, we can check out the current state of the nginx deployment.
Get the public IP address of the machine you are running your cluster on and navigate to http://<ip>:8080/. We should see something like this.

Note: if you are running the cluster locally, you can replace the IP with localhost

The nginx landing page

We can download the nginx.conf file by clicking on it and see what configuration is currently mounted into the nginx pod from the ConfigMap.

2. Change state

The next step is to change the state of our application.
To do so, we can bump the image version from 1.14.2 to the (currently) newest image 1.23.3 inside apps/nginx/nginx-rw/deployment.yaml. In the same commit, we can add the configuration shown below to the nginx.conf section in the apps/nginx/nginx-rt/configmap.yaml file to include the new status endpoint.

location /nginx_status {
    stub_status;
    allow all;
}
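
For reference, the corresponding change in the Deployment manifest is just the image tag bump. A short excerpt (the container name here is an assumption; surrounding fields are omitted):

#apps/nginx/nginx-rw/deployment.yaml (excerpt)
spec:
  template:
    spec:
      containers:
        - name: nginx
          image: nginx:1.23.3 # bumped from nginx:1.14.2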

3. See real-time changes

Now if we go back to the browser, refresh the page and re-download the file nginx.conf, we should see the new section we just added.

Note: It might take up to 2 minutes in the worst case for the Source and then the Kustomization resource to reconcile.
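
If you don't want to wait for the next interval, you can also trigger the reconciliation manually with the Flux CLI:

flux reconcile kustomization apps-rt --with-source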

4. Wait for reconciliation window to open

If we now wait until the next reconciliation window opens, the pod should be restarted and we should be able to see the new version. We can either check the resource directly:

kubectl describe pod <nginx-podname> -n nginx

Or, if you don't want to access the machine directly, you can go to a non-existing route in the browser, e.g. http://<ip>:8080/settings/. There you should see a standard nginx 404 page, which shows the currently deployed version at the bottom.
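
Alternatively, you can read the image tag straight from the Deployment itself (a hypothetical one-liner; it assumes the Deployment is named nginx):

kubectl get deployment nginx -n nginx -o jsonpath='{.spec.template.spec.containers[0].image}'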

Conclusions

Let's summarize what we did when it came to restructuring the repository.

  1. We separated all application resources into two sub-versions: one for resources that can be changed in real time and one for resources that can only be changed when a reconciliation window is open.

  2. We split the clusters directory in such a way that we can create two independent Kustomization resources, each referencing one of the application sub-versions.

After this, we could create the infra and the two apps Kustomization resources and start using the solution, as demonstrated.

So, at its core, it boils down to separating the resource definitions in such a way that each is only managed by one of the Kustomization resources created. This can be done as shown above, or slightly differently to fit your needs.

But hopefully, after this second part, you are good to go on using these reconciliation windows and know how to tweak the setup to fit your use case :)
