Since day one of administrating OpenShift last year, one of the core things I wanted to tackle with the team was how to manage all the customization we would have to do. I'm used to automation software, from bare shell scripts (yes, this is automation :) to things like Puppet, CFEngine or Ansible, and I can no longer imagine managing an application, a fleet of nodes or a cluster without being able to automate the deployment of the configuration.
So here I share - not in detail - what we did in our team. I'll split the explanation over a couple of posts. The approach is quite simple but effective: we recently started running this job regularly from AWX in check mode to observe configuration drift, and we may switch it to run mode sooner or later.
We started by taking stock of the current status and the requirements with the colleagues (it is ALWAYS important to do so):
- We will administer several clusters.
- A big part of the configuration will be the same for all clusters.
- We must be able to deploy cluster-specific manifests for each cluster.
- The same manifests will be deployed on several clusters but may differ slightly (quota values, for instance), and we don't want to store duplicates of these manifests just for the sake of different values inside. So we must be able to have placeholders in the manifests which get filled with each cluster's values.
- We must be able to add manifests but also to remove manifests from the cluster.
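The placeholder requirement maps naturally onto templated manifests rendered per cluster (in Ansible this is Jinja2 and the `template` module). As a minimal sketch of the idea - with Python's `string.Template` standing in for Jinja2, and a hypothetical quota manifest and variable name:

```python
from string import Template

# A hypothetical quota manifest with a placeholder instead of a
# hard-coded value; in the real setup this would be a Jinja2 template
# rendered by Ansible from the cluster's config file.
manifest = Template("""\
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
spec:
  hard:
    limits.cpu: "$quota_cpu"
""")

# Per-cluster values, as they would come from config/dev.yml and config/prod.yml.
clusters = {
    "dev": {"quota_cpu": "8"},
    "prod": {"quota_cpu": "64"},
}

# The same manifest source yields a different rendered document per cluster.
for cluster, values in clusters.items():
    print(cluster)
    print(manifest.substitute(values))
```

One manifest file on disk, one rendered document per cluster - which is exactly why we don't need to store near-identical copies.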
Here is an example of the structure we chose:
```
project-dir
├── config
│   ├── common.yml
│   ├── dev.yml
│   └── prod.yml
└── manifests
    ├── common
    │   ├── 05_authentication
    │   ├── 10_feature_foo
    │   └── 30_feature_baz
    ├── dev
    │   ├── 40_feature_dev_buz
    │   └── 60_feature_dev_buu
    └── prod
        ├── 50_feature_prod_bar
        └── 70_feature_prod_boo
```
The `config` directory contains a YAML file for each cluster, with the connection details and all cluster-specific variables; the file `common.yml` contains the common and default variables.
```yaml
# prod config file
connection:
  - url: https://prd.prj.domain.tld
    token: !vault....
ldap_connection:
  - bind_user: cn=ldap-prd-user,...
    bind_password: !vault...
authorized_groups:
  - group-dev-1
  - group-dev-2
  - group-dev-3
# other prod specific variables
```
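When a playbook targets one cluster, it loads `common.yml` first and then the cluster file, so cluster-specific values override the common defaults - Ansible's variable precedence gives this for free when the files are included in that order. A minimal Python illustration of the override behaviour (the keys here are hypothetical):

```python
# Defaults shared by every cluster, as in config/common.yml.
common = {
    "ldap_timeout": 30,
    "default_quota_cpu": "4",
}

# Cluster-specific values, as in config/prod.yml; these win on conflict.
prod = {
    "default_quota_cpu": "64",
}

# Later sources override earlier ones, mirroring the load order.
effective = {**common, **prod}
print(effective)  # {'ldap_timeout': 30, 'default_quota_cpu': '64'}
```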
We then thought about a naming convention for the manifest files - not to enforce the content of the file (with the exception of the `status` field), but to make it easy to spot a manifest file on disk. We came up with a convention built from the following fields:

- `XX` is a number which helps apply the manifests in a specific order.
- `kind` is the kind of the manifest.
- `name` is the value of the `metadata.name` field of the object to be modified.
- `namespace` is the namespace the object belongs to. If the object is not namespaced, like a namespace or a clusterrole, we set this field to a fixed value.
- `status` has either the value `present`, to ensure the object exists, or `absent`, to ensure it does not exist. This value from the file name is passed to the `k8s` module and enforces the state of the manifest.
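Assuming the fields are joined with underscores (the exact separator is a detail of the convention, and the file name below is hypothetical), a small parser can recover everything the playbook needs from a file name, including the `state` value to hand to the `k8s` module:

```python
import os

def parse_manifest_name(filename):
    """Split a manifest file name into the fields encoded by the
    naming convention: order prefix, kind, object name, namespace
    and desired state (present/absent)."""
    stem, _ = os.path.splitext(filename)
    order, kind, name, namespace, status = stem.split("_")
    return {
        "order": int(order),
        "kind": kind,
        "name": name,
        "namespace": namespace,
        "state": status,  # handed to the k8s module's state parameter
    }

# A hypothetical file name following the convention.
info = parse_manifest_name("40_deployment_fluentd_cluster-logging_present.yml")
print(info["kind"], info["state"])  # deployment present
```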
To better understand how it works, let's have a look at a few examples:

- a manifest file to create a namespace `foobar` would be named `…`
- a deployment named `fluentd` in the namespace `cluster-logging` would be named `…`
- a manifest to delete a `dev-hideout` object would be named `…`
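The numeric prefix then gives a cheap, deterministic apply order: once the common and cluster-specific files are gathered, sorting on the leading number is enough before looping over them. A sketch with hypothetical file names (again assuming underscore-separated fields):

```python
# Hypothetical manifests gathered from manifests/common and a cluster directory.
files = [
    "60_deployment_buu_dev-tools_present.yml",
    "05_secret_htpasswd_openshift-config_present.yml",
    "40_limitrange_baz_dev-tools_present.yml",
]

# Sort on the numeric prefix so low-numbered manifests apply first,
# e.g. authentication before the features that depend on it.
ordered = sorted(files, key=lambda f: int(f.split("_", 1)[0]))
print(ordered[0])  # 05_secret_htpasswd_openshift-config_present.yml
```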
Now that we have an overview of the structure of the project, the next post will explain the Ansible playbook.