Everyone knows it: granting privileges is always a balancing act between security, usability, and maintenance effort. If permissions are granted generously, the effort is low and users rarely hit obstacles, but security suffers. If permissions are granted sparingly, security is higher, but processes become slow and the administrative overhead grows.
Kubernetes offers many possibilities with its Role-based access control (RBAC), which is also extensively documented (https://kubernetes.io/docs/reference/access-authn-authz/rbac/). Unfortunately, there are few practical tips for actually implementing it. To get out of this predicament, we have written plugins that use a Kubernetes sudo context to provide a simple but effective entry point for managing permissions. This blog article illustrates how to install the plugins and configure the cluster, using a managed Kubernetes cluster on Google Cloud Platform as an example.
Permissions of developers in the cluster
The solution presented here refers to the permissions of humans, since applications should generally only have read-only access to their own Secrets and ConfigMaps in the cluster, which makes them straightforward from the point of view of assigning permissions. Permissions for developers, however, are much more complex, because people’s tasks and roles change over time and because permissions also serve to protect people from careless mistakes. This results in the high maintenance effort mentioned at the beginning.
A solution with minimal maintenance effort is to give all developers the same extensive permissions. However, this creates the risk that harmful changes can be made accidentally at any time. Especially in production environments, this can lead to critical downtime and even data loss.
That’s why we decided on an approach that temporarily uses additional privileges to execute individual commands, similar to the sudo command on Linux.
Implementing the least-privilege approach using sudo-context
Implementing sudo-style permissions requires three steps:
- Using the impersonate feature of Kubernetes
- Setting up the sudo context
- Granting permissions in the cluster
We will now describe these steps in detail.
Setting up the impersonate feature
The sudo function is based on the “impersonation” feature of the Kubernetes API. It allows commands to be executed as a different user, group, or service account. The first step is to enable the sudo function in the cluster; depending on the use case, there are different ways to do this. The following sections describe how they are installed and configured on the client and cluster side.
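Under the hood, this is nothing more than the impersonation parameters of kubectl. A minimal sketch of what the tools described below effectively run (assuming your user is allowed to impersonate the system:masters group, which is set up later in this article):
kubectl get pods --as="$USER" --as-group=system:masters # single command with elevated rights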
kubectl-sudo plugin
With the kubectl-sudo plugin, kubectl commands that require more extensive rights can be executed explicitly as a member of the admin group. This reduces the chance of accidentally modifying or deleting resources on the cluster, for example when running scripts or being in the wrong namespace.
The plugin only works for kubectl; other tools that use the kubeconfig (helm, fluxctl, k9s, etc.) cannot use it. Here is a simple example of how to use the plugin:
kubectl sudo get pod
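The effect becomes visible when the regular context lacks write rights. A small sketch (the deployment name myapp is just an example, assuming regular rights are read-only):
kubectl delete deployment myapp # fails with "forbidden" under regular rights
kubectl sudo delete deployment myapp # succeeds via impersonation of system:masters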
helm-sudo plugin
In the Kubernetes environment, Helm charts are very important. To make the same functionality available for Helm, we have developed a corresponding plugin that is used analogously to kubectl-sudo. Here is an example for the Helm plugin:
helm sudo list
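Any Helm command can be elevated this way; for example (release and chart names are placeholders):
helm sudo list --all-namespaces # list releases across namespaces with elevated rights
helm sudo upgrade myrelease ./mychart # run an upgrade that regular rights would not allow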
sudo context for other tools
For all other tools, such as fluxctl or k9s, there is alternatively the possibility to create a sudo context in the kubeconfig. This can then be used as follows:
kubectl --context SUDO-mycontext # alternative to kubectl-sudo
kgpo --context SUDO-mycontext # also works with aliases!
helm --kube-context SUDO-mycontext
fluxctl --context SUDO-mycontext
k9s --context SUDO-mycontext # the context can also be switched within k9s via ":ctx"
If auto-completion is installed for these tools, it automatically detects the available contexts, and they can be selected with Tab.
Attention: When using the sudo context, always make sure to specify the namespace in which a command is to be executed. By default, only the current namespace is stored in the kubeconfig when the context is set up. If a command is to be executed in a different namespace, this must be specified explicitly with a parameter:
kubectl --context SUDO-mycontext --namespace mynamespace get secret
The sudo context should only ever be passed as a parameter. It should never be set as the active context, as this would grant permanent admin rights and thus undermine the protection against accidental changes. This is analogous to the sudo su command on Linux, after which a user has all permissions and nothing stops them from performing risky actions.
However, neither kubectl sudo nor helm sudo requires the namespace to be specified each time; their commands are always executed in the current namespace of the current context. For kubectl and helm, the sudo plugins are therefore preferable.
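A typical workflow is thus to set the namespace once and let the plugins pick it up. A small sketch (mynamespace is a placeholder):
kubectl config set-context --current --namespace=mynamespace # switch the current namespace once
kubectl sudo get secret # now runs in mynamespace without further flags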
Setting up the local tools
To create a sudo context, this script is available:
wget -P /tmp/ "https://raw.githubusercontent.com/cloudogu/sudo-kubeconfig/0.1.0/create-sudo-kubeconfig.sh"
With it, only these steps are necessary to interactively create a kubeconfig for the currently selected context:
chmod +x /tmp/create-sudo-kubeconfig.sh
/tmp/create-sudo-kubeconfig.sh
kubectl --context SUDO-mycontext get pod
If needed, the two plugins already mentioned, kubectl-sudo and helm-sudo, can be installed via bash:
# Optional: install kubectl-sudo
sudo bash -c 'curl -fSL https://raw.githubusercontent.com/postfinance/kubectl-sudo/master/bash/kubectl-sudo -o /usr/bin/kubectl-sudo && chmod a+x /usr/bin/kubectl-sudo'
kubectl sudo get pod
# Optional: install helm-sudo
helm plugin install https://github.com/cloudogu/helm-sudo --version=0.0.2
helm sudo list
Technical realization of the authorization
Now that the prerequisites for using the sudo function have been created on the local computer, the permissions must be set up in the cluster. We will show the steps of the technical realization using a managed Kubernetes cluster on Google Cloud Platform as an example.
RBAC
With the sudo function, we can slip into the role of other users, groups, and service accounts to execute commands (impersonation). For impersonation to grant us broader rights, it must itself be authorized using Kubernetes’ Role-based access control (RBAC).
Impersonation is implemented by:
- kubectl-sudo: kubectl --as=$USER --as-group=system:masters "$@"
- helm-sudo: helm --kube-as-user ${USER} --kube-as-group system:masters "$@"
- create-sudo-kubeconfig.sh: as-groups: [ system:masters ]
In an existing k8s cluster, two resources must be created to give users access to the impersonate feature:
- A ClusterRole that allows the impersonate feature to be used.
- A ClusterRoleBinding that allows individual users or groups to use the previously created ClusterRole.
# sudoer.yaml
# Creates a ClusterRole which allows to impersonate users,
# groups and serviceaccounts
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: sudoer
rules:
- apiGroups: [""]
  verbs: ["impersonate"]
  resources: ["users", "groups", "serviceaccounts"]
# cluster-sudoers.yaml
# Allows users to use kubectl sudo on all resources in the cluster
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-sudoers
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: sudoer
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: admins@email.com
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: user1@email.com
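Both manifests can now be applied and the result verified with kubectl auth can-i. A short sketch (the last check should only answer “yes” when impersonation is used):
kubectl apply -f sudoer.yaml -f cluster-sudoers.yaml
kubectl auth can-i impersonate groups # "yes" for the subjects listed above
kubectl auth can-i delete pods # "no" if the regular rights do not allow it
kubectl sudo auth can-i delete pods # "yes" via system:masters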
In the current ClusterRoleBinding, everyone listed there has sudo permissions in all namespaces. It is also possible to use multiple bindings and thus grant sudo permissions only for certain namespaces. This is a good approach if, for example, different teams have separate namespaces. To do this, a namespaced RoleBinding is used instead of the ClusterRoleBinding, with the additional attribute namespace: namespace-name under metadata.
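One way to realize this, sketched here with hypothetical names (team-a, team-a-admins; the resourceNames restriction follows the pattern shown in the Kubernetes RBAC documentation): the impersonation right is limited to a per-team group, and that group only receives admin rights in its own namespace. The sudo context then has to impersonate team-a-admins instead of system:masters.
# team-a-sudoers.yaml (sketch with example names)
# Only allows impersonating the group "team-a-admins"
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: team-a-impersonator
rules:
- apiGroups: [""]
  verbs: ["impersonate"]
  resources: ["groups"]
  resourceNames: ["team-a-admins"]
---
# Gives the impersonated group admin rights only in the namespace team-a
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-sudoers
  namespace: team-a
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin # built-in user-facing ClusterRole
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: team-a-admins
In addition, the team’s developers need a ClusterRoleBinding to team-a-impersonator, analogous to cluster-sudoers above.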
At first glance, it may look as if anonymous changes to the cluster are now possible because a different role is assumed. However, the audit logs in the Google Cloud Platform still record the actual user principal behind each change, so it remains traceable which user did what. This works similarly in the managed clusters of other cloud providers.
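For GKE, such a lookup could be done with gcloud, for example (a sketch; the filter values are examples):
gcloud logging read 'resource.type="k8s_cluster" AND protoPayload.authenticationInfo.principalEmail="user1@email.com"' --limit 5 # shows the real principal despite impersonation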
Google Cloud Platform (GCP)
In order for RBAC and the adjustments just described to take effect in GCP at all, the generous default authorization of Google Cloud must first be bypassed. People who have the role Owner, Editor, or Kubernetes Engine Admin in GCP are normally allowed to execute anything in the cluster, even if RBAC does not explicitly permit it.
Under IAM, therefore, a custom role must be created once in the GCP that only allows authentication to the Kubernetes cluster. This role, which we call “Kubernetes Engine Authentication”, is assigned the following permissions:
- container.apiServices.get
- container.apiServices.list
- container.clusters.get
- container.clusters.getCredentials
This role is now assigned to all users who need access to the cluster; all further permissions are then granted via RBAC in the cluster. The role can also be assigned to entire groups, which in turn are managed via G Suite. However, for this to work, the propagation of groups must be enabled when the cluster is created (Google Groups for RBAC). Unfortunately, this setting cannot be activated retrospectively for an existing cluster; the cluster has to be recreated for that.
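Both steps can be scripted with gcloud. A sketch with placeholder project, cluster, and domain names:
# Create the custom role with only the four authentication permissions
gcloud iam roles create kubernetes_engine_authentication --project=my-project --title="Kubernetes Engine Authentication" --permissions=container.apiServices.get,container.apiServices.list,container.clusters.get,container.clusters.getCredentials
# Google Groups for RBAC can only be set when the cluster is created
gcloud container clusters create my-cluster --security-group="gke-security-groups@my-domain.com"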
Conclusion
By using RBAC and the sudo context, the effort for maintaining permissions and the level of security are well balanced. On the one hand, there is no need to maintain permissions for each individual person; on the other hand, the risk of unwanted changes, e.g. because one is in the wrong namespace, is significantly reduced.
Let’s take this scenario: a developer is testing changes to a deployment in their local dev cluster. Over the course of the workday, they also do minor work on the production cluster in GCP. At the end of the day, the developer forgets to switch back to the local context and tries to delete the test deployment. Something like this has certainly happened to many people before, and it can lead to downtime or, in the worst case, data loss.
However, if changes to the production cluster can only be applied using the sudo context or the sudo plugins, the accidental deletion fails and the developer notices the mistake. Susceptibility to accidental errors thus decreases while ease of use, simple implementation, and a high level of security are maintained. Since we started using RBAC and the associated sudo context ourselves at Cloudogu, we have been working much more safely on our Kubernetes clusters.