Lucas Severo Alves

Yet another rant in favor of Kubernetes Secrets

Translation to pt-br at: https://knela.dev/blog/mais-um-desabafo-em-favor-dos-segredos-do-kubernetes

Back in the day, Kubernetes Secrets had many engineers scratching their heads, thinking "Is this a joke?" Secret values are stored base64 encoded, and base64 is not encryption at all, just an encoding; anybody can decode it. I must admit I was one of those engineers. My opinion has changed, but I can understand the feeling most people have about these objects, however wrong the underlying premise is.

Photo by Sai De Silva on Unsplash

Why are Kubernetes secrets base64 encoded? (And why that's ok)

The base64 encoding of Kubernetes Secrets was never intended as a security measure. It does not exist to obscure or hide any values. The encoding was chosen so that Secrets could accommodate binary data. If Secrets were structured as simple string maps, perhaps there would be less confusion, but that isn't the point. When people notice the base64 encoding, they often assume it is a flawed attempt at protecting a value, and even go so far as to regard it as a mistake. As we just saw, that interpretation isn't accurate.
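As a quick illustration (a minimal sketch; the Secret name and values here are made up), the data field of a Secret holds base64 strings anyone can reverse, while stringData accepts plain text and is encoded for you on write:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: demo-credentials          # hypothetical name
type: Opaque
data:
  # produced with: echo -n 'not-encrypted' | base64
  # anyone can reverse it with: base64 -d
  password: bm90LWVuY3J5cHRlZA==
stringData:
  # plain text here; the API server base64 encodes it when storing
  api-token: also-not-encrypted
```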


How to make Kubernetes Secrets secure?

First, RBAC (Role-Based Access Control): restrict which users or teams have access to Secrets, or block that access completely. Give access to a specific Secret only to the service accounts that need it.
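As a minimal sketch (the namespace, service account, and Secret names are hypothetical), a Role and RoleBinding that let a single service account read a single Secret could look like this:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-app-credentials
  namespace: team-a
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["app-credentials"]   # only this one Secret
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-app-credentials
  namespace: team-a
subjects:
  - kind: ServiceAccount
    name: app-sa
    namespace: team-a
roleRef:
  kind: Role
  name: read-app-credentials
  apiGroup: rbac.authorization.k8s.io
```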

If you have a policy engine: include policies that don't allow Secrets to be globally accessible. Come up with policies that make sense in your context, blocking or alerting when someone violates your guidelines. Maybe you just want to keep teams from accessing other teams' Secrets; maybe your control is more fine-grained. That is for you to figure out.
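For example, with Kyverno (just one possible policy engine; the policy below is an illustrative starting point, intentionally coarse, and would need tuning and testing in your own cluster), you could flag Roles and ClusterRoles that combine access to Secrets with wildcard verbs:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-wildcard-secret-access   # hypothetical policy name
spec:
  validationFailureAction: Audit          # switch to Enforce once you trust it
  background: true
  rules:
    - name: no-wildcard-verbs-with-secrets
      match:
        any:
          - resources:
              kinds:
                - Role
                - ClusterRole
      validate:
        message: "Roles touching Secrets must not use wildcard verbs."
        deny:
          conditions:
            all:
              # coarse check: the object references secrets somewhere...
              - key: "{{ contains(request.object.rules[].resources[] || `[]`, 'secrets') }}"
                operator: Equals
                value: true
              # ...and uses a wildcard verb somewhere
              - key: "{{ contains(request.object.rules[].verbs[] || `[]`, '*') }}"
                operator: Equals
                value: true
```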

Encrypt etcd at rest: this is how you safeguard secret values stored within etcd, and you shouldn't worry about the value being easy to decode once you retrieve it. When you get a value from the kube API server with curl or kubectl and it feels odd that it is merely base64, pause to consider: when you are authenticated and authorized to do so, isn't obtaining a secret from a source you deem secure essentially the same thing? For instance, if you curl HashiCorp Vault (or Azure Key Vault, or any other equivalent), armed with your proper keys and certificates, isn't the response plain text? (I know it is obvious; I think we just forget about this.) The value needs to be in plain text at some point for the consumer, the application that will use that credential, to do something with it. Our primary objective should be to ensure the secure storage and transmission of these values (though the latter won't be the focus of this article). We shouldn't worry unnecessarily about the data being displayed in plain text, provided the request comes from an authenticated and authorized entity that requires this information.
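For reference, this is a minimal sketch of an EncryptionConfiguration handed to the kube-apiserver via --encryption-provider-config (the key name is arbitrary and the key value is a placeholder):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      # the first provider is used to encrypt new and updated Secrets
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>   # placeholder
      # identity allows reading Secrets written before encryption was enabled
      - identity: {}
```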

Photo by Pete Alexopoulos on Unsplash

Encrypting etcd alone is not enough

Most guides that teach you how to encrypt etcd at rest leave you in a predicament: in a lot of cases the encryption keys are still stored on the same machines where your control plane components are running. If an etcd instance is compromised and the attacker has access to files on that machine, it is trivial to decrypt the whole database.

The preferred method of encrypting etcd values is to use a Key Management Service (KMS). This adds extra levels of indirection and another layer of enforcement on authorization: you store the encryption key, itself encrypted, in a remote service, and only authorized services or users can access and decrypt the key and then the data. This can still be mimicked by an attacker and reverse engineered into what they need, but it is still the recommended method as of this writing (you can always come up with more indirection, but complexity can also hurt security incident response, so it's better to stick to supported approaches).
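A sketch of the same EncryptionConfiguration using the KMS v2 provider instead (the plugin name and socket path are placeholders for whatever KMS plugin your cloud or platform ships):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - kms:
          apiVersion: v2
          name: my-kms-plugin                        # placeholder plugin name
          endpoint: unix:///var/run/kms/socket.sock  # placeholder socket path
          timeout: 3s
      - identity: {}
```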

A few other adjacent measures will be listed at the end of the article.

Photo by Ariel on Unsplash

Who uses Secrets?

Kubernetes 😂. Your control plane, right now, is using Kubernetes Native Secrets, and those secret values are of course stored in etcd. It does not matter which flavor of Kubernetes or which cloud provider you use. Attackers with access to those secrets can already cause a lot of damage if you don't protect them.

My reasoning for this is as follows: if you believe Kubernetes Secrets should be avoided, does this mean you are also discarding the secrets that inherently come with using Kubernetes? You are probably not doing that. It might be more pragmatic to acknowledge their existence and strategize ways to secure them, along with your other secrets, within this context.

Furthermore, Cloud Native tools from the Cloud Native Landscape will likely expect your secret values to be Kubernetes Native Secrets. If you wish to leverage the plug-and-play capabilities of the Kubernetes ecosystem, avoiding Secrets could lead to unnecessary difficulties. This choice would probably result in an excessive reliance on custom in-house solutions, which not only increase complexity but also introduce additional security risks.

Photo by Kirk Thornton on Unsplash

So, etcd is compromised

Let's try to explore a few premises and scenarios with regard to avoiding secrets.

The relevant assumptions when avoiding Kubernetes Secrets

The biggest argument against Kubernetes Secrets comes from the fact that if etcd is compromised, you basically have access to any secret value on it. And I can see how that can be very dangerous. But I would argue this is only relevant in a very unrealistic scenario:

  • Attacker has only read access to etcd (biggest gotcha in my opinion)
  • Attacker has encryption key (or etcd is unencrypted)
  • (Optional) Attacker has kubeconfig/credentials of a dev with limited permissions
    • can't pod exec into a container
    • can't create pods in important namespaces
    • can only get a few resources, logs, and some other minor permissions
  • Cluster is up to date (above 1.24) and does not have any legacy token Secrets for service accounts

If this scenario came to fruition (an attack happening exactly like this), avoiding Kubernetes Secrets would indeed have protected your environment. That's good.

So, why do I say this is an unrealistic scenario? Have you ever heard of a security breach where an attacker got only read access to etcd, without write? It would require the attacker to reach an etcd instance or endpoint with credentials of a user that only has read access, while also holding the encryption key. Another option: the attacker has access to the machine but, for some reason, only to the encryption key and not to the credentials. In my experience and opinion, this is not the scenario that actually happens. When etcd is compromised, it is completely compromised (I will touch on what happens when etcd is completely compromised later in this article).

The other requirements of this scenario are also important; avoiding Secrets would not help if ANY of them is not true. If an attacker can pod exec into any pod consuming a secret value (Kubernetes Native Secret or not) and grab a shell (because exec is not restricted or you are not using distroless images), that attacker now has a secret value. Similarly, if attackers can create pods using whatever k8s-secret-avoiding mechanism you came up with, with a secret value mounted, again, they get a secret.

Last but not least, on Kubernetes <1.24, service account tokens were stored in Secrets (instead of being issued through the TokenRequest API); a sketch of such a legacy token Secret follows below. Whichever k8s-secret-avoiding mechanism you might have been using probably relied on service account tokens or other credentials to talk to external vaults. An attacker with access to a compromised etcd could just use that token to fetch a remote secret.
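For context, this is roughly what a legacy, long-lived service account token Secret looks like; clusters before 1.24 created these automatically, and they can still be created manually today (the names here are made up):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: vault-reader-token        # hypothetical name
  namespace: team-a
  annotations:
    kubernetes.io/service-account.name: vault-reader
type: kubernetes.io/service-account-token
# The control plane fills in data.token, data.ca.crt and data.namespace;
# anyone who can read this Secret can act as that service account.
```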

Given that all these factors must align for avoiding Kubernetes Native Secrets to bear significance from a pure secret value protection standpoint, I would consider it niche, and not really relevant to the majority of setups around the globe.


The non-arguments

The previous scenario is a good example of when you are actually protected by not having Kubernetes Native Secrets, even though it is unrealistic. The other ones listed here are, in my opinion, not good arguments against using Secrets:

  • Saying you can easily get and decode the value through the kube api-server: that's also easily protected with RBAC. And whatever other way you use to mount a secret value into a pod, through an unprotected API you can still pod exec into that pod and get the secret values.

  • Saying Secrets as env vars are easily inspected: Kubernetes Native Secrets can be mounted as volumes instead (see the sketch after this list).

  • Saying that compromises unrelated to etcd could also leave secrets exposed: the risk exists just as much with any other k8s-secret-avoiding mechanism. Root access to nodes with RAM dumping or memory scanning, network sniffing, or anything like that. As we said before, applications have the final plain text secret because they need it like that. A memory scan (or similar breach), due to its intrusive nature, could reveal secret values regardless of whether they are Kubernetes Native Secrets or not.
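As referenced above, a minimal sketch of mounting a Secret as a volume instead of exposing it through environment variables (the pod, image, and Secret names are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest    # placeholder image
      volumeMounts:
        - name: credentials
          mountPath: /etc/credentials            # one file per Secret key
          readOnly: true
  volumes:
    - name: credentials
      secret:
        secretName: app-credentials              # hypothetical Secret name
```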

Photo by Eugene Chystiakov on Unsplash

The write on etcd

If an attacker has write on etcd (and access to any kubeconfig, even one with a role without permissions), they have you. It does not matter if you are avoiding Kubernetes Native Secrets. It does not matter if you are using complex policies with popular policy engines.

A few ways to get secrets (or almost anything) with write on etcd:

  • Insert a RoleBinding with any privileged role to get access to pods with secret values mounted.
  • Insert a pod or Deployment directly through etcd with inline secret value mounting.
  • If a pod has a service account that lets it get secrets from a remote Vault: insert a pod in etcd, make it use the same service account that can talk with the external Vault, and write simple code to call the TokenRequest API. This also works on Kubernetes >1.24. You can log the token and get all the secrets before it expires (see the sketch after this list).
  • There are a few other, more complicated, ways to get secrets even if the attacker does not have access to the kube api-server: shifting roles and role bindings around, getting privileged pods inside the cluster, acting and getting responses all through etcd. It is harder, but possible.
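To illustrate the third bullet (everything here is hypothetical: the pod name, namespace, service account, and image), the injected pod only needs to reference a service account that is already trusted by the external vault; the kubelet then projects a fresh token for it:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: totally-legit-worker          # attacker-chosen name
  namespace: team-a
spec:
  serviceAccountName: vault-reader    # existing SA trusted by the external vault
  containers:
    - name: main
      image: registry.example.com/busybox:latest   # placeholder image
      # the projected token is automounted at the well-known path below
      command: ["sh", "-c", "cat /var/run/secrets/kubernetes.io/serviceaccount/token && sleep 3600"]
```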

Photo by Jon Tyson on Unsplash

So, nothing is secure?

That is not the point. The point is that you have very similar threat models whether you avoid Kubernetes Native Secrets or not. And with similar threat models, the best option is the simplest one: just use what you are used to, and figure out how to protect it.

Photo by Piret Ilver on Unsplash

So... we don't have any reason to avoid Kubernetes Secrets?

Now, that is the right question! Do you? There are plenty of reasons to avoid Kubernetes Secrets. Maybe you have a company policy to follow. Perhaps you have to adhere to multiple types of compliance. Maybe the complexity of avoiding Kubernetes Native Secrets is worth it in your case, because it will make your life easier during auditing and some other recurring processes.

My point is more that avoiding Kubernetes Secrets is not intrinsically more secure all the time. By contrast, hashing passwords before storing them in a database table is intrinsically more secure, and simple enough to always be worth it; this is clearly not the same kind of decision. You HAVE to check your requirements and see what works for you in this case.

Photo by Jay Heike on Unsplash

What is the best way to protect your secrets then?

A few of the best practices were listed at the beginning of the article, when we talked about RBAC, policies, and encrypting and protecting etcd. But there are other common security practices that are maybe even more important:

  • Avoid initial attack vectors and vulnerable software/dependencies (supply chain security). If your frontend has a vulnerable lib that lets people inject arbitrary code and somehow grab a shell in that frontend container, and they can then escape the container, escalate privileges to root, and do a memory scan... that person has all your secrets on that node. It does not matter whether you are avoiding Kubernetes Native Secrets or which solution you are using to synchronize your secrets. Besides avoiding vulnerable software/dependencies, there are plenty of other ways to protect yourself, like phishing training for non-technical employees, a WAF, and many others.

  • Be mindful about access management all around: not only RBAC on Kubernetes, but also access to your cloud accounts, sub-accounts, and your whole infrastructure.

  • Design good identity and security boundaries for your system. Isolate tenants or teams of your company in a way that makes sense for their daily work, your software architecture, and your team structure, giving permissions and access only to the people who need them.

I know, these sound generic, basically best practices for protecting your system in general. That is exactly my point: you have to do that first.


Additionally:

  • Make sure your approach takes security response into consideration, and not only prevention. It is not only about protecting secrets; it is also about what to do when (not if) they leak.

  • As one of the maintainers of External Secrets Operator (ESO), and with most of the general advice out of the way, I also want to give you some reasons why ESO helps with security and facilitates your processes, and why I think you should consider it (or other tools that deliver on some of these points):

    • you can give your developers only references to secrets, and not secret values directly (see the sketch after this list).
    • ESO integrates into developers' GitOps processes, facilitating adoption and debuggability.
    • it eases the toil of auto-rotation and secret generation, helping with both prevention and response.
    • ESO already considers multi-tenant and multi-cluster setups, so if that's your case you don't have to improvise.
    • ESO can use a different service account identity per provider configuration, so you don't have to give the operator pod permissions on all your providers. This also allows the pod itself to have no permissions on the provider, just a detached service account.
    • accepting Kubernetes Native Secrets as the final destination for secret values simplifies your solution (most of the time); complexity mostly hurts the security of your system.
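As a minimal sketch of what those references look like (the store name, secret names, and remote key path are placeholders for whatever your provider uses), a developer commits an ExternalSecret, and ESO materializes the actual value into a Kubernetes Native Secret:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app-credentials
  namespace: team-a
spec:
  refreshInterval: 1h               # re-sync (and pick up rotations) every hour
  secretStoreRef:
    name: team-a-store              # placeholder SecretStore configured for your provider
    kind: SecretStore
  target:
    name: app-credentials           # the resulting Kubernetes Native Secret
  data:
    - secretKey: password           # key inside the resulting Secret
      remoteRef:
        key: prod/team-a/app        # placeholder path in the external provider
        property: password
```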

Summarizing everything

Understanding and properly managing Kubernetes Secrets is critical, and base64 encoding, often misunderstood as a weak security measure, exists to handle binary data. Using RBAC and policies and encrypting etcd at rest are ways to make Secrets secure. However, totally avoiding Kubernetes Secrets is not necessarily safer and can add complexity. Always evaluate your specific needs to choose the right approach. Focus not only on prevention but also on effective response strategies for potential leaks. Tools like the External Secrets Operator (ESO) can assist in securing Kubernetes Secrets, especially by making rotations easy. Essentially, system security requires a comprehensive view rather than a focus on individual elements.

References:

Unfortunately, the reasoning behind the base64 encoding of Secrets is not shared in the documentation or in PRs/issues, as it of course feels obvious to core Kubernetes developers. However, Jordan Liggitt, one of Kubernetes' top contributors, shares an answer here:

With regard to encryption at rest and KMS:

Considering my statement about any Kubernetes control plane using Secrets under the hood: if you are an admin, you can just run kubectl get secrets -A on a freshly provisioned cluster to see all the secrets being used. A few managed solutions won't even hide some of them from you. If it's self-managed, you are going to see hundreds of them.

My take on the read/write aspect of etcd during incidents is based on my own opinion and experience, so I would be curious to hear about other people's experiences. Maybe I am completely mistaken here. Feel free to reach out!

It is also worth having a look at the official documentation for Secrets and RBAC:

Looking back at the part where we talked about injecting pods or other resources directly into etcd, here you can see a simple step-by-step example of injecting a pod:

Even though I don't necessarily agree with all the statements in the following article (I still think it is worth using external providers), it was one of the blogs that resonated with me some time ago when we were having these kinds of discussions within the community: https://www.macchaffee.com/blog/2022/k8s-secrets/, written by Mac Chaffee. I took some of the analogies and the threat-modeling way of thinking from it.

Finally, since we mentioned ESO, I would recommend looking at ESO's docs and the talk given at KubeCon:

I would also like to thank the whole external-secrets community for trusting ESO when handling their secrets! :D

As a follow-up blog post, I'm thinking about sharing some lab examples of how to get secrets in multiple scenarios, and how to protect against those attacks. Please let me know if that would be interesting to you, as it would be a lot of work, and I can do it on demand!
