Kubernetes Secrets are objects for storing sensitive data such as passwords, OAuth tokens, and SSH keys. They are stored in the cluster's etcd datastore via the API server, kept separate from the pods where your application runs.
The name Secret is a bit misleading, because a Secret is not actually secret: its data is just base64-encoded and stored in etcd.
Following GitOps processes works great for most Kubernetes resources, but it has limitations when it comes to managing and storing secrets. Committing Kubernetes Secret manifests to Git leaves their contents readable by anyone with repository access, which is a security risk.
Let's look at the limitations of Kubernetes Secrets and how to overcome them.
Limitations of Kubernetes Secrets
etcd is not secure - etcd is where Kubernetes Secrets are stored. Although etcd is a distributed key/value store with great performance, by default it lacks key features for handling sensitive data: audit logging, key rotation, and encryption at rest.
Secrets as plain text - When a pod needs a secret, Kubernetes provides it as an environment variable or mounts it as a file containing the plain-text value. The secret then becomes accessible to everyone with access to the pod, and there is an increased risk of the value being exposed in places like debug logs. It is also difficult to ensure that only specific nodes or containers get access to specific secrets.
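As a minimal sketch of both delivery mechanisms (the Secret name `my-secret` and key `password` are assumptions), a pod can receive the same secret as an environment variable and as a mounted file:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
    - name: app
      image: nginx
      env:
        - name: DB_PASSWORD          # exposed as a plain-text env var
          valueFrom:
            secretKeyRef:
              name: my-secret        # assumed Secret name
              key: password
      volumeMounts:
        - name: secret-vol
          mountPath: /etc/secrets    # files under this path hold plain-text values
          readOnly: true
  volumes:
    - name: secret-vol
      secret:
        secretName: my-secret
```

Anyone who can exec into this container can read both the variable and the file.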
Limited RBAC functionality - Kubernetes RBAC provides only get- and set-style permissions for Secrets, and whoever can get a Secret receives its decoded value. A more secure zero-trust setup would allow a developer to set a secret and then only retrieve the encrypted value for consumption. This is not available natively in Kubernetes.
The lack of secure encryption - Since Kubernetes Secrets are just base64-encoded, anyone who gets hold of one can decode it. A simple kubectl command, as shown below, creates a Secret from an existing text file. Keeping the resulting Secret manifest next to your other deployment and service manifests in Git isn't secure.
```shell
kubectl create secret generic my-secret --from-file=./argo-secret.txt
```
The created secret manifest would look something like this:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
data:
  username: TmV3QXJnb25hdXQ=
  password: QXJnb1B3ZA==
type: Opaque
```
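To see how weak this protection is, anyone with the manifest can recover the values with nothing more than the `base64` utility:

```shell
# The values in a Secret manifest are only base64-encoded, not encrypted.
echo 'TmV3QXJnb25hdXQ=' | base64 -d   # prints: NewArgonaut
echo
echo 'QXJnb1B3ZA==' | base64 -d       # prints: ArgoPwd
echo
```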
Kubernetes secret management solutions
Considering these major limitations of Kubernetes Secrets, the Kubernetes community has developed several new approaches: SIG projects like the Secrets Store CSI Driver, solutions like the External Secrets Operator that work with third-party secret managers, and options to seal secrets with tools like Bitnami's sealed-secrets.
All these take different approaches; let’s learn more about them.
Sealed Secrets
Sealing secrets with Bitnami's sealed-secrets tool is very GitOps-friendly: it lets users check encrypted secrets into Git. The tool ties each sealed secret to a cluster namespace/name so that it can only be decrypted in the right location, and it is built for automation and GitOps workflows.
Once installed via Helm chart, Homebrew, or one of the other available methods, the tool is accessed from your CLI using the kubeseal command. You can create a new sealed secret by feeding in raw text, a JSON/YAML file containing the secret, or the output of kubectl create secret --dry-run=client. The resulting encrypted file is safe to share publicly, which makes it easy to store in Git with the rest of your configuration.
Within your cluster, the Sealed Secret controller decrypts the secrets and makes them easily accessible, just like a Kubernetes Secret.
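As a sketch of what ends up in Git (the name, namespace, and encrypted blob below are placeholders, not real output), a SealedSecret manifest looks roughly like this:

```yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: my-secret
  namespace: default    # decryption only works for this namespace/name pair
spec:
  encryptedData:
    # opaque, asymmetrically encrypted blob produced by kubeseal (placeholder)
    password: AgBy3i4OJSWK...
```

Only the controller in the target cluster holds the private key, so committing this file is safe.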
Additional features include Prometheus metrics exposed by the Sealed Secrets controller, covering errors such as a wrong key, corrupted data, and RBAC permission failures. You can also bring your own certificates and use one controller for multiple namespaces.
The default encryption key renewal period is 30 days, which can be lengthened or shortened. The keys can also be stored in a backup location.
It is important to note that anyone with access to your cluster will be able to see the decrypted secrets, so make sure cluster access itself is tightly controlled.
External Secrets
This approach takes secrets from external secret stores, such as AWS Secrets Manager, Google Cloud Secret Manager, Azure Key Vault, and HashiCorp Vault, and makes them available to your Kubernetes application. This is made possible by the External Secrets Operator, an open-source Kubernetes operator that reads information from external APIs and automatically injects the values into a Kubernetes Secret.
External Secrets Operator (ESO) is a collection of custom API resources - ExternalSecret, SecretStore, and ClusterSecretStore - that provide a user-friendly abstraction over the external API that stores your secrets and manages their lifecycle for you.
ExternalSecret - A declaration of what data has to be fetched from your external secret manager. It references a SecretStore, which knows how to access the data. You can also set a refresh interval, specify a blueprint for the resulting Kind=Secret, use inline templates to construct the desired config file containing your secret, define the target Secret to be created, and set creation and deletion policies.
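As an illustrative sketch (the store name, target Secret name, and remote key path are all assumptions), an ExternalSecret that pulls one value from a SecretStore might look like:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
spec:
  refreshInterval: 1h          # re-sync from the external store every hour
  secretStoreRef:
    name: my-secret-store      # assumed SecretStore name
    kind: SecretStore
  target:
    name: db-credentials       # the Kubernetes Secret ESO will create
    creationPolicy: Owner
  data:
    - secretKey: password      # key in the resulting Secret
      remoteRef:
        key: prod/db           # assumed path in the external secret manager
        property: password
```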
SecretStore - Namespaced by design, and cannot be referenced across namespaces. This is where you configure your ESO controller and cloud provider, along with role and access IDs and retry settings in case of connection failure.
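A minimal sketch of a SecretStore backed by AWS Secrets Manager (the region and credential Secret names are assumptions):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: my-secret-store
  namespace: default
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-east-1                  # assumed region
      auth:
        secretRef:
          accessKeyIDSecretRef:
            name: aws-credentials        # assumed Secret holding AWS keys
            key: access-key-id
          secretAccessKeySecretRef:
            name: aws-credentials
            key: secret-access-key
```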
Secrets fetched from the external vault are made available through SecretStores, which are limited to their particular namespace. For use across multiple namespaces, use a ClusterSecretStore.
ClusterSecretStore - A cluster-scoped secret store that can be used as a central gateway to your external secret store. ExternalSecrets in any namespace can reference the secrets it exposes.
Mounted Secrets (CSI volume)
There's a Secrets Store Container Storage Interface (CSI) driver developed by one of the Kubernetes SIGs. This approach uses ephemeral volumes to store secrets, which are mounted onto the application pod. The Secrets Store CSI driver enables extension through providers. Its two main components are the Secrets Store CSI Driver and the SecretProviderClass.
The Secrets Store CSI driver runs as a DaemonSet, which lets it communicate with the kubelet on every node. A provider is launched as a Kubernetes DaemonSet alongside the driver's DaemonSet; the driver talks to the provider over gRPC to fetch and mount contents from the external secret store. The currently supported providers are AWS, GCP, Azure, and Vault.
The SecretProviderClass is a namespaced custom resource in the Secrets Store CSI driver used to provide driver configurations and provider-specific parameters to the CSI driver. Remember that it needs to be created in the same namespace as the pod.
```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: my-provider
  namespace: default
spec:
  provider: azure
  parameters:
    usePodIdentity: "false"
    useManagedIdentity: "false"
    keyvaultName: "$KEYVAULT_NAME"
    objects: |
      array:
        - |
          objectName: $SECRET_NAME
          objectType: secret
          objectVersion: $SECRET_VERSION
        - |
          objectName: $KEY_NAME
          objectType: key
          objectVersion: $KEY_VERSION
    tenantId: "$TENANT_ID"
```
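The pod then references the SecretProviderClass through a CSI volume. A sketch, assuming a SecretProviderClass named `my-provider` in the same namespace:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  namespace: default
spec:
  containers:
    - name: my-app
      image: my-app:latest          # assumed image
      volumeMounts:
        - name: secrets-store
          mountPath: /mnt/secrets-store   # secrets appear here as files
          readOnly: true
  volumes:
    - name: secrets-store
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: "my-provider"
```

The volume is ephemeral: contents are fetched at pod start and vanish when the pod is deleted.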
Injecting secrets at runtime
If you're using secret management tools like Doppler or Vault, you can inject secrets directly at runtime.
Doppler does this by embedding the Doppler CLI in your Dockerfile. You then have access to Doppler secrets from your production or CI/CD environments through a Service Token, which provides read-only access to a specific config via the DOPPLER_TOKEN environment variable.
```yaml
apiVersion: apps/v1
kind: Deployment
# ...
spec:
  containers:
    - name: your-app
      envFrom:
        # envFrom exposes the `DOPPLER_TOKEN` value as an environment variable
        - secretRef:
            name: doppler-token
```
Vault users can install and configure the Vault Agent as a sidecar alongside their applications. Vault Agent is a client daemon with features such as auto-auth, token lifecycle management, templating, caching, and Windows service support. Once trust is established, the subsequent transfer of secrets becomes easy.
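When the sidecar is added by the Vault Agent Injector webhook, it is configured through pod annotations. A sketch, where the Vault role and secret path are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  annotations:
    vault.hashicorp.com/agent-inject: "true"
    vault.hashicorp.com/role: "my-app-role"                                  # assumed Vault role
    vault.hashicorp.com/agent-inject-secret-db-creds: "secret/data/db-creds" # assumed KV path
spec:
  containers:
    - name: my-app
      image: my-app:latest   # assumed image
```

The injected agent authenticates to Vault and renders the secret to a shared in-memory volume (`/vault/secrets/db-creds` by default).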
Secretless
The Secretless Broker is an open-source, independent, and extensible project maintained by CyberArk. It is designed to work seamlessly with Kubernetes, providing a secure and convenient way for applications to authenticate without the developer needing to fetch, handle, or embed secrets in the application's code. When an application needs to access a protected service, it sends its request through the Secretless Broker, which fetches the required secret from a vault such as Conjur and establishes the connection on the application's behalf.
Secretless is typically deployed as a sidecar container that shares a trusted network stack with your application container. It is this trusted sharing that enables your application to make connections through Secretless. It is publicly available as a container image on DockerHub.
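The sidecar is driven by a secretless.yml configuration. A rough sketch is below; the connector, listener address, and Conjur paths are all assumptions, so check the Secretless Broker documentation for the exact schema:

```yaml
version: "2"
services:
  my-db:
    connector: pg                      # assumed Postgres connector
    listenOn: tcp://0.0.0.0:5432      # the app connects here instead of the real DB
    credentials:
      host: postgres.internal          # assumed DB host
      username:
        from: conjur                   # fetched from Conjur at connect time
        get: prod/db/username          # assumed Conjur secret path
      password:
        from: conjur
        get: prod/db/password
```

The application simply dials localhost:5432 and never sees the real credentials.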
There is only a limited number of secret providers and service connectors currently supported.
Secrets using SOPS
Mozilla's Secrets OPerationS (SOPS) is an editor of encrypted files. It supports YAML, JSON, binary, ENV, and INI formats and encrypts with a key from AWS KMS, GCP KMS, Azure Key Vault, age, or PGP.
Argo CD users have to build container images with SOPS baked in, using Helm chart extensions or Kustomize extensions. Flux supports SOPS decryption directly in its manifests.
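For Flux, decryption is enabled on the Kustomization resource. A sketch, assuming an age key stored in a Secret named `sops-age` and repository paths that are placeholders:

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: my-app
  namespace: flux-system
spec:
  interval: 10m
  path: ./apps/my-app        # assumed path in the Git repository
  prune: true
  sourceRef:
    kind: GitRepository
    name: my-repo            # assumed GitRepository name
  decryption:
    provider: sops
    secretRef:
      name: sops-age         # assumed Secret holding the age private key
```

Flux decrypts the SOPS-encrypted manifests in-cluster before applying them, so only ciphertext ever lives in Git.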
List of tools and the approach they take
- external-secrets.io (external)
- AWS/GCP Secret Manager & Azure Key Vault (external)
- sealed-secrets (sealed)
- Vault (mounted, external, & inject)
- Conjur (secretless)
- ksops (sops)
- sops-secrets-operator (sops)
Best practices for Kubernetes secret management
- Ensure encryption at rest. Storing unencrypted secrets in etcd could lead to compromise and allow access to your systems. Encrypting data at rest, or storing the secrets in an external store, makes your system more secure.
- Limit access to Kubernetes clusters. Control access using Kubernetes RBAC and your cloud provider's access controls. This matters especially because, with methods like sealed secrets, anyone with access to the cluster can see the decrypted secrets.
- Restrict Secret access to specific containers. While running multiple containers in a pod, define your volume mount or environment variable configuration in such a way that only the container that needs the secret has access to it.
- Manage how your applications handle secrets. Once your applications read a secret, they have access to its confidential value. Ensure they do not share it with untrusted parties or write it to logs.
- Prefer a central secret store. Having your Kubernetes secrets stored in a centralized place along with your other tool and database secrets would allow for easier management. This helps reduce secret sprawl and allows for better access control and audit trails.
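The per-container restriction above can be sketched in a pod spec: only the container that declares the volumeMount can read the secret files (names below are assumptions).

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  containers:
    - name: needs-secret
      image: app:latest        # assumed image
      volumeMounts:
        - name: db-creds
          mountPath: /etc/db-creds
          readOnly: true
    - name: sidecar            # no volumeMount, so it cannot read the secret files
      image: logger:latest     # assumed image
  volumes:
    - name: db-creds
      secret:
        secretName: db-creds   # assumed Secret name
```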
Argonaut has a native secret management solution for scaling teams. Argonaut integration with third-party secret providers is coming soon (Q1CY23).
With Argonaut’s modern deployment platform, you can get up and running on AWS or GCP in minutes, not months. Our intuitive UI and quick integrations with GitHub, GitLab, AWS, and GCP make managing your infra and applications easy - all in one place.