Originally published at https://developer-friendly.blog on April 29, 2024.
External Secrets Operator: Fetching AWS SSM Parameters into Azure AKS
How to pass your secrets to the Kubernetes cluster without hard-coding them into your source code or manually creating the Kubernetes Secret resource.
Introduction
Deploying an application is rarely just the application itself. It usually takes all the tooling and infrastructure around it to make it work and produce the value it was meant to.
One of the most common things applications need is secrets. Secrets are sensitive information that the application needs to function properly. These can be database passwords, API keys, or any other sensitive values the app uses to function and communicate with the relevant external services.
In Kubernetes, the most common way to pass secrets to the application is by creating a Kubernetes Secret resource1. This resource is a Kubernetes object that stores sensitive information in the cluster. The application can then access this information by mounting the secret as a volume or by passing it as environment variables.
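As a quick refresher, here is a minimal, hypothetical Pod manifest showing both consumption styles; the Secret name app-secrets and the mount path are made up for illustration:

```yaml
# Hypothetical Pod consuming a Secret named "app-secrets" both ways.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: busybox
      # 1) Every key of the Secret becomes an environment variable:
      envFrom:
        - secretRef:
            name: app-secrets
      # 2) Every key becomes a file under /etc/secrets:
      volumeMounts:
        - name: secrets
          mountPath: /etc/secrets
          readOnly: true
  volumes:
    - name: secrets
      secret:
        secretName: app-secrets
```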
However, creating a Kubernetes Secret resource manually is a tedious task, especially when working at scale. Not to mention the maintenance required to rotate the secrets periodically and keep them in sync with the upstream.
On the other hand, passing secrets around as hard-coded, plaintext values is a no-no when it comes to security. As obvious as that sounds, it pains my soul to go around the industry and see how often people hard-code their secrets into the source code.
In this article, we will explore how to use the External Secrets Operator2 to pass secrets to the Kubernetes cluster without hard-coding them into the source code or manually creating the Kubernetes Secret resource.
Roadmap
Before we start, let's set a clear objective of what we want to achieve in this article. The links will take you to the respective sections of this article.
First off, we'll create an Azure AKS Kubernetes cluster using the official OpenTofu module. The AKS cluster will have its OpenID Connect endpoint exposed to the internet.
We will use that OpenID Connect endpoint to establish a trust relationship between the Kubernetes cluster and the AWS IAM, leveraging OpenID Connect. This trust relationship will allow the Kubernetes cluster's Service Accounts to assume an IAM Role with web identity to access AWS resources.
Afterwards, we will deploy the External Secrets operator to the Kubernetes cluster, passing the right Service Account to its running pod so that it can assume the proper AWS IAM Role3.
With that set up, the External Secrets operator will be able to read the secrets from the AWS SSM Parameter Store and create Kubernetes Secrets from them.
At this point, any pod in the same namespace as the target Secret will be able to mount and read its values as usual.
Ultimately, we'll also cover how to let the External Secrets operator write the values of selected Kubernetes Secrets back to the AWS SSM Parameter Store. One example is deploying a database with a generated password and storing that password back in the AWS SSM Parameter Store for reference by other services or applications.
With that said, let's get started!
OpenID Connect, in simple terms, is a protocol that allows one service to authenticate and authorize another service, optionally on behalf of a user. It is an authentication layer on top of OAuth2.0 protocol.
If you're new to the topic, we have a practical example to solidify your understanding in our guide on OIDC Authentication.
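To make the upcoming trust relationship concrete, here is roughly what the payload of a projected Kubernetes Service Account token looks like; all values below are illustrative, though the sub format follows the standard system:serviceaccount:&lt;namespace&gt;:&lt;name&gt; convention:

```json
{
  "iss": "https://<your-aks-oidc-issuer>",
  "sub": "system:serviceaccount:external-secrets:external-secrets",
  "aud": ["sts.amazonaws.com"],
  "exp": 1714000000
}
```

AWS IAM will later match the iss, sub, and aud claims of this token against the trust policy we define.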
"Why AWS SSM instead of Azure Key Vault?"
The External Secrets operator supports multiple backends for storing secrets. One of the most common backends is the AWS SSM Parameter Store. It is easy to set up, free to use, and has the least amount of drama around it.
However, I'm not here to dictate what you should and shouldn't use in your stack. If Azure Key Vault works for you, by all means, go for it. Choosing a tech stack goes beyond just how sexy it looks on the resume, or how fun of a Developer Experience it provides!
Prerequisites
Before we start, you need to have the following prerequisites:
- A Kubernetes cluster, v1.29-ish. Feel free to follow our earlier guides to set up a Kubernetes cluster the Hard Way or using k3s, although in this guide we'll spin up a new Kubernetes cluster using the Azure AKS TF module4.
- An internet-accessible endpoint for your Kubernetes API server (you're free to run a local Kubernetes cluster and expose it to the internet using tools like ngrok5 or telepresence6). We covered how to expose your Kubernetes API server in last week's guide. Azure AKS, however, comes with a public OpenID Connect endpoint by default7.
- An AWS account with the permissions to read and write SSM parameters and to create an OIDC provider and IAM roles.
- OpenTofu v1.6 8
- AZ CLI v2 Installed9. Only required if you're operating within Azure.
- Optionally, FluxCD v2.210 installed in your cluster. Not required if you aim to use bare Helm commands for installations. There is a beginner friendly guide to FluxCD in our archive if you're new to the topic.
Step 0: Setting up Azure Managed Kubernetes Cluster
First things first, let's set up the Azure AKS Kubernetes cluster using the official TF module4.
# aks/variables.tf
variable "resource_group_name" {
  type    = string
  default = "developer-friendly-aks"
}

variable "location" {
  type    = string
  default = "Germany West Central"
}

variable "kubernetes_version" {
  type    = string
  default = "1.29"
}

variable "admin_username" {
  type    = string
  default = "admin"
}

variable "agents_count" {
  type        = number
  default     = 1
  description = "Number of worker nodes as Azure calls it."
}

variable "agents_size" {
  type    = string
  default = "Standard_B2ms" # 2 vCPUs, 8 GiB memory
}

variable "prefix" {
  type    = string
  default = "developer-friendly"
}
# aks/versions.tf
terraform {
  required_providers {
    http = {
      source  = "hashicorp/http"
      version = "~> 3.4"
    }
    tls = {
      source  = "hashicorp/tls"
      version = "~> 4.0"
    }
    null = {
      source  = "hashicorp/null"
      version = "~> 3.2"
    }
  }
}

provider "azurerm" {
  features {}
}

provider "azuread" {
}
# aks/main.tf
data "http" "this" {
  url = "https://checkip.amazonaws.com"
}

resource "tls_private_key" "this" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "azurerm_resource_group" "this" {
  name     = var.resource_group_name
  location = var.location
}

module "aks" {
  source  = "Azure/aks/azurerm"
  version = "8.0.0"

  prefix              = var.prefix
  resource_group_name = azurerm_resource_group.this.name
  location            = azurerm_resource_group.this.location

  kubernetes_version = var.kubernetes_version
  admin_username     = var.admin_username
  agents_count       = var.agents_count
  agents_size        = var.agents_size

  network_plugin      = "azure"
  network_plugin_mode = "overlay"
  ebpf_data_plane     = "cilium"

  oidc_issuer_enabled          = true
  only_critical_addons_enabled = true

  public_ssh_key = tls_private_key.this.public_key_openssh

  rbac_aad                          = true
  rbac_aad_managed                  = true
  rbac_aad_azure_rbac_enabled       = false
  role_based_access_control_enabled = true

  log_analytics_workspace_enabled = false

  identity_type = "SystemAssigned"

  api_server_authorized_ip_ranges = [
    "${trimspace(data.http.this.response_body)}/32",
  ]

  depends_on = [
    azurerm_resource_group.this
  ]
}

resource "null_resource" "this" {
  triggers = {
    aks_id = module.aks.aks_id
  }

  provisioner "local-exec" {
    command = <<-EOT
      az aks get-credentials \
        --resource-group ${var.resource_group_name} \
        --name ${module.aks.aks_name} \
        --admin
    EOT
  }

  depends_on = [
    module.aks
  ]
}

# aks/outputs.tf
output "oidc_issuer_url" {
  value = module.aks.oidc_issuer_url
}
With this TF code in place, we now need to apply it to our Azure account.
Authenticating to Azure
The first requirement is to be authenticated to the Azure API. There is more than one way to authenticate to Azure11. The most common, and the one we'll use today, is authenticating via the Azure CLI12.
Authenticate to Azure Using AZ CLI
For your reference, here's a quick way to authenticate to Azure:
az login --use-device-code
This command will print out a URL and a code. Open the URL in your browser and enter the code when prompted. This will authenticate you to the Azure CLI.
Once authenticated, and if the default subscription and tenant ID are not set, you can set them as environment variables like so:
export ARM_SUBSCRIPTION_ID=159f2485-xxxx-xxxx-xxxx-xxxxxxxxxxxx
export ARM_TENANT_ID=72f988bf-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Applying the TF Code
Lastly, once all is set, you can apply the TF code to create the AKS cluster.
tofu init
tofu plan -out tfplan
tofu apply tfplan
Creating the resources in this stack will take about 20 minutes to complete. Once done, you should have a fully functional Azure AKS cluster with its OpenID Connect endpoint exposed to the internet.
The output of this TF code, as specified in our outputs.tf, is the OIDC issuer URL. We are going to use this URL to establish a trust relationship between the Kubernetes cluster and AWS IAM in the next step.
The null resource in our TF code will add or update your current kubeconfig file with the new AKS cluster credentials13. We will use this in a later step.
Step 1: Establishing Azure AKS Trust Relationship with AWS IAM
This step aims to facilitate and enable the API calls from the pods inside the Kubernetes cluster to the AWS services. As we have seen earlier, this is what OpenID Connect is all about.
Let's write the TF code that creates the OIDC provider in AWS.
# aws-oidc/variables.tf
variable "oidc_issuer_url" {
  type        = string
  default     = null
  description = "The OIDC issuer URL. Pass this value to override the one received from the aks/terraform.tfstate file."
}

variable "access_token_audience" {
  type        = string
  default     = "sts.amazonaws.com"
  description = "The audience for the tokens issued by the identity provider in the AKS cluster."
}

variable "iam_role_name" {
  type        = string
  default     = "external-secrets"
  description = "The name of the IAM role."
}

variable "service_account_namespace" {
  type        = string
  default     = "external-secrets"
  description = "The namespace of the service account."
}

variable "service_account_name" {
  type        = string
  default     = "external-secrets"
  description = "The name of the service account."
}
# aws-oidc/versions.tf
terraform {
  required_providers {
    tls = {
      source  = "hashicorp/tls"
      version = "~> 4.0"
    }
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.46"
    }
  }
}
# aws-oidc/main.tf
data "terraform_remote_state" "k8s" {
  count = var.oidc_issuer_url != null ? 0 : 1

  backend = "local"

  config = {
    path = "../aks/terraform.tfstate"
  }
}

locals {
  # When var.oidc_issuer_url is set, the remote state is not read (count = 0)
  # and try() falls back to the variable.
  oidc_issuer_url = try(data.terraform_remote_state.k8s[0].outputs.oidc_issuer_url, var.oidc_issuer_url)
}

data "tls_certificate" "this" {
  url = local.oidc_issuer_url
}

resource "aws_iam_openid_connect_provider" "this" {
  url = local.oidc_issuer_url

  client_id_list = [
    var.access_token_audience
  ]

  thumbprint_list = [
    data.tls_certificate.this.certificates[0].sha1_fingerprint
  ]
}

data "aws_iam_policy_document" "this" {
  statement {
    actions = [
      "sts:AssumeRoleWithWebIdentity"
    ]
    effect = "Allow"

    principals {
      type = "Federated"
      identifiers = [
        aws_iam_openid_connect_provider.this.arn
      ]
    }

    condition {
      test     = "StringEquals"
      variable = "${aws_iam_openid_connect_provider.this.url}:aud"
      values = [
        var.access_token_audience
      ]
    }

    condition {
      test     = "StringEquals"
      variable = "${aws_iam_openid_connect_provider.this.url}:sub"
      values = [
        "system:serviceaccount:${var.service_account_namespace}:${var.service_account_name}",
      ]
    }
  }
}

resource "aws_iam_role" "this" {
  name               = var.iam_role_name
  assume_role_policy = data.aws_iam_policy_document.this.json

  managed_policy_arns = [
    "arn:aws:iam::aws:policy/AmazonSSMFullAccess",
  ]
}

# aws-oidc/outputs.tf
output "iam_role_arn" {
  value = aws_iam_role.this.arn
}

output "access_token_audience" {
  value = var.access_token_audience
}

output "service_account_name" {
  value = var.service_account_name
}

output "service_account_namespace" {
  value = var.service_account_namespace
}
The code should be self-explanatory by this point, especially after three blog posts on the topic of OpenID Connect. Still, let's highlight the key points:
- When it comes to AWS IAM role assumption, there are five types of trust relationships14. In this scenario, we are using the Web Identity trust relationship type15.
- Having the principal type as Federated is just as the name suggests: a federated identity provider, in this case the Azure AKS OIDC issuer URL. In plain English, it allows the Kubernetes cluster to sign access tokens, and AWS IAM will trust those tokens if their iss claim matches the trusted URL.
- Having two conditions, on the audience (aud) and the subject (sub), allows for tighter security control and enforces the principle of least privilege16. The target Kubernetes Service Account is the only identity able to assume this IAM Role, and it can do no more than the permissions assigned. This enhances the overall security posture of the system.
With the code ready, apply the stack using your AWS credentials:

export AWS_PROFILE="PLACEHOLDER"
tofu init
tofu plan -out tfplan
tofu apply tfplan
IAM Policy Document
You may have seen the IAM policy document written as a JSON string in other TF codebases. Truth be told, there is no one-size-fits-all; do whatever works best for you.
I prefer writing my IAM policy documents as TF code because every other code in this module is written in HCL format. It is easier to maintain and read when everything is in the same format and there will be less mental gymnastics when a future engineer, or even myself, comes back to this code.
But, of course, I understand that when the IAM policy gets bigger, there is a very good reason to write it in the JSON format.
For your reference, here is the equivalent TF code with the policy written in JSON format:
resource "aws_iam_role" "this" {
  name = var.iam_role_name

  assume_role_policy = jsonencode({
    "Statement" : [
      {
        "Action" : "sts:AssumeRoleWithWebIdentity",
        "Effect" : "Allow",
        "Principal" : {
          "Federated" : "${aws_iam_openid_connect_provider.this.arn}"
        },
        "Condition" : {
          "StringEquals" : {
            "${aws_iam_openid_connect_provider.this.url}:aud" : "${var.access_token_audience}",
            "${aws_iam_openid_connect_provider.this.url}:sub" : "system:serviceaccount:${var.service_account_namespace}:${var.service_account_name}"
          }
        }
      }
    ]
  })

  managed_policy_arns = [
    "arn:aws:iam::aws:policy/AmazonSSMFullAccess",
  ]
}
Pick what's best and more appealing to you and your team and stick with it. Don't let any clown 🤡 tell you otherwise, including myself! 😎
Step 2: Deploying External Secrets Operator
At its simplest, you can install the operator with a plain helm install. However, my preferred way of doing Kubernetes deployments is GitOps, and FluxCD is my go-to tool for that.
# external-secrets/namespace.yml
apiVersion: v1
kind: Namespace
metadata:
  name: external-secrets

# external-secrets/repository.yml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: external-secrets
spec:
  interval: 60m
  url: https://charts.external-secrets.io

# external-secrets/release.yml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: external-secrets
spec:
  chart:
    spec:
      chart: external-secrets
      sourceRef:
        kind: HelmRepository
        name: external-secrets
      version: 0.9.x
  install:
    crds: Create
  interval: 30m
  maxHistory: 10
  test:
    enable: true
    ignoreFailures: false
    timeout: 5m
  timeout: 10m
  upgrade:
    cleanupOnFail: true
    crds: CreateReplace
    force: true
    preserveValues: true
    remediation:
      remediateLastFailure: true
  valuesFrom:
    - kind: ConfigMap
      name: external-secrets-values

# external-secrets/kustomizeconfig.yml
nameReference:
  - kind: ConfigMap
    version: v1
    fieldSpecs:
      - path: spec/valuesFrom/name
        kind: HelmRelease

# external-secrets/kustomization.yml
configMapGenerator:
  - name: external-secrets-values
    files:
      - values.yaml=./values.yml
resources:
  - namespace.yml
  - repository.yml
  - release.yml
configurations:
  - kustomizeconfig.yml
namespace: external-secrets
Helm Values File
You don't necessarily have to commit the Helm values file to your source code, but doing so comes with a huge benefit: when upgrading to a newer version, a code review shows exactly which changes to expect.
helm show values \
  external-secrets/external-secrets \
  --version 0.9.x \
  > external-secrets/values.yml
And the content:
# external-secrets/values.yml
# ... truncated for brevity ...
# NOTE: In a production scenario, this would contain the full contents of
# the source helm values file.
serviceAccount:
  # We will patch the following later in our TF code
  annotations: {}
If you have set up your directory structure to be traversed recursively by FluxCD, you only need to push this to the upstream, and the live state will reconcile as specified.
Otherwise, apply the following manifests to create the FluxCD Kustomization:
# gitops/gitrepo.yml
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: flux-system
  namespace: flux-system
spec:
  interval: 1m
  ref:
    branch: main
  timeout: 60s
  url: https://github.com/developer-friendly/external-secrets-guide.git

# gitops/external-secrets.yml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: external-secrets
  namespace: external-secrets
spec:
  force: false
  healthChecks:
    - apiVersion: apps/v1
      kind: Deployment
      name: external-secrets
      namespace: external-secrets
  interval: 5m
  path: ./external-secrets
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
    namespace: flux-system
  suspend: false
  targetNamespace: external-secrets
  timeout: 600s
  wait: true
Step 3: Create the Secret Store
At this point, we should have a Kubernetes cluster with the External Secrets operator running on it. It should also be able to assume the AWS IAM Role we created earlier by leveraging the OIDC trust relationship.
In the External Secrets operator, the SecretStore and ClusterSecretStore are the proxies to external secrets management systems. They are responsible for fetching or creating secrets in the external systems and creating Kubernetes Secrets from them17.
Let us create a ClusterSecretStore that will be responsible for fetching or creating AWS SSM Parameters.
# cluster-secret-store/variables.tf
variable "aws_region" {
  type    = string
  default = "eu-central-1"
}

variable "cluster_secret_store_name" {
  type    = string
  default = "aws-parameter-store"
}

variable "kubeconfig_path" {
  type    = string
  default = "~/.kube/config"
}

variable "kubeconfig_context" {
  type    = string
  default = "developer-friendly-aks-admin"
}

variable "field_manager" {
  type    = string
  default = "flux-client-side-apply"
}

# cluster-secret-store/versions.tf
terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.29"
    }
  }
}

provider "kubernetes" {
  config_path    = var.kubeconfig_path
  config_context = var.kubeconfig_context
}
# cluster-secret-store/main.tf
data "terraform_remote_state" "k8s" {
  backend = "local"

  config = {
    path = "../aws-oidc/terraform.tfstate"
  }
}

resource "kubernetes_annotations" "this" {
  api_version = "v1"
  kind        = "ServiceAccount"

  metadata {
    name      = data.terraform_remote_state.k8s.outputs.service_account_name
    namespace = data.terraform_remote_state.k8s.outputs.service_account_namespace
  }

  annotations = {
    "eks.amazonaws.com/audience" = data.terraform_remote_state.k8s.outputs.access_token_audience
    "eks.amazonaws.com/role-arn" = data.terraform_remote_state.k8s.outputs.iam_role_arn
  }

  field_manager = var.field_manager
}

resource "kubernetes_manifest" "this" {
  manifest = {
    apiVersion = "external-secrets.io/v1beta1"
    kind       = "ClusterSecretStore"
    metadata = {
      name = var.cluster_secret_store_name
    }
    spec = {
      provider = {
        aws = {
          region  = var.aws_region
          service = "ParameterStore"
          auth = {
            jwt = {
              serviceAccountRef = {
                name      = data.terraform_remote_state.k8s.outputs.service_account_name
                namespace = data.terraform_remote_state.k8s.outputs.service_account_namespace
              }
            }
          }
        }
      }
    }
  }

  depends_on = [
    kubernetes_annotations.this
  ]
}
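For reference, here is a sketch of the same ClusterSecretStore expressed as a plain manifest, using the default values from the variables above; you could apply something like this with kubectl instead of TF:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: aws-parameter-store
spec:
  provider:
    aws:
      region: eu-central-1
      service: ParameterStore
      auth:
        jwt:
          serviceAccountRef:
            name: external-secrets
            namespace: external-secrets
```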
Service Account Annotations Hack
You will notice two annotations on the external-secrets Service Account that, suspiciously, sound like an AWS EKS Kubernetes cluster thing. That is, these are annotations that only AWS EKS understands and acts upon.
This is an unfortunate mishap. If you're curious to read the full details, I have provided a very long and detailed explanation in their GitHub repository's issue18.
The gist of that discussion, if you don't feel like reading my whole rambling, is that the External Secrets operator is not able to assume an IAM Role with Web Identity outside an AWS EKS Kubernetes cluster; that is, as far as the External Secrets operator is concerned, you only get the benefit of OpenID Connect if you're within AWS19.
That is something I consider a bug! It shouldn't be the case; the operator should be able to handle Kubernetes clusters where we don't want to manually pass AWS credentials to the pods.
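After the TF above is applied, the patched Service Account ends up looking roughly like this; the account ID in the role ARN is a placeholder:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-secrets
  namespace: external-secrets
  annotations:
    eks.amazonaws.com/audience: sts.amazonaws.com
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/external-secrets
```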
Step 4: Test the Setup: Creating ExternalSecret and PushSecret
That's it, folks! We have done all the hard work, and it's time for the payoff. Let's create an ExternalSecret and a PushSecret to test the setup.
In this step, keeping with the spirit of this post so far, we won't settle for a hello-world example. Instead, we will deploy a MongoDB database and an application that talks to it.
The objective for this section is as follows:
- Deploy a MongoDB application using Helm, pushing the auto-generated password to the AWS SSM Parameter Store using the PushSecret CRD.
- Deploy another application that uses the ExternalSecret CRD to fetch the newly created secret from the AWS SSM Parameter Store and uses it to connect to the MongoDB database.
If that gets you excited, let's get started! Although, I have to warn you, the rest of this tutorial is a piece of cake compared to what we have done so far.
Deploying MongoDB
MongoDB ARM64 Support
As of the writing of this blog post, the Bitnami MongoDB Helm chart does not support the ARM64 architecture. This is a known issue, and there is an open ticket for it20.
If you're running on ARM64 architecture, you may want to either:
- Use a different Helm chart that supports ARM64.
- Deploy MongoDB manually using StatefulSet; the same approach I'll employ in this tutorial.
# mongodb/namespace.yml
apiVersion: v1
kind: Namespace
metadata:
  name: mongodb

# mongodb/configs.env
MONGO_INITDB_DATABASE=app
MONGO_INITDB_ROOT_USERNAME=app

# mongodb/password.yml
apiVersion: generators.external-secrets.io/v1alpha1
kind: Password
metadata:
  name: mongo-password
spec:
  allowRepeat: false
  noUpper: false
  length: 32
  symbols: 0

# mongodb/externalsecret.yml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: mongodb-secrets
spec:
  refreshInterval: 5m
  dataFrom:
    - sourceRef:
        generatorRef:
          apiVersion: generators.external-secrets.io/v1alpha1
          kind: Password
          name: mongo-password
      rewrite:
        - regexp:
            source: password
            target: MONGO_INITDB_ROOT_PASSWORD
  target:
    name: mongodb-secrets

# mongodb/service.yml
apiVersion: v1
kind: Service
metadata:
  name: mongodb-headless
spec:
  clusterIP: None
  ports:
    - name: mongodb
      port: 27017
      protocol: TCP
      targetPort: mongodb

# mongodb/statefulset.yml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb
spec:
  serviceName: mongodb-headless
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongo
          image: mongo
          ports:
            - containerPort: 27017
              name: mongodb
          envFrom:
            - configMapRef:
                name: mongodb-config
            - secretRef:
                name: mongodb-secrets
          volumeMounts:
            - name: mongo-storage
              mountPath: /data/db
  volumeClaimTemplates:
    - metadata:
        name: mongo-storage
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi

# mongodb/kustomization.yml
configMapGenerator:
  - name: mongodb-config
    envs:
      - configs.env
resources:
  - namespace.yml
  - password.yml
  - externalsecret.yml
  - service.yml
  - statefulset.yml
replacements:
  - source:
      kind: StatefulSet
      name: mongodb
      fieldPath: spec.template.metadata.labels
    targets:
      - select:
          kind: Service
          name: mongodb-headless
        fieldPaths:
          - spec.selector
        options:
          create: true
images:
  - name: mongo
    newTag: "7"
namespace: mongodb
This Kustomization is valid and can be applied as is. I generally prefer reconciling my Kubernetes resources using FluxCD and GitOps. Here's the Kustomization resource for FluxCD:
# gitops/mongodb.yml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: mongodb
  namespace: flux-system
spec:
  force: false
  interval: 10m
  path: ./mongodb
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
  timeout: 3m0s
  wait: true
This stack will be deployed whole and as is. Here's what the Secret/mongodb-secrets will look like:
apiVersion: v1
kind: Secret
metadata:
  name: mongodb-secrets
  namespace: mongodb
type: Opaque
immutable: false
data:
  MONGO_INITDB_ROOT_PASSWORD: eXRROFZBM3pVYTJHcVBsZTdjMTZnc01iTHJ2a0g0OVg=
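The value is standard base64, as with any Kubernetes Secret, so you can decode it to recover the generated password:

```shell
# Equivalently, pull it straight from the cluster with:
#   kubectl get secret mongodb-secrets -n mongodb \
#     -o jsonpath='{.data.MONGO_INITDB_ROOT_PASSWORD}' | base64 -d
printf '%s' 'eXRROFZBM3pVYTJHcVBsZTdjMTZnc01iTHJ2a0g0OVg=' | base64 -d
# → ytQ8VA3zUa2GqPle7c16gsMbLrvkH49X
```

Note that the decoded value is 32 characters with no symbols, exactly as the Password generator above specifies.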
It's a bit out of scope for this guide, but notice that I am creating the Kustomization resource in the flux-system namespace, whereas the final MongoDB Kustomization is deployed in the mongodb namespace. That is because I want FluxCD to take care of the GitOps side, while MongoDB lives in its own dedicated namespace.
PushSecret to AWS SSM Parameter Store
We have generated the MongoDB password using the External Secrets operator's generator API. It is now time to store it in our secrets management system so it can later be used by other parts of the platform or other applications.
# mongodb/kustomization.yml
configMapGenerator:
  - name: mongodb-config
    envs:
      - configs.env
resources:
  - namespace.yml
  - password.yml
  - externalsecret.yml
  - service.yml
  - statefulset.yml
  - pushsecret.yml # <- notice this change
replacements:
  - source:
      kind: StatefulSet
      name: mongodb
      fieldPath: spec.template.metadata.labels
    targets:
      - select:
          kind: Service
          name: mongodb-headless
        fieldPaths:
          - spec.selector
        options:
          create: true
images:
  - name: mongo
    newTag: "7"
namespace: mongodb

# mongodb/pushsecret.yml
apiVersion: external-secrets.io/v1alpha1
kind: PushSecret
metadata:
  name: mongodb
spec:
  updatePolicy: IfNotExists
  deletionPolicy: Delete
  refreshInterval: 2m
  secretStoreRefs:
    - name: aws-parameter-store
      kind: ClusterSecretStore
  selector:
    secret:
      name: mongodb-secrets
  data:
    - match:
        secretKey: MONGO_INITDB_ROOT_PASSWORD
        remoteRef:
          remoteKey: /mongodb/password
As of writing this article, the External Secrets operator and FluxCD do not play well together when it comes to the generator API. Specifically, FluxCD will try to recreate the Password21 resource on every tick of Kustomization.spec.interval. This means the initial password is gone by the time the second tick comes around.
This is possibly a known issue; I can see it being discussed in their GitHub repository22.
Although I haven't found a proper fix yet, specifying updatePolicy: IfNotExists on the PushSecret makes sure we won't lose the actual password that was initially used to bootstrap the MongoDB database.
There may be a better way!
This will result in the following parameter being created in our AWS account.
As you can see in the screenshot, the parameter type is set to String. This is a bug, and you can follow the discussion in the GitHub issue23. Ideally, this should be customizable in PushSecret.spec, allowing us to specify SecureString instead.
Step 5: Deploying the Application that Uses the Secret
Now that we have our database set up and the password stored in the AWS SSM Parameter Store, we can deploy an application that uses this password to connect to the database.
# app/namespace.yml
apiVersion: v1
kind: Namespace
metadata:
  name: app

# app/externalsecret.yml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app
spec:
  data:
    - remoteRef:
        key: /mongodb/password
      secretKey: mongodbPassword
  refreshInterval: 1m
  secretStoreRef:
    kind: ClusterSecretStore
    name: aws-parameter-store
  target:
    immutable: false
    template:
      data:
        MONGO_DSN: mongodb://app:{{ .mongodbPassword | toString }}@mongodb-0.mongodb-headless.mongodb:27017/app?authSource=admin
      mergePolicy: Replace
      type: Opaque

# app/job.yml
apiVersion: batch/v1
kind: Job
metadata:
  name: app
spec:
  template:
    spec:
      containers:
        - name: mongo
          image: mongo
          command:
            - sh
            - -c
            - |
              set -eu
              mongosh --eval 'db.runCommand({ serverStatus : 1 })' "$MONGO_DSN"
          envFrom:
            - secretRef:
                name: app
      restartPolicy: Never
  backoffLimit: 2

# app/kustomization.yml
resources:
  - namespace.yml
  - externalsecret.yml
  - job.yml
images:
  - name: mongo
    newTag: "7"
namespace: app
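To see what the templated MONGO_DSN above resolves to, here is a hypothetical rendering, substituting the sample password from the earlier Secret:

```shell
# Hypothetical: the sample password decoded from the earlier mongodb-secrets Secret.
PASSWORD='ytQ8VA3zUa2GqPle7c16gsMbLrvkH49X'
# The ExternalSecret template interpolates it into the connection string:
echo "mongodb://app:${PASSWORD}@mongodb-0.mongodb-headless.mongodb:27017/app?authSource=admin"
# → mongodb://app:ytQ8VA3zUa2GqPle7c16gsMbLrvkH49X@mongodb-0.mongodb-headless.mongodb:27017/app?authSource=admin
```

Note the host: mongodb-0.mongodb-headless.mongodb is the first StatefulSet pod, addressed through the headless Service in the mongodb namespace.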
Again, you can apply this stack as is, or create its FluxCD Kustomization.
# gitops/app.yml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: app
  namespace: flux-system
spec:
  force: true
  interval: 10m
  path: ./app
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
  timeout: 3m0s
  wait: true
We have intentionally enabled the force flag for this stack because Kubernetes Jobs will complain if you modify any of their immutable fields. Forcing the stack to reconcile means the Job will be recreated with the new changes.
NOTE: You should evaluate the impact of the force flag in your production environment; it may cause downtime if not used carefully. In particular, consider the idempotency of a recreated Job for your application(s).
Conclusion
External Secrets operator is a really appealing tool for fetching secrets from external secrets management systems.
It empowers you to use your desired secrets management system without having to compromise on the security aspect of passing credentials around! I have always opted in for the External Secrets operator when it comes to day-to-day operations of a Kubernetes cluster.
With the mechanisms and APIs provided by ESO, one can easily streamline the day-to-day handling of secrets in a Kubernetes cluster.
You have also seen the power of OIDC and how it can enhance the security posture of the system, reducing the overhead of passing credentials around and the worry about their rotation.
With the knowledge you have gained in this article, you should be able to deploy the External Secrets operator in your Kubernetes cluster and manage your secrets securely, efficiently, and with peace of mind.
I hope you have enjoyed reading this article as much as I have enjoyed writing it. Feel free to reach out through the links provided at the bottom of this article if you have any questions or feedback.
Until next time, ciao 🤠 & happy coding! 🦀 🐧
FAQ
Why not use the SOPS?
I'm not here to dictate what you should and shouldn't use in your stack. But, if you're here and reading this, I will give you my honest opinion.
I think Sops is great for what it's worth.
Yet, I find it truly concerning to commit even the encrypted versions of my secrets to a repository, hoping that no computer will ever be powerful enough to crack them back into plaintext.
I find it disturbing to leave myself at the mercy of the wild internet and push my luck with the most critical part of my workloads: the secrets.
You are more than welcome to disagree, but I just wanted to say why I would never use Sops in my stack.
Why not use the Sealed Secrets, or Vault?
I have never used these tools. In fact, you're free to pick your stack as you please, if you find a good enough reason to do so.
I may, at some point, use Azure Key Vault or HashiCorp Vault as a backend for the External Secrets operator, but that's a story for another day.
Why not use the AWS Secrets Manager?
The AWS SSM Parameter Store24, in its standard tier, is free to use and offers encryption out of the box. I wouldn't want to be charged extra money if I really don't have to.
1. https://kubernetes.io/docs/concepts/configuration/secret/
2. https://external-secrets.io/v0.9.16/
3. https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html
4. https://registry.terraform.io/modules/Azure/aks/azurerm/8.0.0
5. https://ngrok.com/
6. https://www.telepresence.io/
7. https://learn.microsoft.com/en-us/azure/aks/use-oidc-issuer
8. https://github.com/opentofu/opentofu/releases/tag/v1.6.2
9. https://learn.microsoft.com/en-us/cli/azure/install-azure-cli
10. https://github.com/fluxcd/flux2/releases/tag/v2.2.3
11. https://registry.terraform.io/providers/hashicorp/azurerm/3.101.0/docs#authenticating-to-azure
12. https://registry.terraform.io/providers/hashicorp/azurerm/3.101.0/docs/guides/azure_cli
13. https://learn.microsoft.com/en-us/azure/aks/control-kubeconfig-access
14. https://spacelift.io/blog/aws-iam-roles
15. https://docs.aws.amazon.com/cli/latest/reference/sts/assume-role-with-web-identity.html
16. https://en.wikipedia.org/wiki/Principle_of_least_privilege
17. https://external-secrets.io/v0.9.16/api/clustersecretstore/
18. https://github.com/external-secrets/external-secrets/issues/660#issuecomment-2080421742
19. https://external-secrets.io/v0.9.16/provider/aws-parameter-store/#eks-service-account-credentials
20. https://github.com/bitnami/charts/issues/3635
21. https://external-secrets.io/v0.9.16/api/generator/password/
22. https://github.com/external-secrets/external-secrets/discussions/2402
23. https://github.com/external-secrets/external-secrets/issues/3422
24. https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html