Patrick Londa for Blink Ops

Originally published at blinkops.com

Finding and Deleting Orphaned ConfigMaps

If you don’t take steps to maintain your Kubernetes cluster, you could end up wasting money and storage on orphaned resources. Orphaned (or unused) resources, like ConfigMaps, Secrets, and Services, should be regularly located and removed to clear up storage space and prevent performance issues.

In this post, we’ll be focusing on how to find and remove orphaned ConfigMaps.

ConfigMaps are API objects that store small amounts of non-confidential configuration data as key-value pairs. They decouple configuration from container images and application code, which keeps applications portable, but they are not designed to hold secret or encrypted data; use Secrets for that.
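
For illustration, here's a minimal sketch of creating and inspecting one; the name app-config and its keys are hypothetical:

# Create a ConfigMap from literal key-value pairs
kubectl create configmap app-config \
  --from-literal=LOG_LEVEL=info \
  --from-literal=MAX_CONNECTIONS=100

# Inspect the stored data
kubectl describe configmap app-config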

ConfigMaps become orphaned when the workload they were created to support is deleted without cleaning them up, or when their owner objects have been purged. Once orphaned, these ConfigMaps clutter your cluster and consume etcd storage, making resources harder to track and, at scale, degrading performance.
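
As a hypothetical example (my-app and app-config are placeholder names), deleting a Deployment does not automatically remove the ConfigMaps it referenced:

# The Deployment is gone, but its ConfigMap lives on as an orphan
kubectl delete deployment my-app
kubectl get configmap app-config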

Finding and Deleting Orphaned ConfigMaps

Here are some steps you can take to find and remove orphaned ConfigMaps:

Step 1: Find all ConfigMaps

First off, you can generate a list of all ConfigMaps using this command:

kubectl get configmaps --all-namespaces -o json

This command returns every ConfigMap across all namespaces, but as you'll see, a ConfigMap does not record which pods reference it. You'll need to build a second list to identify which ConfigMaps are actually in use.
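
If you just want a quick, human-readable overview first, the plain table output lists names, data counts, and ages without the JSON detail:

kubectl get configmaps --all-namespaces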

Step 2: Compare with a List of Used ConfigMaps

To find orphaned ConfigMaps, you need the list of ConfigMaps that pods actually reference, whether mounted as volumes, included in projected volumes, or consumed through environment variables. The following script collects those names and diffs them against the full list of ConfigMaps:

# ConfigMaps mounted as volumes
volumesCM=$(kubectl get pods -o jsonpath='{.items[*].spec.volumes[*].configMap.name}' | xargs -n1)

# ConfigMaps included in projected volumes
volumesProjectedCM=$(kubectl get pods -o jsonpath='{.items[*].spec.volumes[*].projected.sources[*].configMap.name}' | xargs -n1)

# ConfigMaps referenced by individual environment variables
envCM=$(kubectl get pods -o jsonpath='{.items[*].spec.containers[*].env[*].valueFrom.configMapKeyRef.name}' | xargs -n1)

# ConfigMaps referenced via envFrom
envFromCM=$(kubectl get pods -o jsonpath='{.items[*].spec.containers[*].envFrom[*].configMapRef.name}' | xargs -n1)

diff \
<(echo -e "$volumesCM\n$volumesProjectedCM\n$envCM\n$envFromCM" | sort | uniq) \
<(kubectl get configmaps -o jsonpath='{.items[*].metadata.name}' | xargs -n1 | sort | uniq)

In the diff output, names prefixed with > appear only in the full ConfigMap list and are not referenced by any pod; those are your orphan candidates. Note that these commands only inspect the current namespace, so repeat the comparison in each namespace you want to clean up.
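
For example, if the cluster contained two ConfigMaps that no pod references, the diff output would include lines like these (the names are hypothetical, and diff's position markers are omitted):

> old-feature-flags
> stale-config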

Step 3: Delete Orphaned ConfigMaps

Now that you have a list of orphaned ConfigMaps, you can run this command for each one (replacing samplemap with the ConfigMap's name) to delete it and free up storage in your cluster:

kubectl delete configmap/samplemap

Example output:

configmap "samplemap" deleted
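
If you have more than a few orphans, a minimal sketch like this deletes every name listed in a file (orphans.txt is a hypothetical file holding one ConfigMap name per line; review it carefully before running):

# Delete each ConfigMap named in orphans.txt
while read -r name; do
  kubectl delete configmap "$name"
done < orphans.txt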

Once you’ve deleted the orphaned ConfigMaps you found, you’ll have cleared unneeded, unused resources out of your cluster and reclaimed storage. Making this cleanup a regular habit keeps your cluster lean and your team's Kubernetes resource management on track.

Thanks for reading! Let me know if this worked for you.
