When debugging Kubernetes deployments or experimenting with Helm charts during development, we can get stuck on some strange errors. CreateContainerConfigError and CreateContainerError are two of them. They occur when Kubernetes tries to create a container in a pod but fails before the container enters the Running state.
You can identify these errors by running the kubectl get pods command; the pod status will show the error like this:
```
NAME       READY   STATUS                       RESTARTS   AGE
my-pod-1   0/1     CreateContainerConfigError   0          1m23s
my-pod-2   0/1     CreateContainerError         0          1m55s
```
The folks at Komodor have described the common causes of these errors and how to resolve them. Note, however, that there are many more causes of container startup errors, and many cases are difficult to diagnose and troubleshoot.
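When the pod status alone doesn't tell you enough, the events attached to the pod usually spell out the underlying problem. A minimal sketch, using the pod name from the output above:

```
# Inspect the pod's events; the last lines usually contain the root cause,
# e.g. an "Error: secret ... not found" message
kubectl describe pod my-pod-1

# Or list all events in the namespace, newest last
kubectl get events --sort-by=.metadata.creationTimestamp
```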
One of the obvious reasons for getting CreateContainerConfigError is that Kubernetes cannot pull the Docker image it needs to create the pod, because a required secret is missing from the working namespace. Secrets are namespaced: a secret cannot be accessed by pods outside its own namespace!
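You can verify this quickly. A minimal check, assuming the pod pulls its image with the registry secret named gitlab-registry from the example below:

```
# Does the secret exist in the namespace the pod runs in?
kubectl get secret gitlab-registry --namespace=devspectrum-dev

# Which pull secret does the pod actually reference?
kubectl get pod my-pod-1 -o jsonpath='{.spec.imagePullSecrets[*].name}'
```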
A workaround is to manually copy the required secret from a namespace where it exists (e.g., production) to the one where it is missing (e.g., development). Here's an example of how to do that (thanks to the Revsys team for this tip):
```
kubectl get secret gitlab-registry --namespace=revsys-com --export -o yaml | kubectl apply --namespace=devspectrum-dev -f -
```
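Note that the --export flag was deprecated and then removed in kubectl 1.18, so on recent clusters the command above will fail. A sketch of the same copy without --export, assuming the same secret and namespaces; the sed filters are a quick hack to rewrite the namespace and drop the server-assigned fields that would block creation:

```
# Copy the secret by rewriting its namespace in transit and stripping
# resourceVersion/uid/creationTimestamp, which must not be set on new objects
kubectl get secret gitlab-registry --namespace=revsys-com -o yaml \
  | sed -e 's/namespace: revsys-com/namespace: devspectrum-dev/' \
        -e '/resourceVersion:/d' -e '/uid:/d' -e '/creationTimestamp:/d' \
  | kubectl apply --namespace=devspectrum-dev -f -
```

For anything beyond a one-off copy, a YAML-aware tool such as yq is a cleaner way to drop those fields than sed.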