When we switched to Kubernetes a few weeks back, we had quite a few services to migrate from our Docker Compose setup.

So we happily migrated with the utmost speed for our review, but one microservice didn't behave. Traefik, our ingress controller, just didn't see it. The Ingress resource was picked up fine, but pings and curls went nowhere. Our deployment was stuck in some kind of void.
I kid you not, I debugged this for a whole week straight. I did everything, from updating software versions to tearing down and setting up the whole environment again multiple times (thankfully this is completely automated by now).
I finally gave up and migrated a new service instead, just to have something to show in our review. I wrote my three k8s manifests and, as I expected, it worked smoothly. Which bugged me even more!
So I tackled the broken service again and did a stupid vimdiff to compare it with a working service. Then it struck me, in colorful diff text: a label was wrong...
Since not everyone is familiar with Kubernetes, here are the three most important manifests of a Kubernetes deployment:

- deployment.yaml
- service.yaml
- ingress.yaml
For Kubernetes to know that a Service belongs to a Deployment, you have to set labels as the glue: the Service's `selector` must match the labels on the Deployment's Pod template.
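For the `prometheus` service from this story, the wiring looks roughly like this (a minimal sketch, with most required fields like ports and resource limits omitted):

```yaml
# deployment.yaml (excerpt)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus
spec:
  selector:
    matchLabels:
      app: prometheus      # must match the Pod template labels below
  template:
    metadata:
      labels:
        app: prometheus    # the label the Service selects on
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus
---
# service.yaml (excerpt)
apiVersion: v1
kind: Service
metadata:
  name: prometheus
spec:
  selector:
    app: prometheus        # must match the Pod labels, or traffic goes nowhere
  ports:
    - port: 9090
```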
Guess what happens if the Service's selector says `ms: prometheus` instead of `app: prometheus`. Kubernetes has no idea it belongs to the Deployment with

```yaml
matchLabels:
  app: prometheus
```
and routes requests to /dev/null. Guess what the reason was? A colleague had copied templates of those three files from some blog without checking whether the labels matched. And dumbass that I am, I assumed they were correct and looked everywhere else...
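In hindsight, this kind of mismatch is trivial to catch mechanically. Here's a hedged sketch (not something we actually ran; the function name and the plain dicts standing in for parsed YAML are my own) of the check a Service selector has to pass:

```python
# Hypothetical sanity check: does a Service's selector actually match
# the labels on a Deployment's Pod template? Plain dicts stand in for
# the parsed YAML manifests.

def selector_matches(pod_labels: dict, selector: dict) -> bool:
    """True if every key/value the Service selects on is present
    on the Pods the Deployment creates."""
    return all(pod_labels.get(k) == v for k, v in selector.items())

# Labels from the Deployment's spec.template.metadata.labels
pod_labels = {"app": "prometheus"}

# The broken Service from this story: "ms" instead of "app"
print(selector_matches(pod_labels, {"ms": "prometheus"}))   # False
# The fixed selector
print(selector_matches(pod_labels, {"app": "prometheus"}))  # True
```

A vimdiff found it eventually, but a one-line check like this in CI would have saved the week.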
So that was my nightmare for a week.