There are two types of one-shot containers, based on how they handle recovery in case of error; the differentiation between the two approaches comes from a CLI modifier called "--restart".
- A value of "OnFailure" ensures the resource created by kubectl run is restarted if it does not exit cleanly (checked using the exit code).
- A value of "Never" does nothing, regardless of how the resource / pod exits.
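As a quick sketch of the two behaviours (the pod names are just placeholders I picked):

```shell
# Restarted on failure: the container exits with code 1,
# so Kubernetes keeps restarting it (with back-off)
kubectl run flaky-task --restart=OnFailure --image=alpine -- sh -c "exit 1"

# Never restarted: the pod lands in a terminal state no matter
# how the container exits
kubectl run one-off-task --restart=Never --image=alpine -- sh -c "exit 1"
```

One caveat: in older kubectl versions, --restart=OnFailure made kubectl run create a Job; newer versions simply set the pod's restartPolicy. Check your kubectl version's behaviour before relying on either.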
A modifier called "--schedule" allows us to supply a cron expression so that pods are scheduled accordingly (in recent kubectl versions this lives on the kubectl create cronjob command rather than kubectl run). One-shot container modifiers can also be used along with the scheduling modifier.
This is the command for scheduling a container:

```
# Sample command
kubectl create cronjob <job-name> --schedule="<schedule>" --restart=OnFailure --image=<image-name> -- <command-to-container>

# E.g. Command
kubectl create cronjob test-cronjob --schedule="*/3 * * * *" --restart=OnFailure --image=alpine -- sleep 10
```
An example of a container with restart-on-failure and a schedule is a batch job that needs to process a batch of data at some frequency.
I like commands that use intuitive, multi-level switches to provide help on the topic at hand. Thankfully, kubectl follows that practice: it will not only auto-complete keywords, but at every level you can pass the "-h" switch to display help on all available CLI options, their meaning, and some good examples. I love it when developers put thought into making their CLIs really good and helpful, particularly when someone like myself (with the least possible main memory) is using it! ;)
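For instance, the help switch works at every level of nesting:

```shell
# Top-level help: lists all kubectl commands
kubectl -h

# Help for a command group
kubectl create -h

# Help for a specific sub-command, including usage examples
kubectl create cronjob -h
```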
- It seems like the logs command that kubectl supports limits the number of pods it can pull data from to just 8! That does make sense, since it is internally making round trips to the API server, and anything over 8 (magic number ;) ) seems to be harmful to that API layer, which is the center-piece of the whole Kubernetes architecture. (The limit on concurrent log streams can be adjusted with the "--max-log-requests" flag.)
- As I mentioned previously, the logs command, when run without a filter, seems to latch onto a single pod (not even round-robin), so it feels like kubectl's logs command is good for, say, local development while work is in progress, but not a great choice for production.
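For completeness, here is a sketch of pulling logs from several pods at once with a label selector (the "app=my-app" label is a hypothetical one):

```shell
# Stream logs from every pod matching a label selector,
# prefixing each line with the pod name so streams stay distinguishable
kubectl logs -l app=my-app --prefix -f

# Raise the number of concurrent log streams kubectl will open
kubectl logs -l app=my-app --prefix -f --max-log-requests=10
```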
- My application of Kubernetes is for the cloud, specifically on AWS via their offering, EKS (AWS Managed Kubernetes Cluster), so it seems like I'll have to figure out this "logging" part really well while using EKS. (I still haven't looked at EKS myself, but deep down every fibre of my body is telling me that AWS would support an option to route all logs to CloudWatch Logs out of the box.)
- One option to manage logs that I learned about from my Kubernetes training / course is called Stern.
It seems like a good tool; I gave it a try, and it has all the options normal logs has, plus some more. I would encourage everyone to try it out for local usage.
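A minimal sketch of using it (the "my-app" pattern is a placeholder; stern matches pod names by regex):

```shell
# Tail logs from all pods whose names match the regex "my-app",
# across restarts and newly created pods
stern my-app

# Same, but scoped to a namespace and with timestamps on each line
stern my-app -n default --timestamps
```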
```
# Multiple resources can be deleted together
kubectl delete "<resource-type>/<resource-name>" "<resource-type>/<resource-name>"
```
Delete does not mean delete right away! It will still observe a wait time while pods move to the "Terminating" state before finally being killed off.
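That wait time can be tuned per delete; a quick sketch (the pod name is a placeholder):

```shell
# Default graceful delete: the pod gets its termination grace period
# (30 seconds unless the pod spec says otherwise)
kubectl delete pod my-pod

# Shorten the wait to 5 seconds
kubectl delete pod my-pod --grace-period=5

# Force immediate deletion, skipping the graceful wait (use with care)
kubectl delete pod my-pod --grace-period=0 --force
```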
Phoof, this was a longer post than I had anticipated ...