K8s can be installed with the Cluster Autoscaler: github.com/kubernetes/autoscaler . In real life, people using cloud providers set up their autoscaler; most autoscaling is driven by metrics and by how the pods are configured. Look up the Kubernetes HPA (Horizontal Pod Autoscaler) and VPA (Vertical Pod Autoscaler).
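As a rough sketch of what metric-driven scaling looks like, here is a minimal HPA manifest that scales a Deployment on average CPU utilization (the names `web-hpa` and `web` are placeholders, not from the original post):

```yaml
# Hypothetical example: keep between 2 and 50 replicas of the "web"
# Deployment, targeting ~70% average CPU utilization across pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 50
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

Note the HPA only scales pods; it's the Cluster Autoscaler that adds or removes nodes when those pods can't be scheduled.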
Example: each node can hold only 40 pods. Your deployment kicks in and requests an additional 300 pods. Now the autoscaler kicks in and creates worker nodes accordingly.
The same goes for metrics like CPU/RAM, etc. That's why it's important to set resource boundaries on namespaces.
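Those namespace boundaries are typically set with a ResourceQuota. A minimal sketch (the namespace `team-a` and the numbers are illustrative assumptions):

```yaml
# Hypothetical quota capping total resource requests/limits and
# pod count for everything in the "team-a" namespace.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "10"
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
    pods: "100"
```

With a quota like this in place, a runaway deployment can't force the autoscaler to grow the cluster without bound.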
Hope that answers the question; if not, I'll give some additional info when I get on my laptop.
So it seems that Kubernetes is optimized for public clouds then?