The Kubernetes API is changing all the time. With these changes come deprecations and eventual removals of parts of the API. To keep a Kubernetes cluster on an up-to-date version, we have to identify deprecated APIs and update the affected resources. This can become tedious in larger clusters with hundreds of resources, but tools like pluto can help.
What does API deprecation mean in Kubernetes?
The operations governing the lifecycle of all Kubernetes resources are provided via RESTful API endpoints by the Kubernetes API server. In other words, the Kubernetes API is the frontend of the Kubernetes control plane.
Resource APIs are associated with URIs like
- /apis/GROUP/VERSION/* for cluster-scoped resources, and
- /apis/GROUP/VERSION/namespaces/NAMESPACE/* for namespace-scoped resources.
Some resources are also grouped into the core group (or legacy group). Those are available via the special API endpoint /api/VERSION (see API groups). Pods, for example, are part of the core group. A request to list all pods in a given namespace my-namespace is mapped to the following HTTP GET request.
GET /api/v1/namespaces/my-namespace/pods
(See the API documentation for the Pod resource for reference.)
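If we want to see this mapping in action, kubectl can issue the raw request for us. A quick sketch (assuming my-namespace exists and kubectl points at a running cluster):
# List all pods in my-namespace via the core (legacy) group endpoint
$ kubectl get --raw "/api/v1/namespaces/my-namespace/pods"
# Group APIs live under /apis/GROUP/VERSION, e.g. the networking group
$ kubectl get --raw "/apis/networking.k8s.io/v1"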
The problem with Kubernetes API deprecations
Kubernetes specifies a deprecation policy that defines what it means when parts of an API become deprecated. Essentially, deprecation means that the associated endpoints of the Kubernetes API server are flagged for removal and will subsequently be deleted. Since the API server governs the resource lifecycle, using a resource with a removed API version will prevent the deployment of that resource. Consequently, if we fail to update our resource API versions, we will either be stuck with an outdated Kubernetes version, or updating to the new Kubernetes version will break the deployment of certain resources. Both are undesirable states, since either
- we keep running an outdated (and eventually unsupported) Kubernetes version, or
- our Kubernetes deployment will be incomplete.
Deploying resources with removed API versions
To get a clearer picture, let's have a look at the second problem and see how Kubernetes responds if we try to deploy a resource using a removed API version. To do this, we spin up a local Kubernetes cluster with k3d
$ k3d cluster create
INFO[0000] Prep: Network
INFO[0004] Created network 'k3d-k3s-default'
INFO[0004] Created image volume k3d-k3s-default-images
INFO[0004] Starting new tools node...
INFO[0005] Pulling image 'ghcr.io/k3d-io/k3d-tools:5.4.3'
INFO[0007] Starting Node 'k3d-k3s-default-tools'
INFO[0007] Creating node 'k3d-k3s-default-server-0'
INFO[0009] Pulling image 'docker.io/rancher/k3s:v1.23.6-k3s1'
INFO[0027] Creating LoadBalancer 'k3d-k3s-default-serverlb'
INFO[0028] Pulling image 'ghcr.io/k3d-io/k3d-proxy:5.4.3'
INFO[0030] Using the k3d-tools node to gather environment information
INFO[0030] Starting new tools node...
INFO[0030] Starting Node 'k3d-k3s-default-tools'
INFO[0033] Starting cluster 'k3s-default'
INFO[0033] Starting servers...
INFO[0033] Starting Node 'k3d-k3s-default-server-0'
INFO[0040] All agents already running.
INFO[0040] Starting helpers...
INFO[0040] Starting Node 'k3d-k3s-default-serverlb'
INFO[0049] Injecting records for hostAliases (incl. host.k3d.internal) and for 3 network members into CoreDNS configmap...
INFO[0051] Cluster 'k3s-default' created successfully!
INFO[0051] You can now use it like this:
kubectl cluster-info
Then we switch the Kubernetes context, e.g. using kubectx
$ kubectx k3d-k3s-default
Switched to context "k3d-k3s-default".
Now, we take a look at which version the Kubernetes API server is running by executing kubectl version
$ kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.2", GitCommit:"f66044f4361b9f1f96f0053dd46cb7dce5e990a8", GitTreeState:"clean", BuildDate:"2022-06-15T14:22:29Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"darwin/amd64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.6+k3s1", GitCommit:"418c3fa858b69b12b9cefbcff0526f666a6236b9", GitTreeState:"clean", BuildDate:"2022-04-28T22:16:18Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"linux/amd64"}
As we can see from the Server Version: output, our k3d cluster is running Kubernetes v1.23.
By taking a look at the API deprecation guide, we can see that two versions of the Ingress API, extensions/v1beta1 and networking.k8s.io/v1beta1, were removed in Kubernetes v1.22. So let's try to deploy an Ingress resource with the removed networking.k8s.io/v1beta1 API version and see what happens. Below we have an example manifest that I shamelessly stole from the Ingress Resource section of the official Kubernetes documentation and put into a file ingress-pre-1.22.yaml.
# ingress-pre-1.22.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx-example
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80
And then let's try to deploy it to our v1.23 Kubernetes cluster.
$ kubectl apply -f ingress-pre-1.22.yaml
error: resource mapping not found for name: "minimal-ingress" namespace: "" from "ingress-pre-1.22.yaml": no matches for kind "Ingress" in version "networking.k8s.io/v1beta1"
ensure CRDs are installed first
As we can see, the API returns an error indicating that networking.k8s.io/v1beta1 does not include an Ingress type anymore. If we change the API version to networking.k8s.io/v1, however,
# ingress-1.22.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx-example
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80
our Ingress will be created without a hitch.
$ kubectl apply -f ingress-1.22.yaml
ingress.networking.k8s.io/minimal-ingress created
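As an additional sanity check, we can ask the cluster which API group versions it still serves; on this v1.23 cluster the v1beta1 version of the networking group should no longer show up:
$ kubectl api-versions | grep networking.k8s.io
# expected to list only networking.k8s.io/v1, since v1beta1 was removed in v1.22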
Detecting API deprecations with pluto
In a more realistic scenario, we already have resources deployed to our cluster and want to keep their API versions up to date so that we can update our cluster version safely.
The question therefore is: how do we spot resources with deprecated and soon-to-be-removed API versions? One answer is to consult the deprecation guide mentioned earlier and check which resource API versions will be removed in the upcoming Kubernetes update. It is important to note, however, that if we skip several versions, we will have to repeat this check for every version in between our current and target Kubernetes versions as well.
In large clusters with dozens of resource types and versions, this can become tedious and error-prone. Luckily, there are tools like pluto by FairwindsOps which assist us in spotting deprecated and soon-to-be-removed resource API versions.
Let's, for example, deploy our Ingress resource to a Kubernetes cluster with an API server version earlier than v1.22 (e.g. v1.19). Creating a k3d cluster with an older Kubernetes version is possible by passing the --image option to k3d, specifying a k3s image for the desired Kubernetes version (e.g. v1.19.16-k3s1; a full list of images is available on Docker Hub):
$ k3d cluster delete
INFO[0000] Deleting cluster 'k3s-default'
INFO[0002] Deleting cluster network 'k3d-k3s-default'
INFO[0005] Deleting 2 attached volumes...
WARN[0005] Failed to delete volume 'k3d-k3s-default-images' of cluster 'k3s-default': failed to find volume 'k3d-k3s-default-images': Error: No such volume: k3d-k3s-default-images -> Try to delete it manually
INFO[0005] Removing cluster details from default kubeconfig...
INFO[0005] Removing standalone kubeconfig file (if there is one)...
INFO[0005] Successfully deleted cluster k3s-default!
$ k3d cluster create --image rancher/k3s:v1.19.16-k3s1
INFO[0000] Prep: Network
INFO[0003] Created network 'k3d-k3s-default'
INFO[0003] Created image volume k3d-k3s-default-images
INFO[0003] Starting new tools node...
INFO[0003] Starting Node 'k3d-k3s-default-tools'
INFO[0006] Creating node 'k3d-k3s-default-server-0'
INFO[0009] Pulling image 'rancher/k3s:v1.19.16-k3s1'
INFO[0018] Creating LoadBalancer 'k3d-k3s-default-serverlb'
INFO[0018] Using the k3d-tools node to gather environment information
INFO[0018] Starting new tools node...
INFO[0018] Starting Node 'k3d-k3s-default-tools'
INFO[0021] Starting cluster 'k3s-default'
INFO[0021] Starting servers...
INFO[0021] Starting Node 'k3d-k3s-default-server-0'
INFO[0028] All agents already running.
INFO[0028] Starting helpers...
INFO[0028] Starting Node 'k3d-k3s-default-serverlb'
INFO[0037] Injecting records for hostAliases (incl. host.k3d.internal) and for 3 network members into CoreDNS configmap...
INFO[0039] Cluster 'k3s-default' created successfully!
INFO[0039] You can now use it like this:
kubectl cluster-info
We again confirm that the API server is running Kubernetes v1.19 with kubectl version
$ kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.2", GitCommit:"f66044f4361b9f1f96f0053dd46cb7dce5e990a8", GitTreeState:"clean", BuildDate:"2022-06-15T14:22:29Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"darwin/amd64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.16+k3s1", GitCommit:"da16869555775cf17d4d97ffaf8a13b70bc738c2", GitTreeState:"clean", BuildDate:"2021-11-04T00:55:24Z", GoVersion:"go1.15.14", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.24) and server (1.19) exceeds the supported minor version skew of +/-1
Now, let's again apply our Ingress using the manifest that specifies the deprecated API version.
$ kubectl apply -f ingress-pre-1.22.yaml
Warning: networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
ingress.networking.k8s.io/minimal-ingress created
As we can see, we already get a helpful deprecation warning that recommends using networking.k8s.io/v1 instead of networking.k8s.io/v1beta1. For now, however, we assume that our resources have already been deployed to a running cluster and we want to detect them without applying them again.
This is where tools like pluto come in handy for detecting the use of deprecated resource API versions.
Pluto is available on multiple platforms and via a variety of package managers. We'll use binenv here.
$ binenv install pluto
2022-07-21T19:47:18+02:00 WRN version for "pluto" not specified; using "5.8.0"
fetching pluto version 5.8.0 100% |█████████████████████████████████████████████████████████████████████████████████████████████████████████████████| (11/11 MB, 9.578 MB/s)
2022-07-21T19:47:22+02:00 INF "pluto" (5.8.0) installed
There are a couple of ways to hand resource manifests to pluto in order to detect deprecated API versions: for example, via a directory scan or via direct input. Additionally, there is a convenient integration with Helm that checks our releases for deprecations.
Directory scan
Using a directory scan requires that we know where to find the Kubernetes manifests for our cluster. In our case, this is pretty simple since we only have two manifests and both are in the current directory. The following command makes pluto run a directory scan and detect our deprecated API version.
$ pluto detect-files --directory . --target-versions k8s=v1.22.0
NAME KIND VERSION REPLACEMENT REMOVED DEPRECATED
minimal-ingress Ingress networking.k8s.io/v1beta1 networking.k8s.io/v1 true true
Note that we can specify our target version(s) using the --target-versions option. If we were to target Kubernetes v1.15.0 instead, pluto would return an empty result, because the networking.k8s.io/v1beta1 Ingress API was only deprecated later, in v1.19.
$ pluto detect-files --directory . --target-versions k8s=v1.15.0
No output to display
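Since pluto exits with a non-zero status code when it finds deprecated or removed API versions (see pluto's documentation for the exact exit codes), the directory scan is easy to wire into a CI pipeline. A minimal sketch, assuming our manifests live in a hypothetical ./manifests directory:
#!/usr/bin/env bash
# Hypothetical CI step: fail the build if pluto reports deprecated or removed API versions
set -euo pipefail
pluto detect-files --directory ./manifests --target-versions k8s=v1.22.0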
Direct input
Using direct input, we can pipe a resource manifest directly into pluto. This is particularly useful if we want to scan the resources already deployed to the cluster (and going over all the manifests would be too complicated). Since our Ingress is already deployed, we can obtain its manifest with kubectl get and hand it to pluto.
$ kubectl get ingress minimal-ingress -o yaml | pluto detect -
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
NAME KIND VERSION REPLACEMENT REMOVED DEPRECATED
minimal-ingress Ingress extensions/v1beta1 networking.k8s.io/v1 true true
ℹ️ You may have noticed that pluto reports the deprecated API version as extensions/v1beta1 instead of networking.k8s.io/v1beta1. This is the case because the API server in v1.19 apparently normalises networking.k8s.io/v1beta1 to extensions/v1beta1. The manifest of the deployed Ingress that is returned by kubectl get ingress minimal-ingress -o yaml therefore has the API version extensions/v1beta1.
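To double-check which API version the server returns for the deployed object, we can query just the apiVersion field (a small sketch using the Ingress we created above):
$ kubectl get ingress minimal-ingress -o jsonpath='{.apiVersion}'
# on the v1.19 cluster this is expected to print extensions/v1beta1, matching pluto's report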
Helm releases
If we deploy our resources with Helm, pluto also provides a detect-helm subcommand that checks our releases for deprecated API versions.
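For example, assuming detect-helm accepts the same --target-versions flag we used with detect-files, checking all installed releases in the cluster might look like this:
$ pluto detect-helm --target-versions k8s=v1.22.0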
Summary
The Kubernetes API is constantly evolving. To keep our cluster up to date, we have to continuously watch for deprecated and soon-to-be-removed resource API versions. Manual checks are possible thanks to the Kubernetes deprecation guide, but they can become tedious and error-prone. Tools like pluto allow us to automate these checks and reduce the effort involved in keeping resource API versions up to date.