I faced a very interesting issue during the removal of a Kubernetes Namespace.
After kubectl delete namespace NAMESPACE is executed, the namespace hangs in the Terminating state, and no attempt to forcibly remove it helps.
First, let’s see how such a forced removal can be done, and then we’ll check the real cause of this behavior and a solution.
Create a test namespace:
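The exact commands weren’t preserved in this copy, so the namespace name test-namespace below is just an assumed example:

$ kubectl create namespace test-namespace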
Try to remove it – and it hangs:
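$ kubectl delete namespace test-namespace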
Check its state – it’s Terminating:
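$ kubectl get namespace test-namespace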
During this, nothing was printed to the API server logs.
Namespace removing ways: --force and --grace-period
Okay, maybe there are some resources and the namespace is waiting for them? Find them all:
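A common way to list everything still left in a namespace is to iterate over all namespaced resource types (the namespace name is the assumed one from above):

$ kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get --show-kind --ignore-not-found -n test-namespace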
No, nothing.
And deleting with --force and --grace-period=0 didn’t help either:
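$ kubectl delete namespace test-namespace --force --grace-period=0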
The namespace is still present, and still in the Terminating state.
Clean up finalizers
After googling, almost every solution found suggested removing kubernetes from the finalizers – edit the namespace:
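$ kubectl edit namespace test-namespace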
And here is the kubernetes line:
...
spec:
  finalizers:
  - kubernetes
...
Save the changes – and nothing happens. The namespace stays in the same state, and the finalizers=kubernetes entry comes back.
APIs: custom.metrics.k8s.io/v1beta1: the server is currently unable to handle the request
But keep reading, it gets better.
If you execute the kubectl api-resources command, you may see an error about custom.metrics.k8s.io:
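The original output wasn’t preserved here; with an unavailable aggregated API the command typically ends with an error like this:

$ kubectl api-resources
...
error: unable to retrieve the complete list of server APIs: custom.metrics.k8s.io/v1beta1: the server is currently unable to handle the request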
That pointed me to the idea that something was wrong with the metrics-server.
A metrics-server version?
The first thought was about the version being used, as we installed it a long time ago and were still using 0.3.6:
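One way to check which image is deployed (the kube-system namespace and the metrics-server Deployment name are assumptions – adjust for your setup):

$ kubectl -n kube-system get deployment metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'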
Let’s try to install the latest one, 0.4.2:
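For example, by applying the upstream manifest for that release (assuming a plain-manifest installation; a Helm-based setup would be upgraded differently):

$ kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.4.2/components.yaml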
And still nothing…
metrics-server arguments?
Well, maybe the issue is in the connection to the metrics-server?
Try updating the --kubelet-insecure-tls and --kubelet-preferred-address-types=InternalIP arguments, and even enabling hostNetwork=true, like so:
...
spec:
  hostNetwork: true
  containers:
  - args:
    - --cert-dir=/tmp
    - --secure-port=4443
    - --kubelet-preferred-address-types=InternalIP
    - --kubelet-use-node-status-port
    - --kubelet-insecure-tls
...
Nope…
And all this while, the kubectl top command keeps working, so the metrics-server service itself is working:
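For example:

$ kubectl top node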
So, everything is good? Kubernetes can connect to it, and can receive metrics?
Kubernetes apiservices
Now, let’s go back to the error message:
custom.metrics.k8s.io/v1beta1: the server is currently unable to handle the request
And check the apiservice v1beta1.custom.metrics.k8s.io
resource:
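$ kubectl get apiservice v1beta1.custom.metrics.k8s.io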
Aha, here it is!
monitoring/prometheus-adapter False (ServiceNotFound)
Kubernetes has to call the monitoring/prometheus-adapter service – let’s check whether it’s available:
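The monitoring namespace and the service name come from the apiservice output above:

$ kubectl -n monitoring get service prometheus-adapter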
No, prometheus-adapter is absent.
And that’s why the custom.metrics.k8s.io API was broken. During namespace deletion, the API server has to enumerate every resource type, including those served by aggregated APIs, and while this apiservice was unavailable, that discovery failed – so the namespace could never finish terminating.
Install the adapter:
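One common way is the prometheus-community Helm chart – an assumption here, as the original installation method isn’t shown:

$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo update
$ helm install prometheus-adapter prometheus-community/prometheus-adapter -n monitoring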
Check services again:
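$ kubectl -n monitoring get service prometheus-adapter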
And apiservices:
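$ kubectl get apiservice v1beta1.custom.metrics.k8s.io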
And our namespace:
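$ kubectl get namespace test-namespace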
And after a few seconds:
Done.
Also published on Medium.