 

Kubelet stopped posting node status (Kubernetes)

I am running a Kubernetes cluster on EKS with two worker nodes. Both nodes are showing NotReady status, and when I checked the kubelet logs on both nodes I see the errors below:

k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Unauthorized
k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Unauthorized
k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Unauthorized

Is there any way I can check which credentials are being used, and how can I fix this error?
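For the first part of the question: on the EKS-optimized AMI, the kubelet gets its token from the node's instance-profile role via aws-iam-authenticator / aws eks get-token, so a rough way to see which identity it presents is sketched below (file paths are those used by the standard EKS AMI and may differ on custom images):

# Inspect the kubelet's kubeconfig to see how it obtains credentials
sudo cat /var/lib/kubelet/kubeconfig

# The token comes from the instance profile, so this shows the role the API server sees
aws sts get-caller-identity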

asked Sep 15 '25 by i_1108

1 Answer

Check the aws-auth ConfigMap to verify that the IAM role used by the worker nodes has the proper permissions. You can also enable the EKS control plane logs in CloudWatch and check the authenticator logs to see which role is being denied access.
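A rough sketch of both checks (cluster name, region, account ID, and role ARN are placeholders, not values from the question):

# Look at the role mappings the cluster currently trusts
kubectl -n kube-system get configmap aws-auth -o yaml

# A working node mapping under data.mapRoles typically looks like:
#   - rolearn: arn:aws:iam::111122223333:role/my-node-instance-role
#     username: system:node:{{EC2PrivateDNSName}}
#     groups:
#       - system:bootstrappers
#       - system:nodes

# Enable authenticator logging; denied roles then show up in the
# CloudWatch log group /aws/eks/my-cluster/cluster
aws eks update-cluster-config \
  --region us-east-1 \
  --name my-cluster \
  --logging '{"clusterLogging":[{"types":["authenticator"],"enabled":true}]}'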

You can reset the ConfigMap at any time with the same IAM user/role that was used to create the cluster, even if that identity is not listed in the ConfigMap.
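If eksctl is available, re-creating the node role mapping while authenticated as the cluster creator looks roughly like this (ARN, cluster name, and region are placeholders):

eksctl create iamidentitymapping \
  --cluster my-cluster \
  --region us-east-1 \
  --arn arn:aws:iam::111122223333:role/my-node-instance-role \
  --username 'system:node:{{EC2PrivateDNSName}}' \
  --group system:bootstrappers \
  --group system:nodes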

It is important that you do not delete this role/user from IAM.

answered Sep 17 '25 by praveen.chandran