I'm trying to dynamically increase the maxPods of a k8s node. Currently it's set to 30 and I want to increase it to 50. I'm following these instructions: https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/
On the node I added the --dynamic-config-dir flag and restarted kubelet.
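For reference, I enabled it roughly like this (this assumes a kubeadm-style install where extra kubelet args live in /etc/default/kubelet; the dynamic-config directory path is just one I picked):

# /etc/default/kubelet
KUBELET_EXTRA_ARGS=--dynamic-config-dir=/var/lib/kubelet/dynamic-config

sudo systemctl daemon-reload
sudo systemctl restart kubelet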
I then added a configmap with maxPods set to 50 and added a configSource section to the node:
configSource:
  configMap:
    name: my-node-name
    namespace: kube-system
    kubeletConfigKey: kubelet
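For completeness, the ConfigMap was created from a dump of the current kubelet config (kubelet.yaml is just the file I edited maxPods in) and attached with kubectl edit; the --append-hash flag is why the name in the status below carries a suffix:

kubectl -n kube-system create configmap my-node-config --from-file=kubelet=kubelet.yaml --append-hash
kubectl edit node ${NODE_NAME}   # add the configSource block above under spec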
And I can verify the configmap has been applied via:
kubectl get no ${NODE_NAME} -o json | jq '.status.config'
{
  "active": {
    "configMap": {
      "kubeletConfigKey": "kubelet",
      "name": "my-node-config-c87h9fm782",
      "namespace": "kube-system",
      "resourceVersion": "54483288",
      "uid": "144bd26f-d290-11ea-bb9d-5e7d8d22eed0"
    }
  },
  "assigned": {
    "configMap": {
      "kubeletConfigKey": "kubelet",
      "name": "my-node-config-c87h9fm782",
      "namespace": "kube-system",
      "resourceVersion": "54483288",
      "uid": "144bd26f-d290-11ea-bb9d-5e7d8d22eed0"
    }
  },
  "lastKnownGood": {
    "configMap": {
      "kubeletConfigKey": "kubelet",
      "name": "my-node-config-c87h9fm782",
      "namespace": "kube-system",
      "resourceVersion": "54483288",
      "uid": "144bd26f-d290-11ea-bb9d-5e7d8d22eed0"
    }
  }
}
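(The same status object also has an error field that can be checked like this, in case the kubelet rejected the assigned config:)

kubectl get node ${NODE_NAME} -o jsonpath='{.status.config.error}'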
But the node is still only running 30 pods, and any attempt to deploy another pod fails, with the pod stuck in the Pending state.
I pulled down the ConfigMap and verified that maxPods is set to 50.
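The check was just a grep against the ConfigMap, using the hashed name from the status output above:

kubectl -n kube-system get configmap my-node-config-c87h9fm782 -o yaml | grep maxPods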
What am I missing? Any help would be greatly appreciated.
Thanks,
Jerry
The default max pods value per node is 110
ubuntu@k8s-master-1:~$ kubectl describe nodes k8s-worker-1 | grep -i "Capacity\|Allocatable" -A 6
Capacity:
cpu: 2
ephemeral-storage: 12131484Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 4046184Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 11180375636
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 3943784Ki
pods: 110
Find the config file path for kubelet on the worker node
ubuntu@k8s-worker-1:~$ ps -ef | grep -i kubelet
ubuntu 5077 5005 0 20:07 pts/0 00:00:00 grep --color=auto -i kubelet
root 23930 1 1 06:42 ? 00:15:18 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.2
Update/add the maxPods field in /var/lib/kubelet/config.yaml as needed and restart kubelet on the worker node
ubuntu@k8s-worker-1:~$ sudo cat /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
maxPods: 500
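For example, appending the field and restarting (assuming kubelet runs as a systemd service, as it does with kubeadm; the tee append is just one way to add the field if it isn't already there):

echo "maxPods: 500" | sudo tee -a /var/lib/kubelet/config.yaml
sudo systemctl restart kubelet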
Verify that the pods value has changed after kubelet is restarted.
ubuntu@k8s-master-1:~$ kubectl describe nodes k8s-worker-1 | grep -i "Capacity\|Allocatable" -A 6
Capacity:
cpu: 2
ephemeral-storage: 12131484Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 4046184Ki
pods: 500
Allocatable:
cpu: 2
ephemeral-storage: 12131484Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 3943784Ki
pods: 500
If you don't specify maxPods when creating new node pools on AKS with Azure CNI, you get a default value of 30.
You can configure the maximum number of pods per node at cluster deployment time or as you add new node pools. If you deploy with the Azure CLI or with a Resource Manager template, you can set the maximum pods per node value as high as 250.
You can change it via the Azure CLI: specify the --max-pods argument when you deploy a cluster with the az aks create command. The maximum value is 250.
You can't change the maximum number of pods per node when you deploy a cluster with the Azure portal.
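For example (the resource group, cluster, and node pool names here are placeholders):

# set max pods at cluster creation
az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 3 --network-plugin azure --max-pods 50

# or when adding a new node pool
az aks nodepool add --resource-group myResourceGroup --cluster-name myAKSCluster --name mynodepool --node-count 3 --max-pods 50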
Please see the AKS documentation for how to change the value in an existing cluster.