On my Kubernetes cluster, my pods are using about 15% (~4 GB) more memory than my memory requests account for. I suspect this is why some of my nodes have been crashing lately. How can I easily find the misconfigured pods and add the missing limits (i.e. find pods without memory requests, or whose memory limits are too high compared to their requests)?
You can use custom-columns as the output format for a kubectl get request. Each column is defined with a JSONPath expression (https://kubernetes.io/docs/reference/kubectl/jsonpath/). For example:
#!/bin/bash
# one custom-columns entry per column: a header name followed by a
# JSONPath expression into the pod object
ns='NAMESPACE:.metadata.namespace'
pod='POD:.metadata.name'
cont='CONTAINER:.spec.containers[*].name'
mreq='MEM_REQ:.spec.containers[*].resources.requests.memory'
mlim='MEM_LIM:.spec.containers[*].resources.limits.memory'
creq='CPU_REQ:.spec.containers[*].resources.requests.cpu'
clim='CPU_LIM:.spec.containers[*].resources.limits.cpu'
# list every pod in every namespace with its per-container requests and limits
kubectl get pod -A -o custom-columns="$ns,$pod,$cont,$mreq,$mlim,$creq,$clim"
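To surface only the pods the question is about, the same output can be filtered. Below is a minimal sketch, meant to be appended to the script above so it reuses the same column variables. It assumes the columns are in the order defined above (MEM_REQ is field 4, MEM_LIM is field 5) and relies on kubectl printing <none> for fields that are not set; the request/limit comparison is a plain string comparison, so normalize Mi/Gi units first if you need an exact ratio.
# keep the header row, plus any row whose memory request is unset
# or whose memory request and limit columns differ
kubectl get pod -A -o custom-columns="$ns,$pod,$cont,$mreq,$mlim,$creq,$clim" \
  | awk 'NR == 1 || $4 == "<none>" || $4 != $5'
Once the offenders are identified, kubectl set resources can patch the owning workload in place, e.g. kubectl set resources deployment <name> --requests=memory=256Mi --limits=memory=512Mi (the values here are only illustrative).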