I tried to create a new Kubernetes cluster configured to require 300GB of persistent disk space (1 zone * 3 nodes * 100GB boot disk per node), but the build ultimately failed because the target zone did not have enough resources. After deleting the failed cluster, I am now trying to deploy a new cluster with the same settings in a different zone.
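For reference, the create command looked roughly like the following (the cluster name and zone here are placeholders, not my actual values; pd-ssd is my assumption based on the quota involved):

$ gcloud container clusters create my-cluster \
    --zone us-west1-b \
    --num-nodes 3 \
    --disk-type pd-ssd \
    --disk-size 100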
However, my Persistent Disk SSD quota for Compute Engine now shows that I have reached my current limit (500GB), even though I have no Compute Engine instances listed in the console and no disks shown after running $ gcloud compute disks list in my Cloud Shell Editor.
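The same quota usage can also be checked from the CLI (the region is a placeholder; the --flatten/--filter combination is my attempt at isolating the relevant metric):

$ gcloud compute regions describe us-west1 \
    --flatten=quotas \
    --filter="quotas.metric=SSD_TOTAL_GB"

This should report the limit and usage for the SSD persistent disk quota in that region, which in my case still shows 500GB used.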
What am I missing here?
I have a couple of hypotheses, one being that the quota usage display is simply cached and slow to update after a deletion. Any input as to the root of this problem would be appreciated.
It turns out that either my caching hypothesis was correct and there was some sort of caching-related delay, or whatever tasks are responsible for deallocating clusters and disk space take quite some time (hours) to complete and update the quota usage. I suspect the latter. I'll leave this question and answer up in case anyone else finds it useful.
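If anyone else hits this, one crude way to wait it out is to re-check the quota periodically until the usage drops back to zero (the interval and region below are arbitrary):

$ watch -n 600 'gcloud compute regions describe us-west1 --flatten=quotas --filter="quotas.metric=SSD_TOTAL_GB"'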