What is the best way to create a PersistentVolumeClaim with ReadWriteMany access so the volume can be attached to multiple pods?
Based on the support table at https://kubernetes.io/docs/concepts/storage/persistent-volumes, GCEPersistentDisk does not support ReadWriteMany natively.
What is the best approach when working in the GCP GKE world? Should I be using a clustered file system such as CephFS or GlusterFS? Are there any recommendations for something that is production ready?
I was able to get an NFS pod deployment configured by following the steps here - https://medium.com/platformer-blog/nfs-persistent-volumes-with-kubernetes-a-case-study-ce1ed6e2c266 - however it seems a bit hacky and adds another layer of complexity. It also only allows one replica (which makes sense, as the disk can't be mounted read-write multiple times), so if/when the pod goes down, my persistent storage goes down with it.
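For context, the setup from that article boils down to a single-replica NFS server backed by a ReadWriteOnce disk, roughly like this (a minimal sketch; the image follows the common Kubernetes NFS-server example, and the disk name is hypothetical):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-server
spec:
  replicas: 1                  # can't scale out: the backing GCE disk is ReadWriteOnce
  selector:
    matchLabels:
      role: nfs-server
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
      - name: nfs-server
        image: gcr.io/google_containers/volume-nfs:0.8
        ports:
        - name: nfs
          containerPort: 2049
        - name: mountd
          containerPort: 20048
        - name: rpcbind
          containerPort: 111
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /exports
          name: nfs-backing-disk
      volumes:
      - name: nfs-backing-disk
        gcePersistentDisk:
          pdName: nfs-disk     # a pre-created GCE persistent disk (hypothetical name)
          fsType: ext4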
Now we have a disk available to be used as a PV in GKE. The next step is to create a PersistentVolume named app-storage backed by the gke-pv disk. To use the persistent volume with a pod, we then create a PersistentVolumeClaim with the same name we use in the PV's claimRef, i.e. app-storage-claim. A sketch of both objects follows.
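Here is roughly what those two objects look like (a minimal sketch assuming a pre-created GCE disk named gke-pv; the size is illustrative):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-storage
spec:
  capacity:
    storage: 50Gi              # illustrative size
  accessModes:
  - ReadWriteOnce              # GCE persistent disks only allow single-node read-write
  claimRef:                    # pre-binds this PV to the claim below
    namespace: default
    name: app-storage-claim
  gcePersistentDisk:
    pdName: gke-pv
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-storage-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi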
It's possible now with Cloud Filestore.
First create a Filestore instance.
gcloud filestore instances create nfs-server \
    --project=[PROJECT_ID] \
    --zone=us-central1-c \
    --tier=STANDARD \
    --file-share=name="vol1",capacity=1TB \
    --network=name="default",reserved-ip-range="10.0.0.0/29"
Then create a persistent volume in GKE.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fileserver
spec:
  capacity:
    storage: 1T
  accessModes:
  - ReadWriteMany
  storageClassName: "fileserver"   # must match the claim's storageClassName below
  nfs:
    path: /vol1
    server: [IP_ADDRESS]
[IP_ADDRESS] is available in the Filestore instance details.
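If you're scripting this, you can also pull the IP with gcloud (assuming the instance created above; the --format expression reflects the Filestore API's networks[].ipAddresses[] fields):

gcloud filestore instances describe nfs-server \
    --zone=us-central1-c \
    --format="value(networks[0].ipAddresses[0])"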
You can now request a persistent volume claim.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fileserver-claim
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: "fileserver"
  resources:
    requests:
      storage: 100G
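After applying both manifests, verify that the claim bound to the PV (the filenames here are hypothetical):

kubectl apply -f fileserver-pv.yaml
kubectl apply -f fileserver-pvc.yaml
kubectl get pvc fileserver-claim    # STATUS should show Bound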
Finally, mount the volume in your pod.
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: nginx:latest
    volumeMounts:
    - mountPath: /workdir
      name: mypvc
  volumes:
  - name: mypvc
    persistentVolumeClaim:
      claimName: fileserver-claim
      readOnly: false
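Because the claim has ReadWriteMany access, the same PVC can be mounted by many pods at once, which is what the question asks for. For example, a multi-replica Deployment (illustrative names; same mount as the pod above):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                  # every replica shares the same NFS-backed volume
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container
        image: nginx:latest
        volumeMounts:
        - mountPath: /workdir
          name: mypvc
      volumes:
      - name: mypvc
        persistentVolumeClaim:
          claimName: fileserver-claim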
The full solution is detailed here: https://cloud.google.com/filestore/docs/accessing-fileshares
I agree that it's disappointing, but it's a consequence of using GCE persistent disks, which cannot be attached read-write to multiple instances.
I've had success with NFS, albeit with the limitations you describe.
You could also, as you state, use GlusterFS or a similar clustered file system.
A more expensive, albeit managed, Google Cloud alternative is Cloud Filestore: https://cloud.google.com/filestore/docs/accessing-fileshares
Your question suggests that you need NFS-like semantics but, if you don't, you may consider using Google Cloud Storage instead.
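With Cloud Storage, each pod reads and writes objects directly rather than sharing a filesystem mount. For example, with the gsutil CLI (the bucket name is hypothetical):

gsutil mb gs://my-shared-bucket
gsutil cp output.txt gs://my-shared-bucket/workdir/output.txt
gsutil cat gs://my-shared-bucket/workdir/output.txt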