I am wondering if there is an easy way to mount Amazon EFS (Elastic File System) as a volume in a local docker-compose setup.
The reason: for local development, volumes are persisted on my laptop, so if I were to change machines, I couldn't access any of that underlying data. A cloud NFS share would solve this problem, as it would be readily available from anywhere.
The AWS documentation (https://docs.aws.amazon.com/efs/latest/ug/efs-onpremises.html) seems to suggest using AWS Direct Connect or a VPN. Is there any way to avoid this by opening port 2049 (NFS traffic) to all IP addresses in a security group, and applying that security group to a newly created EFS?
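For illustration, the rule I have in mind would look something like this (the security group ID is just a placeholder):

# Hypothetical rule: allow inbound NFS (TCP 2049) from any address
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 2049 \
    --cidr 0.0.0.0/0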
Here is my docker-compose.yml:
version: "3.2"
 
services:
  postgres_db:
    container_name: "postgres"
    image: "postgres:13"
    ports:
      - 5432:5432
    volumes:
      - type: volume
        source: postgres_data
        target: /var/lib/postgresql/data
        volume:
          nocopy: true
    environment: 
      POSTGRES_USER: 'admin'
      POSTGRES_PASSWORD: "password"
 
volumes: 
  postgres_data:
    driver_opts:
      type: "nfs"
      o: "addr=xxx.xx.xx.xx,nolock,soft,rw"
      device: ":/docker/example"
I am getting the following error:
ERROR: for postgres_db  Cannot start service postgres_db: error while mounting volume '/var/lib/docker/volumes/flowstate_example/_data': failed to mount local volume: mount :/docker/example:/var/lib/docker/volumes/flowstate_example/_data, data: addr=xxx.xx.xx.xx,nolock,soft: connection refused
I interpret this to mean that my laptop is not part of the EFS VPC, and hence cannot mount the EFS.
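A quick way to check whether the mount target is reachable at all from my laptop (same placeholder address as above) is:

# Probe the NFS port; "connection refused" here matches the compose error above
nc -zv xxx.xx.xx.xx 2049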
For added context, I am looking to dockerize a web scraping setup and have the data volume persisted in the cloud so I can connect to it from anywhere.
EFS uses NFSv4, so:
version: '3.8'
services:
  postgres:
    image: "postgres:13"
    volumes:
      - postgres_data:/var/lib/postgresql/data
volumes: 
  postgres_data:
    driver_opts:
      type: "nfs4"
      o: "addr=xxx.xx.xx.xx,nolock,soft,rw"
      device: ":/docker/postgres_data"
Of course, the referenced NFS export/path must exist; Swarm will not automatically create non-existent folders.
Make sure to delete any old Docker volumes of this faulty kind/name manually (on all Swarm nodes!) before recreating the stack:
docker volume rm $(docker volume ls -f name=postgres_data -q)
This is important to understand: a Docker NFS volume is really only a declaration of where to find the data. It is not updated when you change your docker-compose.yml, so you must remove the old volume for any new configuration to take effect.
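In practice the full cycle looks like this (assuming the stack is named stack, matching the stack_postgres service referenced below):

# Tear down the stack, remove the stale volume (repeat on every node), then redeploy
docker stack rm stack
docker volume rm $(docker volume ls -f name=postgres_data -q)
docker stack deploy -c docker-compose.yml stack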
See the output of
docker service ps stack_postgres --no-trunc
for more information on why the volume couldn't be mounted.
Also make sure you can mount the NFS export manually via mount -t nfs4 ...
To list the available exports, see showmount -e your.efs.ip.address
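For example, a manual test mount could look like this (the mount point /mnt/efs-test is just a placeholder):

# Create a temporary mount point and mount the EFS export by hand
sudo mkdir -p /mnt/efs-test
sudo mount -t nfs4 -o nfsvers=4.1 your.efs.ip.address:/ /mnt/efs-test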
volumes:
  nginx_test_vol:
    driver_opts:
      type: "nfs"
      o: "addr=fs-xxxxxxxxxxxxxxxxxx.efs.us-east-1.amazonaws.com,rw,nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport"
      device: ":/nginx-test"
This setup works well for me.