I'm running an AWS EC2 Ubuntu instance with an EBS root volume that was initially 8 GB.
This is now 99.8% full, so I followed the AWS documentation to increase the EBS volume to 16 GB. I now need to extend my partition /dev/xvda1 to 16 GB, but when I run the command
```
$ growpart /dev/xvda 1
```

I get the error:

```
mkdir: cannot create directory ‘/tmp/growpart.2626’: No space left on device
```

I have tried:

- `docker system prune -a` (this results in a "Cannot connect to the Docker daemon at unix:/var/run/docker.sock. Is the docker daemon running?" error; when I try to start the daemon using `sudo dockerd`, I get a "no space left on device" error as well)
- `resize2fs /dev/xvda1`

all to no avail.
Running `lsblk` returns

```
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0     7:0    0   89M  1 loop /snap/core/7713
loop1     7:1    0   18M  1 loop /snap/amazon-ssm-agent/1480
loop2     7:2    0 89.1M  1 loop /snap/core/7917
loop3     7:3    0   18M  1 loop /snap/amazon-ssm-agent/1455
xvda    202:0    0   16G  0 disk
└─xvda1 202:1    0    8G  0 part /
```

`df -h` returns
```
Filesystem      Size  Used Avail Use% Mounted on
udev            2.0G     0  2.0G   0% /dev
tmpfs           395M   16M  379M   4% /run
/dev/xvda1      7.7G  7.7G     0 100% /
tmpfs           2.0G     0  2.0G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/loop0       90M   90M     0 100% /snap/core/7713
/dev/loop1       18M   18M     0 100% /snap/amazon-ssm-agent/1480
/dev/loop2       90M   90M     0 100% /snap/core/7917
/dev/loop3       18M   18M     0 100% /snap/amazon-ssm-agent/1455
tmpfs           395M     0  395M   0% /run/user/1000
```

and `df -i` returns
```
Filesystem      Inodes  IUsed  IFree IUse% Mounted on
udev            501743    296 501447    1% /dev
tmpfs           504775    457 504318    1% /run
/dev/xvda1     1024000 421259 602741   42% /
tmpfs           504775      1 504774    1% /dev/shm
tmpfs           504775      3 504772    1% /run/lock
tmpfs           504775     18 504757    1% /sys/fs/cgroup
/dev/loop0       12827  12827      0  100% /snap/core/7713
/dev/loop1          15     15      0  100% /snap/amazon-ssm-agent/1480
/dev/loop2       12829  12829      0  100% /snap/core/7917
/dev/loop3          15     15      0  100% /snap/amazon-ssm-agent/1455
tmpfs           504775     10 504765    1% /run/user/1000
```

From the AWS knowledge-center article:

> To avoid No space left on device errors when expanding the root partition or root file system on your EBS volume, use the temporary file system, tmpfs, that resides in memory. Mount the tmpfs file system under the /tmp mount point, and then expand your root partition or root file system.
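In practice, that advice boils down to a short command sequence like the sketch below, written for the /dev/xvda1 layout shown above (adjust the device name and partition number for your own system):

```
# Mount a small in-memory tmpfs over /tmp so growpart has scratch space
sudo mount -o size=10M,rw,nodev,nosuid -t tmpfs tmpfs /tmp

# Grow partition 1 of /dev/xvda to fill the enlarged 16 GB volume
sudo growpart /dev/xvda 1

# Grow the ext4 file system to match the new partition size
sudo resize2fs /dev/xvda1

# Remove the temporary mount, uncovering the original /tmp
sudo umount /tmp
```

A 10 MB tmpfs is plenty: growpart only needs a few kilobytes under /tmp for its working directory.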
For anyone who has this problem, here's a link to the answer: https://aws.amazon.com/premiumsupport/knowledge-center/ebs-volume-size-increase/
Summary
1. `df -h` to verify your root partition is full (100%)
2. `lsblk` and then `lsblk -f` to get block device details, including the file-system type (see the note after this list)
3. `sudo mount -o size=10M,rw,nodev,nosuid -t tmpfs tmpfs /tmp`
4. `sudo growpart /dev/DEVICE_ID PARTITION_NUMBER`
5. `lsblk` to verify the partition has expanded
6. `sudo resize2fs /dev/DEVICE_IDPARTITION_NUMBER`
7. `df -h` to verify your resized disk
8. `sudo umount /tmp`
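One caveat on step 6: `resize2fs` only grows ext2/ext3/ext4 file systems, which is what this Ubuntu image uses. If `lsblk -f` reports an XFS root instead, the file-system step would use `xfs_growfs` rather than `resize2fs`; a sketch, using the device names from this question:

```
# Check the FSTYPE column to see what file system each partition holds
lsblk -f

# ext2/3/4 (the case in this question): grow the file system on the partition
sudo resize2fs /dev/xvda1

# XFS: grow via the mount point instead
# sudo xfs_growfs -d /
```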
I came across this article http://www.daniloaz.com/en/partitioning-and-resizing-the-ebs-root-volume-of-an-aws-ec2-instance/ and solved it with ideas from there.

Steps taken:
1. Attach the volume to a second instance (a t2.micro instance, or use an existing one if you wish)
2. `lsblk` to ensure the volume has been mounted correctly
3. `sudo growpart /dev/xvdf 1` (or similar, to expand the partition; see the sketch after this list)
4. `lsblk` to check that the partition has grown
5. `df -h` to check the size of your root volume partition (e.g. /dev/xvda1)
6. `sudo resize2fs /dev/xvda1` (or similar) to resize your partition
7. `df -h` to check that your Use% of /dev/xvda1 is no longer ~100%
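Doing the growpart step from a second instance sidesteps the original error, since the helper's /tmp has free space for growpart's working directory. A rough sketch of the flow, assuming the enlarged volume shows up as /dev/xvdf on the helper instance and ends up back at /dev/xvda1 on the original one (device names vary; check `lsblk` first):

```
# On the helper instance, with the enlarged volume attached as /dev/xvdf
lsblk                       # confirm the volume and its partition are visible
sudo growpart /dev/xvdf 1   # grow partition 1 to fill the whole volume
lsblk                       # the partition should now span the full size

# Back on the original instance, after re-attaching the volume as root
df -h                       # the file system still reports the old size
sudo resize2fs /dev/xvda1   # grow the ext4 file system to fill the partition
df -h                       # Use% of /dev/xvda1 should now be well below 100%
```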