I am running an AWS AMI on a t2.large instance in the US East region. I was trying to upload some data and I ran this in the terminal:
df -h
and I got this result:
Filesystem      Size  Used Avail Use% Mounted on
udev            3.9G     0  3.9G   0% /dev
tmpfs           799M  8.6M  790M   2% /run
/dev/xvda1      9.7G  9.6G   32M 100% /
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs           799M     0  799M   0% /run/user/1000
I know I have not uploaded 9.7 GB of data to the instance, but I don't know what /dev/xvda1 is or how to access it.
I also assume that all the tmpfs entries are temporary files; how can I erase those?
To answer some of the questions in the comments, I ran
sudo du -sh /*
And I got:
16M /bin
124M    /boot
0   /dev
6.5M    /etc
2.7G    /home
0   /initrd.img
0   /initrd.img.old
4.0K    /jupyterhub_cookie_secret
16K /jupyterhub.sqlite
268M    /lib
4.0K    /lib64
16K /lost+found
4.0K    /media
4.0K    /mnt 
562M    /opt
du: cannot access '/proc/15616/task/15616/fd/4': No such file or directory
du: cannot access '/proc/15616/task/15616/fdinfo/4': No such file or directory
du: cannot access '/proc/15616/fd/4': No such file or directory
du: cannot access '/proc/15616/fdinfo/4': No such file or directory
0   /proc
28K /root
8.6M    /run
14M /sbin
8.0K    /snap 
8.0K    /srv
0   /sys
64K /tmp
4.7G    /usr
1.5G    /var
0   /vmlinuz
0   /vmlinuz.old
These two steps add an extra hard drive to your EC2 instance and format it for use: attach an extra hard drive (an EBS, or Elastic Block Store, volume) to the EC2 instance, then format the EBS volume attached to it. A sketch of those steps follows.
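As a rough sketch (assuming the new volume shows up as /dev/xvdf and you want it mounted at /data; check lsblk for the actual device name on your instance):

lsblk                                # confirm the device name of the new volume
sudo mkfs -t ext4 /dev/xvdf          # create a filesystem (wipes anything already on the volume)
sudo mkdir -p /data                  # create a mount point
sudo mount /dev/xvdf /data           # mount the volume
df -h /data                          # verify the extra space is visible

To keep the mount across reboots, add a matching line to /etc/fstab (using the volume's UUID from blkid is safer than the device name).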
Memory: EC2 instances don't have swap space allocated by default. Running out of memory can invoke the Linux Out Of Memory (OOM) manager, which terminates processes such as a database, web server, or the SSH service. Networking: without networking, your system can't answer ARP requests from status checks.
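If you do decide to add swap, a minimal sketch looks like this (the 1G size and /swapfile path are arbitrary placeholders, and this only makes sense once the root filesystem has free space again):

sudo fallocate -l 1G /swapfile       # reserve space for the swap file
sudo chmod 600 /swapfile             # restrict access to root
sudo mkswap /swapfile                # format it as swap
sudo swapon /swapfile                # enable it
free -h                              # confirm the swap is active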
When you run out of root filesystem space, and aren't doing anything that you know consumes space, then 99% of the time (+/- 98%) it's a logfile. Run this:
sudo du -s /var/log/* | sort -n
You'll see a listing of all of the sub-directories in /var/log (which is the standard logging destination for Linux systems), and at the end you'll probably see an entry with a very large number next to it. If you don't see anything there, then the next place to try is /tmp (which I'd do with du -sh /tmp since it prints a single number with "human" scaling). And if that doesn't work, then you need to run the original command on the root of the filesystem, /* (that may take some time).
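Putting that sequence together (these are the same commands described above, in the order I'd try them):

sudo du -s /var/log/* | sort -n      # biggest log directories end up last
sudo du -sh /tmp                     # single human-readable total for /tmp
sudo du -sh /*                       # last resort: every top-level directory (slow)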
Assuming that it is a logfile, then you should take a look at it to see if there's an error in the related application. If not, you may just need to learn about logrotate.
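For reference, a minimal logrotate rule dropped into /etc/logrotate.d/ looks like the following (the /var/log/myapp/*.log path is a placeholder for whatever application turns out to be the culprit):

# /etc/logrotate.d/myapp -- hypothetical application logs
/var/log/myapp/*.log {
    # rotate once a day, keep seven old logs, and gzip them
    daily
    rotate 7
    compress
    # tolerate a missing log and skip rotation when it's empty
    missingok
    notifempty
}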
/dev/xvda1 is your root volume. The AMI you listed has a default root volume size of 20GB as you can see here:
aws ec2 describe-images --image-ids ami-3b0c205e --region us-east-2 | jq .Images[].BlockDeviceMappings[]
{
  "DeviceName": "/dev/sda1",
  "Ebs": {
    "Encrypted": false,
    "DeleteOnTermination": true,
    "VolumeType": "gp2",
    "VolumeSize": 20,
    "SnapshotId": "snap-03341b1ff8ee47eaa"
  }
}
{
  "DeviceName": "/dev/sdb",
  "VirtualName": "ephemeral0"
}
{
  "DeviceName": "/dev/sdc",
  "VirtualName": "ephemeral1"
}
root@ip-10-100-0-64:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            488M     0  488M   0% /dev
tmpfs           100M  3.1M   97M   4% /run
/dev/xvda1       20G  9.3G   11G  49% /
tmpfs           496M     0  496M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           496M     0  496M   0% /sys/fs/cgroup
tmpfs           100M     0  100M   0% /run/user/1000
It appears the issue here is that the instance was launched with 10GB of storage (somehow; I didn't think this was possible) instead of the default 20GB.
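If the goal is simply a bigger root volume rather than a second disk, a rough outline is (vol-0123456789abcdef0 is a placeholder for your root volume's ID, growpart comes from the cloud-guest-utils package on Ubuntu, and resize2fs assumes an ext4 root filesystem):

aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 20   # enlarge the EBS volume
sudo growpart /dev/xvda 1                                           # extend partition 1 to fill the larger volume
sudo resize2fs /dev/xvda1                                           # grow the filesystem to match
df -h /                                                             # confirm the new size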