I've been working with several Linux EC2 instances over the years.
If your Linux EC2 server ever stops behaving properly, one of the first things I would check is the disk usage.
The disks on most "debug" servers are typically minimal and can fill up easily.
1. Check your disk usage/space using df -h
Look for whatever is mounted at "root", i.e. /.
root@ip-10-75-3-227:/home/ubuntu# df -h
Filesystem Size Used Avail Use% Mounted on
udev 473M 0 473M 0% /dev
tmpfs 98M 11M 87M 11% /run
/dev/xvda1 7.7G 7.7G 0 100% /
tmpfs 488M 0 488M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 488M 0 488M 0% /sys/fs/cgroup
/dev/loop0 56M 56M 0 100% /snap/core18/2785
/dev/loop1 117M 117M 0 100% /snap/core/14946
/dev/loop2 119M 119M 0 100% /snap/core/15511
/dev/loop3 25M 25M 0 100% /snap/amazon-ssm-agent/6312
/dev/loop4 25M 25M 0 100% /snap/amazon-ssm-agent/6563
/dev/loop5 56M 56M 0 100% /snap/core18/2745
tmpfs 98M 0 98M 0% /run/user/1000
I know this can be a lot to look at, so let me draw your attention to the problematic spot:
/dev/xvda1 7.7G 7.7G 0 100% /
100% used space and 7.7G out of 7.7G used.
It's full...
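If you only care about the root filesystem, you can point df straight at the mount point rather than scanning the whole table:
# show only the filesystem backing /
df -h /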
2. Reclaim enough space to be able to run commands
On Ubuntu and Debian, clear the package cache with sudo apt-get clean
(sudo yum clean all on Amazon Linux 2 / RHEL / CentOS,
sudo dnf clean all on Amazon Linux 2023+ / Fedora).
That frees just enough space to work with:
root@ip-10-75-3-227:/home/ubuntu# df -h
Filesystem Size Used Avail Use% Mounted on
udev 473M 0 473M 0% /dev
tmpfs 98M 716K 97M 1% /run
/dev/xvda1 7.7G 7.6G 156M 99% /
tmpfs 488M 0 488M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 488M 0 488M 0% /sys/fs/cgroup
/dev/loop0 56M 56M 0 100% /snap/core18/2785
/dev/loop1 117M 117M 0 100% /snap/core/14946
/dev/loop2 119M 119M 0 100% /snap/core/15511
/dev/loop3 25M 25M 0 100% /snap/amazon-ssm-agent/6312
/dev/loop4 25M 25M 0 100% /snap/amazon-ssm-agent/6563
/dev/loop5 56M 56M 0 100% /snap/core18/2745
tmpfs 98M 0 98M 0% /run/user/1000
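If clearing the package cache doesn't buy enough headroom, a rough way to see what is actually eating the volume (adjust the depth and limits to taste) is:
# summarize top-level directories on the root filesystem only (-x stays on this filesystem)
sudo du -xh --max-depth=1 / 2>/dev/null | sort -h | tail -15
# old systemd journal logs are another common culprit; cap them at 100M
sudo journalctl --vacuum-size=100M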
3. Scale the disk in the AWS Console
EC2 > Volumes > select the volume attached to the instance > Actions > Modify volume
In our case, we're going from 8GiB to 10GiB.
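If you'd rather script this than click through the console, the AWS CLI can do the same resize (a sketch; vol-0123456789abcdef0 is a placeholder for your actual volume ID):
# request the new size (in GiB) for the EBS volume
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 10
# watch the modification progress until it reports optimizing/completed
aws ec2 describe-volumes-modifications --volume-ids vol-0123456789abcdef0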
4. Find the block device to expand
Use lsblk to list the block devices.
root@ip-10-75-3-227:/home/ubuntu# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 55.7M 1 loop /snap/core18/2785
loop1 7:1 0 116.8M 1 loop /snap/core/14946
loop2 7:2 0 118.2M 1 loop /snap/core/15511
loop3 7:3 0 24.4M 1 loop /snap/amazon-ssm-agent/6312
loop4 7:4 0 24.8M 1 loop /snap/amazon-ssm-agent/6563
loop5 7:5 0 55.7M 1 loop /snap/core18/2745
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part /
In this case, xvda is the disk and partition 1 (xvda1) is mounted as "root" (/) for us.
5. Scale the disk using the following two commands
First, grow the partition so it fills the enlarged volume:
sudo growpart /dev/xvda 1
Lastly, resize the ext4 filesystem to use the new space:
sudo resize2fs /dev/xvda1
(On images that expose a /dev/root device node, resize2fs /dev/root works as well.)
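On Nitro-based instance types the root volume shows up as an NVMe device instead of xvda, so the same two steps change names (a sketch assuming the disk is /dev/nvme0n1 and the root partition is /dev/nvme0n1p1, which you can confirm with lsblk):
# grow partition 1 of the NVMe root disk
sudo growpart /dev/nvme0n1 1
# resize the ext4 filesystem on the root partition
sudo resize2fs /dev/nvme0n1p1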
6. Confirm with df -h
One last df -h and we're done!
root@ec2:/home/ubuntu# df -h
Filesystem Size Used Avail Use% Mounted on
udev 473M 0 473M 0% /dev
tmpfs 98M 776K 97M 1% /run
/dev/xvda1 9.7G 3.7G 6.0G 38% /
tmpfs 488M 0 488M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 488M 0 488M 0% /sys/fs/cgroup
/dev/loop1 26M 26M 0 100% /snap/amazon-ssm-agent/5656
/dev/loop0 56M 56M 0 100% /snap/core18/2679
/dev/loop2 106M 106M 0 100% /snap/core/16202
/dev/loop3 117M 117M 0 100% /snap/core/14784
/dev/loop4 56M 56M 0 100% /snap/core18/2708
/dev/loop5 25M 25M 0 100% /snap/amazon-ssm-agent/6312
tmpfs 98M 0 98M 0% /run/user/1000
The root filesystem now sits at 38% used with 6.0G free:
/dev/xvda1 9.7G 3.7G 6.0G 38% /
That's all!