
I have searched Server Fault, Stack Overflow, and other sites but could not find a clear answer. I have done some reading on the basics of Linux storage and filesystems, but I'm still unclear on how to solve my problem.

My aim is to do a simple assessment of disk space and usage across the servers in our environment. We will run a bash script that executes df -k on each server and gather the output text for parsing and analysis. I'm having trouble understanding how to interpret the df -k output correctly to arrive at the total disk space and usage.
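For reference, the collection step is roughly the following (the server list and output directory are placeholders):

#!/bin/bash
# Collect df output from every server for later parsing.
SERVERS="server1 server2 server3"   # placeholder host names
OUTDIR=/tmp/df-reports
mkdir -p "$OUTDIR"
for host in $SERVERS; do
    # df -k is the lowest common denominator across the Unix/Linux
    # systems here; sizes are reported in 1K blocks.
    ssh "$host" df -k > "$OUTDIR/$host.df"
done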

For now, we are ignoring networked and LVM-mapped storage (though I suspect those will be more involved and complicated than this situation); I'll deal with them in the near future. At the moment I'm having trouble understanding even the simple scenarios.

Scenario 1: I created an Oracle Linux 7.9 VM in Oracle Cloud with a default boot volume of 46GB. The df -h output returned the following:

Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        7.6G     0  7.6G   0% /dev
tmpfs           7.6G     0  7.6G   0% /dev/shm
tmpfs           7.6G  8.7M  7.6G   1% /run
tmpfs           7.6G     0  7.6G   0% /sys/fs/cgroup
/dev/sda3        39G  2.8G   36G   8% /
/dev/sda1       200M  7.4M  193M   4% /boot/efi
tmpfs           1.6G     0  1.6G   0% /run/user/0
tmpfs           1.6G     0  1.6G   0% /run/user/994
tmpfs           1.6G     0  1.6G   0% /run/user/1000

Question 1: What consistent logic could I apply to calculate the total disk space and usage? In this case I can see that sda3 plus one 7.6GB tmpfs entry gets me to roughly 46GB, so should I count one 7.6G tmpfs entry and ignore the remaining 7.6G and 1.6G entries? Or should I simply ignore all tmpfs entries, given that tmpfs is volatile and not real storage? If so, how would I ever arrive at the 46GB total?
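For example, the "ignore all tmpfs" logic I'm considering would look something like this (GNU df; -P keeps each entry on one line):

# Sum the size and usage columns of df, skipping tmpfs and devtmpfs.
# POSIX -P columns: filesystem, 1K-blocks, used, available, use%, mount.
df -kP | awk '
    NR > 1 && $1 !~ /^(tmpfs|devtmpfs)$/ { size += $2; used += $3 }
    END { printf "total %.1f GB, used %.1f GB\n", size/1048576, used/1048576 }'

In Scenario 1 this counts only sda3 and sda1, i.e. roughly 39.2GB, which still leaves me well short of the 46GB boot volume.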

Scenario 2: I created an Oracle Linux 7.9 VM in Oracle Cloud with a default boot volume of 200GB. The df -h output returned the following:

Filesystem      Size  Used Avail Use% Mounted on
devtmpfs         30G     0   30G   0% /dev
tmpfs            30G     0   30G   0% /dev/shm
tmpfs            30G  8.8M   30G   1% /run
tmpfs            30G     0   30G   0% /sys/fs/cgroup
/dev/sda3        39G  3.5G   35G   9% /
/dev/sda1       200M  7.4M  193M   4% /boot/efi
tmpfs           5.9G     0  5.9G   0% /run/user/0
tmpfs           5.9G     0  5.9G   0% /run/user/994
tmpfs           5.9G     0  5.9G   0% /run/user/1000

Question 2: This is more confusing. How would I arrive at a 200GB total disk size? It appears I'd need to count the sdaX entries AND all the tmpfs entries to get anywhere near 200GB. I can't find a logic that is consistent across both scenarios.

I hope my questions are clear. I'd be glad to provide any additional details and/or clarifications.

rc1
  • Have you tried running `fdisk`? – Michael Hampton Jul 07 '21 at 00:45
  • Just learned about fdisk based on the answer below. The only issue is that it's not available on Solaris (fdisk -l does not work), so we will need to figure out slightly different approaches for non-Linux environments. df -k returns consistent output across all Unix/Linux, but it's clearly insufficient on its own. Thanks. – rc1 Jul 07 '21 at 01:44
  • Adding an update for anyone else dealing with the same issue. I used "fdisk -l" on Linux (as suggested in the answer below) and "iostat -En" on Solaris 10/11, followed by some careful parsing. Combining df -k with (fdisk OR iostat) gives me pretty good coverage of overall disk size and utilization; a rough sketch follows below. I have not tested any of the above on AIX. – rc1 Jul 07 '21 at 02:13
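A rough sketch of that per-OS dispatch, as a starting point (the awk parsing assumes each tool's default English output, needs root on Linux, and may double-count device-mapper disks if any are present):

#!/bin/bash
# Print the total raw disk size in bytes, choosing the tool by OS.
case "$(uname -s)" in
    Linux)
        # fdisk -l prints a "Disk /dev/...: ..., <n> bytes, ..." header per disk.
        fdisk -l 2>/dev/null | awk '/^Disk \/dev\// {
            for (i = 1; i <= NF; i++)
                if ($(i+1) ~ /^bytes/) { total += $i; break }
        } END { print total }'
        ;;
    SunOS)
        # iostat -En prints a "Size: <n>GB <bytes bytes>" line per device.
        iostat -En | awk '/^Size:/ { gsub(/[<>]/, "", $3); total += $3 }
            END { print total }'
        ;;
esac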

1 Answer


tmpfs filesystems are RAM disks and have nothing to do with your storage media.

In the second scenario, your 200GB disk contains a 39GB root filesystem, a 200MB boot filesystem, possibly a swap partition of unknown size, and probably a large amount of unallocated free space.

To see swap space, run `swapon -s`. To see the free space, use a partitioning tool such as `fdisk -l` or `parted /dev/sda print`.
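For example, on your second VM something like this shows the full picture (assuming /dev/sda is the only disk; the exact fdisk header wording varies between versions):

# Whole-disk size regardless of partitioning; the "Disk /dev/sda:"
# header line reports the size in bytes.
fdisk -l /dev/sda

# Swap partitions currently in use:
swapon -s

# Partition table; "print free" also lists unpartitioned gaps:
parted /dev/sda unit GB print free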

You don't seem to be using filesystems on LVM volumes or network storage; if you were, they would also be listed by df.

berndbausch
  • Thanks for the clarification. So in short, if I understand this correctly, we cannot accurately determine the total disk size using just "df" and must incorporate "fdisk -l" into the data gathering and analysis? Or is there some practical estimation that can be done with just the df command? Thanks. – rc1 Jul 07 '21 at 00:41
  • Your understanding is correct. `df` only reports the space used in mounted filesystems. It does not report unmounted filesystems, partitions without filesystems or unpartitioned space. – berndbausch Jul 07 '21 at 00:44