
I have a 512GB disk attached to my Linux CentOS 7.9 server, and I'm trying to find out, from inside the server, how much of the overall disk size is used.

I tried the df -h --total command to show the total disk size and used percentage, but it shows 224G total and about 14% used, which is wrong, because Azure monitoring shows 76% of the space used. Can anyone help with that?

I also tried many commands like fdisk, lsblk, parted, etc., but got no accurate results.

The full output of df -h --total:

df -h --total
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs         16G     0   16G   0% /dev
tmpfs            16G     0   16G   0% /dev/shm
tmpfs            16G  136M   16G   1% /run
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/sda2        30G   25G  5.8G  81% /
/dev/sdb1       126G  4.1G  116G   4% /mnt/resource
shm              64M     0   64M   0% 
total           224G   29G  189G  14% -
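Note that df only sums filesystems that are mounted, and its total also includes the RAM-backed tmpfs/devtmpfs entries. As a sanity check, summing just the real block devices from the output above (a sketch using the two /dev/sd* lines as hard-coded sample input; on a live system you could pipe df itself through the same awk):

```shell
# Sum the Size column for real block devices only, using the two
# /dev/sd* lines from the df output above as sample input:
df_lines='/dev/sda2 30
/dev/sdb1 126'
echo "$df_lines" | awk '{total += $2} END {print total "G"}'
# prints 156G -- nowhere near 512G, because the 512G disk is not
# mounted anywhere and so df never sees it
```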

lsblk:

NAME   FSTYPE LABEL       UUID                                 MOUNTPOINT     SIZE
sda                                                                            30G
├─sda1                                                                          1M
└─sda2 xfs    centos_root 425e9325-f7cd-4d90-8548-4a79e37eb5b6 /              30G
sdb                                                                           128G
└─sdb1 ext4               6242553c-4d61-4420-b149-b2a3cb52c912 /mnt/resource  128G
sdc                                                                           512G
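Unlike df, lsblk lists every block device the kernel sees, mounted or not. Summing its SIZE column shows the raw capacity (a sketch with the sizes from the output above hard-coded; on a live system you could feed it lsblk -b -d -n -o SIZE instead):

```shell
# sda=30G, sdb=128G, sdc=512G, taken from the lsblk output above
echo '30 128 512' | awk '{for (i = 1; i <= NF; i++) t += $i; print t "G"}'
# prints 670G -- the 512G disk is visible to the kernel, just unused
```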
vidarlo

2 Answers


The 512GB disk is /dev/sdc. It is not partitioned, formatted, or mounted in your OS, so it is not included in the total shown by df -h, which only counts mounted filesystems.
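A minimal sketch of bringing the disk into use, assuming /dev/sdc is truly empty and /data is the desired mount point (both are assumptions; the partition/format steps are destructive, so they are shown commented out and should only be run after double-checking the device with lsblk/blkid):

```shell
DISK=/dev/sdc   # the unmounted 512G disk from lsblk (assumption)
MNT=/data       # hypothetical mount point

# Destructive steps, run as root only after verifying the disk is empty:
#   parted -s "$DISK" mklabel gpt mkpart primary xfs 0% 100%
#   mkfs.xfs "${DISK}1"
#   mkdir -p "$MNT"
#   mount "${DISK}1" "$MNT"

# Persist the mount across reboots with an fstab line; replacing the
# device name with the UUID from blkid "${DISK}1" is more robust:
echo "${DISK}1  $MNT  xfs  defaults,nofail  0 2"
```

Once the filesystem is mounted, df -h will include it in its total.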

vidarlo

I think your Azure monitoring doesn't have an agent reporting the df disk usage as seen by your VM, but instead reports the storage consumption as seen by the storage layer, hence the discrepancy.

Most cloud providers use some form of thin provisioning when assigning storage.

So when you assign a 512 GB virtual disk to a VM, the VM sees 512GB available, but the actual storage consumed in the back-end will initially be much closer to 0 GB than to the allocated 512GB. The empty disk space is not allocated (yet) in the back-end; only once you start writing data to the disk will the actual consumption, as measured in the back-end, increase.
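The effect is easy to reproduce in miniature with a sparse file, which behaves like a thin-provisioned disk: the apparent size is fixed up front, but blocks are only allocated as data is written (a sketch; the file name is arbitrary):

```shell
# "Provision" 1 GB without writing any data:
truncate -s 1G sparse.img
ls -l sparse.img | awk '{print $5}'   # apparent size: 1073741824 bytes
du -k sparse.img | awk '{print $1}'   # blocks actually allocated: 0

# Write 100 MB of real data; only now does the allocation grow:
dd if=/dev/zero of=sparse.img bs=1M count=100 conv=notrunc status=none
du -m sparse.img | awk '{print $1}'   # ~100 MB now allocated
rm sparse.img
```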

In other words: after writing 100GB to that disk, df will show 100GB used inside the VM, and the storage back-end will also report 100GB used.

When you then delete 80 GB of those files, something interesting can happen: df inside the VM will show only 20GB still in use, but the storage back-end will still report 100GB consumed. That is because many storage back-ends cannot reclaim thin-provisioned storage once it has been allocated: the space is only released when the entire virtual disk is deleted, not when the VM deletes files or data on the virtual disk.

diya