
I have installed CentOS 5.7 64 Bits in my server which has 4x300 GB SAS drives on hardware RAID 10. At the installation, I chose default partitions.

Here is the output of the relevant commands:

[root@server ~]# fdisk -l

Disk /dev/sda: 598.8 GB, 598879502336 bytes
255 heads, 63 sectors/track, 72809 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14       72809   584733870   8e  Linux LVM


[root@server ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      493G  1.4G  466G   1% /
/dev/sda1              99M   13M   81M  14% /boot
tmpfs                  24G     0   24G   0% /dev/shm

Where did that 100 GB go and how can I add it?

Thanks in advance

David Adders
  • That "493G" you see there is actually 493GiB, which is equal to 530GB. So it's 70GB missing, not 100. – David Schwartz Jan 28 '12 at 03:55
  • You need to provide more information about how the LVM is set up. PE size, LE size, etc. (`lvdisplay`, `lvs`, `pvs`, etc.) Between the difference between the actual binary size of the disk compared to the decimal size of the disk, and the overhead of LVM, your space is in there somewhere. Have you tried `lvresize` on your LV to see if you can grow it that other 100G? – Tim Kennedy Jan 28 '12 at 04:57

1 Answer


Your question is an interesting one, in that there are actually three problems at play:

  1. A unit conversion problem
  2. Reserved blocks for the file system (visible by df)
  3. Reserved blocks for the file system (not visible by df)

Note: I am going to use numbers pulled from one of my systems, not your numbers, since more information is required - the same math applies though.

1. GB (gigabytes) vs. GiB (gibibytes)

  • A GB is based on powers of 10: 1 GB = 1000³ = 10⁹ = 1000000000 bytes
  • A GiB is based on powers of 2: 1 GiB = 1024³ = 2³⁰ = 1073741824 bytes

Hard drives are almost always sold in GB.

fdisk -l /dev/xvda1

Disk /dev/xvda1: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

So, this partition has 4294967296 bytes - you can see that the number fdisk reports (MB) is simply 1/1000² of that. To convert to GiB, we will divide by 1024³:

4294967296 B / (1024³ B/GiB) = 4 GiB (or 4096 MiB, definitely different from the number above).
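The same conversion can be checked with a quick calculation (a sketch in Python; the byte count is the one from the fdisk output above):

```python
# Size of /dev/xvda1 as reported by fdisk, in bytes
size_bytes = 4294967296

gb = size_bytes / 1000**3   # decimal gigabytes (the units drives are sold in)
gib = size_bytes / 1024**3  # binary gibibytes (what df -h's "G" suffix means)

print(f"{gb:.3f} GB")   # 4.295 GB
print(f"{gib:.0f} GiB") # 4 GiB
```

The ~7% gap between the two figures grows with every power of 1000, which is why it is so noticeable on a ~600 GB array.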

2. The file system reserves some blocks that df can see

Firstly, let's find our block size:

dumpe2fs -h /dev/xvda1 | grep "Block size"
dumpe2fs 1.41.12 (17-May-2010)
Block size:               4096

Since the block size on this system is 4096B, I am using that value below. (Note, df uses 4K = 4096 and 4KB = 4000):

df -B 4K
Filesystem           4K-blocks      Used Available Use% Mounted on
/dev/xvda1             1032112    325035    696599  32% /

You would expect that Used + Available = Total (i.e. 4K-blocks), however:

325035 + 696599 = 1021634 ≠ 1032112

This missing value is the 'Reserved block count':

dumpe2fs -h /dev/xvda1 | grep "Reserved block count"
dumpe2fs 1.41.12 (17-May-2010)
Reserved block count:     10478

Checking the math:

1021634 + 10478 = 1032112
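That accounting can be verified directly with the numbers from df and dumpe2fs above:

```python
# Figures from `df -B 4K` and `dumpe2fs -h` on this system
total_4k  = 1032112  # 4K-blocks
used      = 325035
available = 696599
reserved  = 10478    # Reserved block count

# Used + Available alone falls short of the total...
print(used + available)             # 1021634
# ...and the reserved blocks make up exactly the difference
print(used + available + reserved)  # 1032112
```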

3. The missing blocks that df can't see

Well, so far, so good, but the numbers still don't add up.

The total number of 4K blocks I should have is 4294967296 / 4096 = 1048576. You can verify this with the output of dumpe2fs:

dumpe2fs -h /dev/xvda1 | grep "Block count"
dumpe2fs 1.41.12 (17-May-2010)
Block count:              1048576

So, according to fdisk and dumpe2fs, there are 1048576 4K blocks. According to df, there are 1032112 4K blocks. Which means: 1048576 - 1032112 = 16464 blocks are missing.
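Putting the two totals side by side (figures from the fdisk, dumpe2fs, and df output above):

```python
fs_blocks = 4294967296 // 4096  # block count per fdisk/dumpe2fs
df_blocks = 1032112             # total 4K-blocks per df

print(fs_blocks)              # 1048576
print(fs_blocks - df_blocks)  # 16464 blocks unaccounted for
```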

Here you need a bit of an understanding of the file system. In my case, I am using ext4 - and it is divided into groups.

To start, here is a partial output from dumpe2fs:

dumpe2fs -h /dev/xvda1
dumpe2fs 1.41.12 (17-May-2010)
Filesystem OS type:       Linux
Inode count:              262144
Block count:              1048576
Reserved block count:     10478
Free blocks:              735865
Free inodes:              216621
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      511
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
RAID stride:              32582
Flex block group size:    16
First inode:              11
Inode size:               256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
First orphan inode:       15632
Journal backup:           inode blocks
Journal features:         journal_incompat_revoke
Journal size:             128M
Journal length:           32768
Journal sequence:         0x0006db26
Journal start:            7391

There are 32768 blocks per group, and 1048576 blocks in total. Therefore: 1048576 / 32768 = 32 groups.

If you run dumpe2fs (without the -h), you will get a long list of all the groups and their relevant information. For instance, for my first group:

Group 0: (Blocks 0-32767) [ITABLE_ZEROED]
  Checksum 0xdc79, unused inodes 0
  Primary superblock at 0, Group descriptors at 1-1
  Reserved GDT blocks at 2-512
  Block bitmap at 513 (+513), Inode bitmap at 529 (+529)
  Inode table at 545-1056 (+545)
  13296 free blocks, 0 free inodes, 1487 directories
  Free blocks: 10382, 11506-11537, 11672-11679, 11714-11727, 12169, 12173-12179, 12181-12185, 12938-12962, 12964-12969, 13105, 13217-13246, 13384-13390, 13392-13393, 13644-13647, 13707, 13712-13855, 16346-18395, 20442-22491, 22699-22701, 22748, 23053-31837, 32290-32408
  Free inodes:

You will notice a few things here:

  1. There is a superblock (not all groups have one) - 1 block
  2. There are group descriptors (only groups with superblocks have them) - 1 block
  3. There is a block bitmap (all groups have one) - 1 block
  4. There is an inode bitmap (all groups have one) - 1 block
  5. There is an inode table - 512 blocks (Inode blocks per group)

We can find a list of our superblocks with:

dumpe2fs /dev/xvda1 | grep -i superblock
dumpe2fs 1.41.12 (17-May-2010)
  Primary superblock at 0, Group descriptors at 1-1
  Backup superblock at 32768, Group descriptors at 32769-32769
  Backup superblock at 98304, Group descriptors at 98305-98305
  Backup superblock at 163840, Group descriptors at 163841-163841
  Backup superblock at 229376, Group descriptors at 229377-229377
  Backup superblock at 294912, Group descriptors at 294913-294913
  Backup superblock at 819200, Group descriptors at 819201-819201
  Backup superblock at 884736, Group descriptors at 884737-884737

So, in my case, 1 primary superblock and 7 backups.

Working this out, we get:

  • 24 groups without a superblock:
    • Each having 1 block (block bitmap) + 1 block (inode bitmap) + 512 blocks (inode table) = 514 blocks
  • 8 groups with a superblock:
    • Each having 1 block (superblock) + 1 block (group descriptors) + 1 block (block bitmap) + 1 block (inode bitmap) + 512 blocks (inode table) = 516 blocks

Doing the math, we find:

24 groups * 514 blocks/group + 8 groups * 516 blocks/group = 12336 + 4128 = 16464 blocks

Which exactly equals our missing number!
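The whole per-group accounting above can be written as one small sketch (all figures taken from the dumpe2fs output on this system):

```python
inode_table = 512  # "Inode blocks per group"

# block bitmap + inode bitmap + inode table: present in every group
per_group = 1 + 1 + inode_table       # 514 blocks

# superblock + group descriptors: only in the 8 groups that carry a superblock
per_sb_group = per_group + 1 + 1      # 516 blocks

overhead = 24 * per_group + 8 * per_sb_group
print(overhead)  # 16464 -- exactly the blocks df cannot see
```

On a much larger filesystem (such as the ~560 GiB volume in the question), the same structures exist in every group, so this invisible overhead scales up accordingly.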

It is worth mentioning that there are additional reserved blocks (e.g. Reserved GDT blocks) that allow for future growth, but that df does include in its calculations. Also, the filesystem journal is counted as used space by df (so even with no files on the filesystem, about 128MiB would show as used).

cyberx86
  • That is not just a good answer, it's also a great tutorial. Thanks! – Tim Kennedy Jan 28 '12 at 15:04
  • Is there some way to calculate overhead of LVM (with setup similar to one of question author)? – myroslav Feb 21 '13 at 15:27
  • @myroslav: I can't access a server with LVM setup at the moment (so can't really test things out). I would advise asking a new question (link to this one if relevant), someone will be able to provide a useful answer. – cyberx86 Feb 21 '13 at 16:17