
I have a server set up with a RAID 5 using (3) 500GB drives, 1 as a spare so unused in the RAID. So in my mind I start out with 990GB with the RAID 5 in place. When looking at df or the built-in disk space utility I only see a total of about 882GB. How can I find where the 100+GB went? How can I get it back?

I've checked the RAID 5 BIOS and I see all the space.

I've tried looking manually and through terminal commands, with no luck.

Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/vg_web-lv_root
                     838084192  48368700 747153060   7% /
tmpfs                 12104644       592  12104052   1% /dev/shm
/dev/sda1               495844    121546    348698  26% /boot
/dev/mapper/vg_web-lv_home
                      82569904    259136  78116468   1% /home

Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_web-lv_root
                      800G   47G  713G   7% /
tmpfs                  12G  592K   12G   1% /dev/shm
/dev/sda1             485M  119M  341M  26% /boot
/dev/mapper/vg_web-lv_home
                       79G  254M   75G   1% /home

More info for you: I am now positive I am not getting all my GBs.

--- Physical volume ---
PV Name               /dev/sda2
VG Name               vg_web
PV Size               930.89 GiB / not usable 3.00 MiB

VGS

VG     #PV #LV #SN Attr   VSize   VFree 
vg_web   1   3   0 wz--n- 930.88g 13.29g

LVS

LV      VG     Attr      LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
lv_home vg_web -wi-ao---  80.00g                                             
lv_root vg_web -wi-ao--- 812.00g                                             
lv_swap vg_web -wi-ao---  25.59g       

After the above info I found through LVM, vgs, and lvs, I have already figured out that all my space is accounted for.

WifiGhost
  • Could you post the output of `df -k`? TomTom's answer below is an excellent one, but there may be another factor or two in play. – MadHatter Oct 20 '13 at 08:02
  • Will do when I get to that server. When looking at the RAID BIOS screen I see 1000GB listed. Per the below, am I really losing 100+GB of space due to rounding? – WifiGhost Oct 21 '13 at 03:56
  • Would you like the output from parted as well? – WifiGhost Oct 21 '13 at 03:58
  • `df` will do for me. But it would be useful to know what model drives they are. – MadHatter Oct 21 '13 at 07:02
  • 500GB WD RE4. The df output will be in the original post – WifiGhost Oct 22 '13 at 03:41
  • OK, thanks for that. I take it that second one is from `df -h` (*not* `df -H`)? You didn't mention that LVM was also involved; could we see the output of `vgs` and `lvs` as well? – MadHatter Oct 22 '13 at 06:34
  • sure thing, will add to OP – WifiGhost Oct 23 '13 at 02:37
  • All the info you requested is there, MadHatter – WifiGhost Oct 28 '13 at 03:43
  • Do you have 4 drives in a raid 5 with one spare? (So 3 drives actively participate and 1 does nothing) or 3 drives in a raid 5 with one spare? (2 drives participate with 1 doing nothing) If the latter, you really have a raid 0 with a spare, and that won't help you much in a drive failure... just curious. – Regan Oct 28 '13 at 14:02
  • Wifighost, just in case you don't know the etiquette: when you're satisfied with an answer to your question, you should accept it by clicking the "tick" outline next to it. That drives the SF reputation system both for you and the author of that answer. The community doesn't say that you have to be satisfied with any of the answers, it only asks that when one does satisfy you, you so accept it. – MadHatter Oct 29 '13 at 13:10

2 Answers

8

Sorry, all your HDD space is accounted for.

Firstly, there's the conversion factor between denary (base-10) gigabytes (as used by drive manufacturers) and binary (base-2) gigabytes (as used by some Linux tools). A binary gigabyte (GiB) is 2^30 bytes, while a denary gigabyte (GB) is 10^9 bytes; the former is larger by about 7%. Secondly, there's a little unallocated space in the LVM VG, and there are the partitions that aren't your root. Finally, there's the 5% overhead that mkfs reserves for root.

Let's take it step by step. You think you should have 1000GB. That's 1000*10^9/2^30=931.3GiB, which is the honest size of your /dev/sda.
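If you want to check that conversion yourself, here's a quick sketch using bc (plain arithmetic, nothing system-specific):

    # 1000 manufacturer GB (10^9 bytes each), expressed in binary GiB (2^30 bytes each)
    echo "scale=2; 1000 * 10^9 / 2^30" | bc
    # -> 931.32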

You lose 485MiB to /dev/sda1, the /boot partition. So that should leave you with 931.3 - 485/2^10 = 930.83GiB, which is almost exactly what pvs tells you is in the volume group vg_web.

lvs then tells us that that 930.9GiB is divided between three logical volumes (lv_root, lv_home, and lv_swap), leaving 13.3GiB of space unallocated, which we'll come back to later.

Your root partition, which is the one that I suspect most concerns you, as it's where the majority of your space is, is an 812GiB volume. File systems have overhead; that is, structures that are the file system itself rather than the data stored therein, and these take up disc space. They include the superblock, the copies of the superblock, the block entries, etc.: all the metadata underlying the file system. This article attempts to quantify the size of that metadata, and says that ext2 uses about 1.6% of the space for FS overhead (it notes that ext3/4 are bigger, but adds that the extra overhead is entirely down to the size of the journal, which I suspect is much less significant on a 1TB FS; the article is written based on a 1GB test FS). 812 less 1.6% = 812*0.984 = 799.0GiB, which is almost exactly what df -h tells us we have in the / partition. The /home partition is affected by this, also.
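If you want to see that overhead on your own system, here's a rough sketch, assuming the filesystem is ext3/ext4 and using the device names from your df output; it compares the size of the block device with what the filesystem itself reports:

    # size of the logical volume, in bytes
    blockdev --getsize64 /dev/mapper/vg_web-lv_root

    # what the filesystem superblock says about itself
    tune2fs -l /dev/mapper/vg_web-lv_root | grep -Ei 'block count|block size'

    # df's view of the same filesystem, in bytes
    df -B1 /

The gap between (block count × block size) reported by tune2fs and the total size df shows is roughly the space taken by the filesystem's own metadata and journal.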

Then there is the famous 5% reserved for the root user by default, which is why the total of the available and used columns is 760GiB (713+47); 800*0.95=760. The /home partition is affected by this, also.
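You can see that reservation directly with tune2fs; again a sketch, assuming ext3/ext4 and your device name:

    # "Reserved block count" is, by default, 5% of "Block count"
    tune2fs -l /dev/mapper/vg_web-lv_root | grep -i 'reserved block count'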

What can you do about any of this? Mostly, nothing. HDD manufacturers will continue to use denary GB because it makes their drives look bigger; you need a swap partition; file systems have overhead; none of these is negotiable.

You can retune the root FS so that less than 5% is reserved; the tune2fs man page will tell you how to do that; I wouldn't take it below 1%, myself. And you can expand the lv_root volume into that unallocated 13.3GiB; there are numerous articles telling you how to do this, so I won't cover it in detail here. It's a bit of a faff for 13GiB, but you may be feeling charged up by now.
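For the record, here's a rough sketch of both operations, using the names from your lvs output and assuming the root filesystem is ext3/ext4 (read the man pages and have a backup before resizing anything):

    # lower the root-reserved blocks from the default 5% to 1%
    tune2fs -m 1 /dev/mapper/vg_web-lv_root

    # grow lv_root into the ~13GiB of unallocated space in vg_web...
    lvextend -l +100%FREE /dev/vg_web/lv_root

    # ...then grow the filesystem to fill the enlarged volume
    resize2fs /dev/mapper/vg_web-lv_root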

So hopefully from this you can see that all your HDD space is accounted for.

MadHatter
  • Great answer with a quick introduction on filesystem usage! – pkhamre Oct 28 '13 at 07:20
  • Thank you, pkhamre! I've always just shrugged this issue off with "fs overhead, rounding, etc.", but decided it would be interesting to really run the numbers just this once! – MadHatter Oct 28 '13 at 07:24
  • I like the detail of the fs overhead percentages. I've gotten used to it as you have, I know when it doesn't look right by training, but never considered the actual percentages. Well done! – Regan Oct 28 '13 at 14:03
  • Iain, thanks for the bounty - but even more so for the kind (now invisible) comments thereon. The praise of the praiseworthy is above all rewards! – MadHatter Oct 29 '13 at 13:07
  • as listed in my OP after seeing LVM and VGS i figured out with LVS all my space is accounted for – WifiGhost Oct 30 '13 at 01:56
  • I'm glad we pointed you to a line of research that led you to the answer - it's always a better learning experience to solve your own problems. Nevertheless, you asked this question; it exists. Unanswered questions on SF hang around forever, and keep floating back to the top of the main page periodically, in the hope of getting an answer. Unless you feel that a better one may show up (and if you've solved your own problem and agree with us, that's unlikely), it would be both courteous and best practice to accept any one of the answers already posted. – MadHatter Oct 30 '13 at 08:56
1

with a RAID 5 using 500GB drives. So in my mind i start out with 990GB

Why? A 32-drive RAID 5 has significantly more. Oh, you mean 3 drives. Maybe you should say that.

i only see a total of about 882GB,

You are aware that the G means different things? Operating systems love to count it as 1024*1024*1024 bytes. Drive manufacturers like higher numbers, so they use the decimal 1,000,000,000. This is a significant difference right there.

Also, RAID controllers sometimes do not use all sectors. Replacement hard discs may be slightly smaller, so for easy replacement the controller rounds the usable space down a little (down to a full GB). If you start a RAID with drives that happen to be slightly larger, you then don't run into a problem later when plugging in a replacement drive.

But mostly I think you are running into the difference in how the units are calculated.

This is the difference between 1,000,000,000 and 1,073,741,824; as you can see, that is about 73.7 million bytes "missing" per gigabyte.
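A quick back-of-the-envelope check with bc, if you want the exact numbers:

    # bytes of difference per "gigabyte", depending on which definition you use
    echo "2^30 - 10^9" | bc
    # -> 73741824

    # i.e. the binary unit is about 7.4% larger than the decimal one
    echo "scale=4; 2^30 / 10^9" | bc
    # -> 1.0737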

TomTom
  • Yes, with (3) 500GB drives the RAID BIOS shows 1000GB usable, but I know it will actually be under 1TB due to the HDD companies using rounded numbers. – WifiGhost Oct 21 '13 at 04:35