
I have a server with two LUNs mounted from a local SAN. I have a configuration file in place for the vendor software we're using (Splunk) that defines the size of the second LUN, but I had accidentally configured it as 6GB larger than it actually was. This morning I came in to see whistles going off about the error. It has been fixed, and the Splunk server process has been restarted to make use of it. It should be clearing out data, and it appears to be doing so. However, when I look at the output of df, I see something weird:

Filesystem            Size  Used Avail Use% Mounted on
/dev/cciss/c0d0p3     507G  4.0G  477G   1% /
/dev/cciss/c0d0p1      97M   19M   73M  21% /boot
tmpfs                  36G     0   36G   0% /dev/shm
/dev/mapper/hot_group-lvol0
                      148G  128G   14G  91% /splunk/hot
/dev/mapper/cold_group-lvol0
                      837G  797G     0 100% /splunk/cold

As you can see, df shows that the total size of the disk is significantly larger (by roughly 40GB) than the used space. However, it still shows 0 bytes available. Can anyone explain this?

Matthew

1 Answer


That space is reserved for use by root. You can adjust the size of the reserved area with tune2fs -m 1 /dev/mapper/cold_group-lvol0 (assuming it's ext2/3/4) to set it to 1% instead of the default 5%. I think the filesystem needs to be unmounted in order to change it.
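As a rough sanity check on the numbers above (device name taken from the df output in the question; this is a sketch against the ext defaults, not output from the affected box), the default 5% reservation on an 837G filesystem is about 42G, which lines up with the ~40GB gap between Size and Used:

# Default reservation is 5% of the filesystem: 0.05 * 837G ≈ 42G,
# which matches the Size - Used gap df reports for /splunk/cold.
# Drop it to 1% (ext2/3/4 only):
tune2fs -m 1 /dev/mapper/cold_group-lvol0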

DerfK
  • This appears to have worked perfectly! I set it to 0% (there's no reason I'd ever be chrooted to that LUN). Only one last question - are those changes persistent, or do I need to put that in rc.local? – Matthew Aug 07 '12 at 15:56
  • 1
    Permanent changes to the filesystem itself – DerfK Aug 07 '12 at 17:44
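In case it helps anyone later: the setting lives in the filesystem superblock, so it can be read back at any time with tune2fs -l (a hedged example; the field name is the usual one printed by e2fsprogs):

tune2fs -l /dev/mapper/cold_group-lvol0 | grep -i 'reserved block count'
# "Reserved block count: 0" here confirms the -m 0 change was written to disk
# and persists across reboots - nothing is needed in rc.local.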