
My older 2-CPU server (32-bit LV Xeon, CentOS 5) drives a RAID 6 array. Since I grew it from 7x2T to 8x2T (I had to boot a "more modern" Fedora from CD just to run the 82-hour grow, but that's another story), LVM only recognizes 10.92 TB of /dev/md0's 12 TB, so I end up with a file system of less than 10 TB:

[root@svr ~]# mdadm --detail /dev/md0; pvdisplay /dev/md0; fdisk -l /dev/md0; df -k

/dev/md0:
        Version : 0.90
  Creation Time : Tue Dec 28 05:26:04 2010
     Raid Level : raid6
     Array Size : 11721071616 (11178.09 GiB 12002.38 GB)
  Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
   Raid Devices : 8
  Total Devices : 8
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Mon May 28 17:53:18 2012
          State : clean
 Active Devices : 8
Working Devices : 8
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 64K

           UUID : 6de03491:96f53423:b23fa12b:2f674132
         Events : 0.51806

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1
       4       8       81        4      active sync   /dev/sdf1
       5       8       97        5      active sync   /dev/sdg1
       6       8      129        6      active sync   /dev/sdi1
       7       8      113        7      active sync   /dev/sdh1
  --- Physical volume ---
  PV Name               /dev/md0
  VG Name               vg0
  PV Size               10.92 TB / not usable 2.75 MB
  Allocatable           yes (but full)
  PE Size (KByte)       4096
  Total PE              2861589
  Free PE               0
  Allocated PE          2861589
  PV UUID               KSZAHI-oldy-hpmS-gCk9-9a6z-psGT-2Ui6L8

Disk /dev/md0: 12002.3 GB, 12002377334784 bytes
2 heads, 4 sectors/track, -1364699392 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md0 doesn't contain a valid partition table

Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      75449724  59475364  12079884  84% /
/dev/sda1               101086     62152     33715  65% /boot
tmpfs                  5131380         0   5131380   0% /dev/shm
/dev/mapper/vg0-Movies
                     9767428096 8968488468 798939628  92% /mnt/Movies

[root@svr ~]# uname -a
Linux svr.gheiden.com 2.6.18-308.1.1.el5PAE #1 SMP Wed Mar 7 04:57:46 EST 2012 i686 i686 i386 GNU/Linux

Is there some 32-bit limit in LVM or XFS that produces those 9.8 TB? Would it help to repartition the disks so that two devices of less than 10 TB each are created? Right now I can't do that; there is no room to back up the data ...
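In case it helps rule out unit confusion, the raw device size and what LVM sees can be cross-checked in plain bytes (a rough sketch; it assumes the stock util-linux blockdev and LVM2 pvs/vgs tools, which CentOS 5 should have):

blockdev --getsize64 /dev/md0                          # raw md device size, in bytes
pvs --units b -o pv_name,pv_size,pv_free /dev/md0      # PV size as LVM sees it, in bytes
vgs --units b -o vg_name,vg_size,vg_free vg0           # VG size and free space, in bytes

With everything in bytes, a decimal-TB versus binary-TiB mismatch can't hide in the rounding.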

added:

CentOS 5.8

[root@svr ~]# xfs_info /mnt/Movies/
meta-data=/dev/vg0/Movies        isize=256    agcount=41, agsize=61047232 blks
         =                       sectsz=512   attr=0
data     =                       bsize=4096   blocks=2441889792, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=32768, version=1
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0

[root@svr ~]# lvdisplay /dev/vg0/Movies
  --- Logical volume ---
  LV Name                /dev/vg0/Movies
  VG Name                vg0
  LV UUID                v1ilLf-nUe7-8grx-XcS1-EJXp-hPdl-mcRrb3
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                10.92 TB
  Current LE             2861589
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     1536
  Block device           253:2
  • What exact version of CentOS are you using? Certainly there are issues with *any* RH5 variant and large volumes, all fixed in 6, obviously, but that's no use to you. – Chopper3 May 28 '12 at 17:25
  • It sure seems to me that you are in an awfully bad position if you can't back up 8 TB of data. What happens when your filesystem becomes corrupted... Anyway, given that your `Array Size: 11178.09 GiB` and the size of your drives is `Used Dev Size: 1863.01 GiB`, an LVM size of 10.92 TB is exactly what you would expect. Please remember that some tools working on drives display sizes in powers of 10 instead of powers of 2, so from the size fdisk reports for the array, `12002377334784 / 2^40 ≈ 10.92 TiB` (where 1 TiB is exactly 2^40 bytes); the arithmetic is spelled out in the snippet after these comments. – Zoredache May 28 '12 at 20:01
  • BTW, can you give us an lvdisplay for that volume group and some more detailed information about the filesystem properties? I don't know about XFS overhead. I believe the tool that dumps the fs info is `xfs_info`. – Zoredache May 28 '12 at 20:15
  • Thanks for the quick answer, I should have done the math before posting: PE 4096 x 2861589 results in 11721068544 KiB, which is close enough. The file system overhead is to be expected, so everything is as it should be. Still, here are the data you asked for: – struwwelpeter May 28 '12 at 22:19
  • lvdisplay gives the same numbers: `LV Size 10.92 TB, Current LE 2861589, Segments 1, Allocation inherit, Read ahead sectors auto (currently set to 1536), Block device 253:2` – struwwelpeter May 28 '12 at 22:29
  • CentOS 5.8, only missing a few updates. – struwwelpeter May 28 '12 at 22:38
  • and finally xfs_info (output added to the question above). – struwwelpeter May 28 '12 at 22:41
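To make the arithmetic from the comments explicit, here is a quick check of the three reported figures (a sketch using plain shell and bc, which is assumed to be installed):

# fdisk reports the array in decimal bytes; pvdisplay reports binary units it labels "TB".
echo "scale=2; 12002377334784 / (1024^4)" | bc    # bytes -> TiB: prints 10.91 (pvdisplay rounds this to 10.92)
# The PV's extents account for essentially the whole array: 2861589 PEs of 4096 KiB each.
echo "scale=2; 2861589 * 4096 / (1024^3)" | bc    # KiB -> TiB: prints 10.91
# mdadm's own figure for comparison (Array Size is in 1 KiB blocks).
echo "scale=2; 11721071616 / (1024^3)" | bc       # KiB -> TiB: prints 10.91

So nothing is being lost to a 32-bit limit: the 12 TB and 10.92 TB figures are the same size expressed in decimal and binary units, minus a few MB of LVM metadata and extent rounding.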
