
I currently have 9 x 1TB disks in RAID5, which should give me 8TB of usable storage. However, I am not getting that at all. This was after moving from RAID6 to RAID5 and running the necessary commands to resize the filesystem.
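
The conversion and resize were roughly along these lines (sketch only, not necessarily the exact invocations; it assumes ext4 on /dev/md0, and the backup-file path is just an example):

mdadm --grow /dev/md0 --level=raid5                                      # RAID6 -> RAID5, one disk drops out as a spare
mdadm --grow /dev/md0 --raid-devices=9 --backup-file=/root/md0-grow.bak  # reshape back across all 9 disks
resize2fs /dev/md0                                                       # grow the filesystem to fill the enlarged array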

mdadm --detail /dev/md0

/dev/md0:
        Version : 1.2
  Creation Time : Sun Apr  8 18:20:33 2012
     Raid Level : raid5
     Array Size : 7804669952 (7443.11 GiB 7991.98 GB)
  Used Dev Size : 975583744 (930.39 GiB 999.00 GB)
   Raid Devices : 9
  Total Devices : 9
    Persistence : Superblock is persistent

    Update Time : Tue Dec 10 10:15:08 2013
          State : clean
 Active Devices : 9
Working Devices : 9
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : ares:0  (local to host ares)
           UUID : 97b392d0:28dc5cc5:29ca9911:24cefb6b
         Events : 995494

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       5       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1
       4       8       81        4      active sync   /dev/sdf1
       9       8      113        5      active sync   /dev/sdh1
      11       8       97        6      active sync   /dev/sdg1
       6       8      145        7      active sync   /dev/sdj1
      10       8      129        8      active sync   /dev/sdi1

df -h

Filesystem Size  Used Avail Use% Mounted on
/dev/md0   7.2T  2.1T  4.8T  31% /mnt/raid

Is this normal and to be expected, or am I doing something wrong?

Steven Lu
    RAID5 is a terrible idea for an array that size. You should be using RAID6. If a drive fails or experiences an Unrecoverable Read Error, the chances of a second drive also failing before you swap the drive and complete the rebuild are not insignificant. – longneck Dec 10 '13 at 15:25
  • @longneck I'm not running anything that's mission critical. In fact, I'm doing quite the opposite and all essential data is backed up offsite. From what I remember, the failure rate for RAID5 is 0.07% and RAID6 is 0.00001%. Not too significant right? – Steven Lu Dec 10 '13 at 15:52
  • The failure rate is based on MTBF and URE frequency for the drives, and will therefore vary based on configuration. It can't be boiled down to a single, universal percentage. – longneck Dec 10 '13 at 16:43

1 Answer


This is the age-old binary vs. denary kilo/mega/giga/terabytes issue.

Note the line

Array Size : 7804669952 (7443.11 GiB 7991.98 GB)

So whilst your array size is 7991.98 GB in denary GB (pretty much exactly 8 x 1 TB), in binary GiB it is 7443.11 GiB. Dividing by 2^10 again gives 7.27 TiB; losing about 1.5% to filesystem overhead takes us to 7.16 TiB, or 7.2 T with rounding, which is exactly what df is reporting.
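
If you want to reproduce the arithmetic yourself, here is a quick sketch (the mdadm "Array Size" figure is in 1 KiB blocks; the ~1.5% overhead is taken as given here, see the link below for where it comes from):

awk 'BEGIN {
    kib = 7804669952                                    # mdadm Array Size, in 1 KiB blocks
    printf "denary GB  : %.2f\n", kib * 1024 / 1000^3   # ~7991.98 GB
    printf "binary GiB : %.2f\n", kib / 1024^2          # ~7443.11 GiB
    printf "binary TiB : %.2f\n", kib / 1024^3          # ~7.27 TiB
    printf "minus ~1.5%% FS overhead: %.2f TiB\n", kib / 1024^3 * 0.985  # ~7.16 TiB -> df rounds to 7.2T
}'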

To see a more detailed analysis of a similar array, including a justification for that "1.5%" figure, read my answer here.

MadHatter