I've recently grown a 5× 3 TB mdadm RAID6 array (~8.2 TiB usable) with a sixth disk under Fedora 18 (the grow commands are sketched after the output below), and after the full reshape and check, "mdadm --detail /dev/md127" reports the following:
            Version : 1.2
      Creation Time : Sun Feb 10 22:01:32 2013
         Raid Level : raid6
         Array Size : 11720534016 (11177.57 GiB 12001.83 GB)
      Used Dev Size : 2930133504 (2794.39 GiB 3000.46 GB)
       Raid Devices : 6
      Total Devices : 6
        Persistence : Superblock is persistent

      Intent Bitmap : Internal

        Update Time : Sun Jul 21 17:31:32 2013
              State : active
     Active Devices : 6
    Working Devices : 6
     Failed Devices : 0
      Spare Devices : 0

             Layout : left-symmetric
         Chunk Size : 512K

               Name : ubuntu:tercore
               UUID : f52477e1:ded036fa:95632986:dcb84e51
             Events : 326236

        Number   Major   Minor   RaidDevice State
           0       8        1        0      active sync   /dev/sda1
           1       8       17        1      active sync   /dev/sdb1
           2       8       33        2      active sync   /dev/sdc1
           4       8       49        3      active sync   /dev/sdd1
           5       8       80        4      active sync   /dev/sdf
           6       8       64        5      active sync   /dev/sde
All good: six active devices, and the Array Size now reflects the expected ~12 TB.
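For reference, the grow was done roughly like this (a sketch rather than the exact history; I'm assuming /dev/sde was the newly added disk, as the device table above suggests):

    # add the sixth disk as a spare, then reshape the array from 5 to 6 devices
    mdadm --add /dev/md127 /dev/sde
    mdadm --grow /dev/md127 --raid-devices=6
    # depending on the mdadm/kernel version, a --backup-file may also be needed

The reshape was left to run to completion before any of the checks below.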
I then ran "cat /proc/mdstat", which returned:
    Personalities : [raid6] [raid5] [raid4]
    md127 : active raid6 sde[6] sdf[5] sda1[0] sdb1[1] sdd1[4] sdc1[2]
          11720534016 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
          bitmap: 0/22 pages [0KB], 65536KB chunk

    unused devices: <none>
Also fine; [6/6] [UUUUUU] means all six members are active and in sync.
However, when running "df -h" I get the output below, which still reports the old capacity: /dev/md127 shows 8.2T, matching the ~9 TB (8.2 TiB) usable of the old 5-disk RAID6 rather than the ~12 TB (10.9 TiB) the grown 6-disk array should provide:
    Filesystem                           Size  Used Avail Use% Mounted on
    devtmpfs                             922M     0  922M   0% /dev
    tmpfs                                939M  140K  939M   1% /dev/shm
    tmpfs                                939M  2.6M  936M   1% /run
    tmpfs                                939M     0  939M   0% /sys/fs/cgroup
    /dev/mapper/fedora_faufnir--hp-root   26G  7.2G   17G  30% /
    tmpfs                                939M   20K  939M   1% /tmp
    /dev/sdg1                            485M  108M  352M  24% /boot
    /dev/md127                           8.2T  7.6T  135G  99% /home/teracore
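Since mdadm clearly sees the larger array while df does not, my guess is that the filesystem on /dev/md127 simply hasn't picked up the extra space. A quick way to compare what the block device and the filesystem each think their size is (a rough check; the dumpe2fs line assumes the filesystem is ext4):

    # size of the block device itself, in bytes (should reflect the grown array)
    blockdev --getsize64 /dev/md127

    # size the filesystem thinks it has: Block count x Block size (ext4 assumed)
    dumpe2fs -h /dev/md127 | grep -E 'Block count|Block size'

If the device reports ~12 TB while the filesystem's block count still works out to ~9 TB, that would match what df shows.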
Can anyone help me fix this mismatch? It naturally also causes Samba to report the old capacity to my Windows laptop.
Many thanks in advance! Will.