
I'm running CentOS 7.8 with mdadm v4.1.

I have 4 NVMe drives (3.2TB each) configured in RAID 10, so 50% usable space (6.4TB):

Personalities : [raid10]
md0 : active raid10 nvme5n1p1[6] nvme4n1p1[5] nvme0n1p1[4] nvme3n1p1[3] nvme2n1p1[2] nvme1n1p1[1]
      6250967040 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
      bitmap: 0/47 pages [0KB], 65536KB chunk

I am trying to increase usable space to 9.6TB by adding 2 more NVMe drives.
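
For reference, the standard sequence for this kind of grow looks roughly like this (a sketch; the partition names are taken from the detail output below):

    # add the two new partitions as spares, then reshape to 6 raid devices
    mdadm /dev/md0 --add /dev/nvme4n1p1 /dev/nvme5n1p1
    mdadm --grow /dev/md0 --raid-devices=6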

After executing the grow command, I have this:

/dev/md0:
           Version : 1.2
     Creation Time : Wed Sep 23 15:51:45 2020
        Raid Level : raid10
        Array Size : 6250967040 (5961.39 GiB 6400.99 GB)
     Used Dev Size : 3125483520 (2980.69 GiB 3200.50 GB)
      Raid Devices : 6
     Total Devices : 6
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Fri Sep 25 11:03:05 2020
             State : clean, reshaping
    Active Devices : 6
   Working Devices : 6
    Failed Devices : 0
     Spare Devices : 0

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : bitmap

    Reshape Status : 5% complete
     Delta Devices : 2, (4->6)

              Name : db04:0  (local to host db04)
              UUID : a0d10c0a:fd5fb830:e986407d:5dca539b
            Events : 7983

    Number   Major   Minor   RaidDevice State
       4     259        7        0      active sync set-A   /dev/nvme0n1p1
       1     259        6        1      active sync set-B   /dev/nvme1n1p1
       2     259        5        2      active sync set-A   /dev/nvme2n1p1
       3     259        2        3      active sync set-B   /dev/nvme3n1p1
       6     259       11        4      active sync set-A   /dev/nvme5n1p1
       5     259       10        5      active sync set-B   /dev/nvme4n1p1

The array size is still reported as 6.4TB rather than 9.6TB. It looks as if the array is keeping 3 copies of the data instead of 2.

Bastien974

1 Answer


It turned out that the final array size of 9.6TB takes effect, and is displayed, only after the reshape completes.
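
Once the reshape finishes, the new size can be confirmed with the usual commands, e.g. (a sketch):

    # watch the reshape progress
    cat /proc/mdstat

    # after completion, Array Size should report ~9.6TB across the 6 devices
    mdadm --detail /dev/md0 | grep 'Array Size'

Note that any filesystem sitting on /dev/md0 still has to be grown separately (e.g. resize2fs or xfs_growfs, depending on the filesystem) before the extra space is usable.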

Bastien974