
I've just set up RAID 10 on an old server with four hard drives, using the CentOS 7 installer, but I'm quite confused by the result.

Here's the output of /proc/mdstat:

Personalities : [raid10] [raid1]
md123 : active raid1 sda4[0] sdb4[1] sdc4[3] sdd4[2]
      1049536 blocks super 1.0 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md124 : active raid10 sda1[0] sdb1[1] sdd1[2] sdc1[3]
      838860800 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
      bitmap: 0/7 pages [0KB], 65536KB chunk

md125 : active raid1 sda3[0] sdb3[1] sdc3[3] sdd3[2]
      1048576 blocks super 1.2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md126 : active raid10 sda2[0] sdb2[1] sdc2[3] sdd2[2]
      16793600 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]

md127 : active raid10 sdb5[1] sda5[0] sdc5[3] sdd5[2]
      116574208 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
      bitmap: 1/1 pages [4KB], 65536KB chunk

I'm not sure whether the installer created a proper RAID 10. Can someone explain whether anything is wrong with this?

If you don't see a problem, then I guess the CentOS installer just confused me: it forced me to choose RAID 1 for the /boot and /boot/efi partitions. So I'm wondering where those partitions live on the disks, and whether I will still be able to boot if a disk fails.

Here is the lsblk output:

NAME      MAJ:MIN RM   SIZE RO TYPE   MOUNTPOINT
sda         8:0    0 465,8G  0 disk
├─sda1      8:1    0 400,1G  0 part
│ └─md124   9:124  0   800G  0 raid10 /data
├─sda2      8:2    0     8G  0 part
│ └─md126   9:126  0    16G  0 raid10 [SWAP]
├─sda3      8:3    0     1G  0 part
│ └─md125   9:125  0     1G  0 raid1  /boot
├─sda4      8:4    0     1G  0 part
│ └─md123   9:123  0     1G  0 raid1  /boot/efi
└─sda5      8:5    0  55,6G  0 part
  └─md127   9:127  0 111,2G  0 raid10 /
sdb         8:16   0 465,8G  0 disk
├─sdb1      8:17   0 400,1G  0 part
│ └─md124   9:124  0   800G  0 raid10 /data
├─sdb2      8:18   0     8G  0 part
│ └─md126   9:126  0    16G  0 raid10 [SWAP]
├─sdb3      8:19   0     1G  0 part
│ └─md125   9:125  0     1G  0 raid1  /boot
├─sdb4      8:20   0     1G  0 part
│ └─md123   9:123  0     1G  0 raid1  /boot/efi
└─sdb5      8:21   0  55,6G  0 part
  └─md127   9:127  0 111,2G  0 raid10 /
sdc         8:32   0 465,8G  0 disk
├─sdc1      8:33   0 400,1G  0 part
│ └─md124   9:124  0   800G  0 raid10 /data
├─sdc2      8:34   0     8G  0 part
│ └─md126   9:126  0    16G  0 raid10 [SWAP]
├─sdc3      8:35   0     1G  0 part
│ └─md125   9:125  0     1G  0 raid1  /boot
├─sdc4      8:36   0     1G  0 part
│ └─md123   9:123  0     1G  0 raid1  /boot/efi
└─sdc5      8:37   0  55,6G  0 part
  └─md127   9:127  0 111,2G  0 raid10 /
sdd         8:48   0 465,8G  0 disk
├─sdd1      8:49   0 400,1G  0 part
│ └─md124   9:124  0   800G  0 raid10 /data
├─sdd2      8:50   0     8G  0 part
│ └─md126   9:126  0    16G  0 raid10 [SWAP]
├─sdd3      8:51   0     1G  0 part
│ └─md125   9:125  0     1G  0 raid1  /boot
├─sdd4      8:52   0     1G  0 part
│ └─md123   9:123  0     1G  0 raid1  /boot/efi
└─sdd5      8:53   0  55,6G  0 part
  └─md127   9:127  0 111,2G  0 raid10 /
sr0        11:0    1  1024M  0 rom
  • I don't see a problem. All of your RAID arrays look correct. Do you have a specific question? – Michael Hampton Mar 19 '18 at 20:45
  • md123 and md125 are your boot partitions. Unlike with a hardware RAID, they must be contained on the same disk in a software RAID (i.e. they can't span disks), otherwise the BIOS would have a hard time accessing them. Regarding booting in case of a failure, you may find this helpful: https://unix.stackexchange.com/questions/230349/how-to-correctly-install-grub-on-a-soft-raid-1 (not sure if the CentOS installer handles this for you) – Brandon Xavier Mar 19 '18 at 23:40
  • @BrandonXavier That seems a confusing way of putting it. md123 through md127 are all separate RAID arrays. From the sounds of it, /boot and /boot/efi are separate RAID arrays for some unusual reason, and based on the limited info given they are likely to be on the RAID arrays md123 and md125, which are both 4-disk RAID 1 arrays. Also, it is GRUB that accesses the /boot partition/RAID, not the BIOS. The BIOS only loads GRUB from the first sector of a single disk. This is why it's a good idea to install GRUB on every disk. – BeowulfNode42 Mar 20 '18 at 01:31
  • @BeowulfNode42 You are correct - I should have said it was GRUB rather than the BIOS that would have trouble accessing a boot partition spanning physical disks in a software RAID. And yes, it is unusual and maybe a little awkward to have an individual RAID for each partition - but not invalid as far as I'm aware. – Brandon Xavier Mar 20 '18 at 10:38
  • I think the `lsblk` output answers most of your questions, no? – Michael Hampton Mar 20 '18 at 21:41
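
Following up on the comments above: on a BIOS-booted machine, putting GRUB on every member disk is usually just a loop over the drives. A minimal sketch, using the device names from the question (note that this particular system boots via UEFI, where the firmware loads the bootloader from the mirrored /boot/efi partition instead, so this applies to BIOS setups):

# for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do grub2-install "$d"; done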

2 Answers


Yes, it is correct, as you can see from

# cat /proc/mdstat

/boot is RAID 1:

md125 : active raid1 sda3[0] sdb3[1] sdc3[3] sdd3[2]
      1048576 blocks super 1.2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

and so is /boot/efi (md123); the rest are RAID 10.

So basically, you can find out by issuing # cat /proc/mdstat.
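
If you want mdadm to report the level explicitly instead of reading it out of /proc/mdstat, a quick check (a sketch using one of the array names from the question) is:

# mdadm --detail /dev/md125

which prints, among other things, a line like "Raid Level : raid1".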


When using software RAID, the preferred approach is generally the one you have: partition the drives, then create several RAID arrays from partitions on the different drives.

It is of course possible to create a single array from the raw, unpartitioned drives and then partition the resulting RAID device. However, many tools, and bootloaders in particular, have a hard time working properly with such setups.
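
For comparison, building one of these partition-backed RAID 10 arrays by hand would look roughly like the following sketch (hypothetical array name; the installer did the equivalent for you):

# mdadm --create /dev/md/data --level=10 --layout=n2 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

The n2 layout (two near-copies) matches the "2 near-copies" your /proc/mdstat shows for md124, md126 and md127.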
