I am not sure why I am having this issue, so I hope someone can spot something I am missing.

I created a kickstart file for a test CentOS 7 automated install. Nothing generates a warning except the storage section that handles the partitioning. This is that section:

clearpart --all --initlabel --drives=/dev/sda,/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde,/dev/sdf,/dev/sdg,/dev/sdh

part raid.1 --size=1024 --ondisk=/dev/sda
part raid.2 --size=1024 --ondisk=/dev/sdb
part raid.3 --size=1024 --ondisk=/dev/sdc
part raid.4 --size=1024 --ondisk=/dev/sdd
part raid.5 --size=1024 --ondisk=/dev/sde
part raid.6 --size=1024 --ondisk=/dev/sdf
part raid.7 --size=1024 --ondisk=/dev/sdg
part raid.8 --size=1024 --ondisk=/dev/sdh

part raid.9 --size=256 --ondisk=/dev/sda
part raid.10 --size=256 --ondisk=/dev/sdb
part raid.11 --size=256 --ondisk=/dev/sdc
part raid.12 --size=256 --ondisk=/dev/sdd
part raid.13 --size=256 --ondisk=/dev/sde
part raid.14 --size=256 --ondisk=/dev/sdf
part raid.15 --size=256 --ondisk=/dev/sdg
part raid.16 --size=256 --ondisk=/dev/sdh

part raid.17 --size=20480 --ondisk=/dev/sda
part raid.18 --size=20480 --ondisk=/dev/sdb
part raid.19 --size=20480 --ondisk=/dev/sdc
part raid.20 --size=20480 --ondisk=/dev/sdd
part raid.21 --size=20480 --ondisk=/dev/sde
part raid.22 --size=20480 --ondisk=/dev/sdf
part raid.23 --size=20480 --ondisk=/dev/sdg
part raid.24 --size=20480 --ondisk=/dev/sdh

raid /boot --fstype="xfs" --device=boot --level=10 raid.1 raid.2 raid.3 raid.4 raid.5 raid.6 raid.7 raid.8
raid /boot/efi --fstype="efi" --device=boot_efi --level=10 raid.9 raid.10 raid.11 raid.12 raid.13 raid.14 raid.15 raid.16
raid pv.1 --fstype="lvmpv" --device=root --level=10 raid.17 raid.18 raid.19 raid.20 raid.21 raid.22 raid.23 raid.24

volgroup vg1 pv.1

logvol / --fstype="xfs" --size=1 --grow --name=root --vgname=vg1

bootloader --append=" crashkernel=auto" --location=mbr

I am trying to create three partitions:

  • /boot - 1024 MiB size, formatted to xfs, RAID 10
  • /boot/efi - 256 MiB size, formatted to efi, RAID 10
  • / - 20 GiB size, formatted to xfs, RAID 10 + LVM

I am using the graphical install so I can look at everything quickly. It appears to be marking /boot/efi as efi, yet I still get the error below, which prevents me from completing the installation:

No valid boot loader target device found. See below for details. For a UEFI installation you must include a EFI System Partition on a GPT-formatted disk, mounted at /boot/efi.

The other odd thing I am seeing is that it is not using my values for the partition sizes. Based on the kickstart file above, these are the sizes I am seeing:

  • /boot - should be 1024 MiB, CentOS 7 makes it 4092 MiB
  • /boot/efi - should be 256 MiB, CentOS 7 makes it 1020 MiB
  • / - should be 20 GiB, CentOS 7 makes it 79.93 GiB

I would appreciate any assistance on this.

Alex Mikhaelson

1 Answer


Your sizes are exactly what they are supposed to be, given the part commands. Your first partition on each device is 1024 MiB and you have 8 devices in a RAID 10, so that is 1024 * 8 / 2, or 4096. For RAID 10 the size of the volume is (number of active devices × size of the smallest member) / 2.
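The same arithmetic accounts for all three of the sizes you listed (the small shortfalls from the round numbers are presumably RAID metadata overhead):

  • /boot: 1024 MiB * 8 / 2 = 4096 MiB (reported as 4092 MiB)
  • /boot/efi: 256 MiB * 8 / 2 = 1024 MiB (reported as 1020 MiB)
  • /: 20480 MiB * 8 / 2 = 81920 MiB ≈ 80 GiB (reported as 79.93 GiB)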

I highly doubt a software RAID 10 is valid for an EFI partition, and unless something has changed it isn't going to be valid for your /boot partition either. I suspect your only choice there is a simple RAID 1 volume. It is valid to have a RAID 1 volume that spans 8 devices, so you could try changing your /boot and /boot/efi over to RAID 1, as sketched below. With RAID 1 the size of the volume will just be the size of the smallest active member.
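A minimal sketch of that change, assuming the rest of your kickstart stays exactly as posted (only the --level values on the two raid lines differ; I have not tested this):

# Same member partitions as before, assembled as RAID 1 instead of RAID 10
raid /boot --fstype="xfs" --device=boot --level=1 raid.1 raid.2 raid.3 raid.4 raid.5 raid.6 raid.7 raid.8
raid /boot/efi --fstype="efi" --device=boot_efi --level=1 raid.9 raid.10 raid.11 raid.12 raid.13 raid.14 raid.15 raid.16

Since a RAID 1 array is only as large as its smallest member, the 1024 MiB and 256 MiB part sizes should carry through unchanged to the finished /boot and /boot/efi volumes.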

Zoredache
  • Good catch on the RAID 10 sizes. I didn't realize that it was a multiple. – Alex Mikhaelson Jan 21 '17 at 00:54
  • Also, @Zoredache I just restarted the install with your suggestion to use RAID 1 instead of RAID 10, and it seems to work. Thanks for catching that. I don't fully understand why RAID 1 works but RAID 10 does not; is that because of how Linux does RAID 10? – Alex Mikhaelson Jan 21 '17 at 01:27
  • RAID 10 is fine for /boot, but it's correct that you can't use such a thing for the ESP volume. Use a RAID 1 for that. Nothing else in this situation requires that. You might also consider not separating / and /boot, since it's all going to the same place (with the same FS type) anyway. This is caused by the way your motherboard reads the ESP volumes: it finds the first intact one and reads from that. However, it expects to find an intact ESP on a single disk. RAID 1 makes exact copies to all disks, so that's the mechanism to use here. – Spooler Jan 23 '17 at 02:22
  • Thanks @SmallLoanOf1M. I realized after I wrote my question that if I was trying to find the first bootable partition in a software raid setup, I should take into account that I wouldn't know which disk it would be on initially in a RAID 10 setup. Your answer confirmed this. I appreciate it! – Alex Mikhaelson Jan 24 '17 at 00:01