
I have an issue with a software RAID I built over 2 expanders with the disks presented as JBOD. After every reboot of the server, the naming of the virtual drives changes, and the array numbers and layout get scrambled. Example:

NAME           MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sdu             65:64   0  2.7T  0 disk  
└─md112          9:112  0  8.2T  0 raid5 
  └─md104        9:104  0 24.6T  0 raid0 
sdv             65:80   0  2.7T  0 disk  
└─md127          9:127  0  8.2T  0 raid5 
  └─md105        9:105  0 24.6T  0 raid0 
sdw             65:96   0  2.7T  0 disk  
└─md108          9:108  0  8.2T  0 raid5 
  └─md105        9:105  0 24.6T  0 raid0 
sdx             65:112  0  2.7T  0 disk  
└─md122          9:122  0  8.2T  0 raid5 
  └─md102        9:102  0 24.6T  0 raid0 

After a reboot it changes to the following, and the next reboot changes it to something else again:

NAME           MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sdu             65:64   0  2.7T  0 disk  
└─md115          9:115  0  8.2T  0 raid5 
  └─md104        9:104  0 24.6T  0 raid0 
sdv             65:80   0  2.7T  0 disk  
└─md115          9:115  0  8.2T  0 raid5 
  └─md104        9:104  0 24.6T  0 raid0 
sdw             65:96   0  2.7T  0 disk  
└─md115          9:115  0  8.2T  0 raid5 
  └─md104        9:104  0 24.6T  0 raid0 
sdx             65:112  0  2.7T  0 disk  
└─md115          9:115  0  8.2T  0 raid5 
  └─md104        9:104  0 24.6T  0 raid0 

Initially the names only went up to md28 (md0 is a simple linear array for the system disks, md1-19 are RAID5 volumes, md22-28 are RAID0 over the RAID5 volumes).

I don't know why I am now getting different md names each time, and the arrays are even using different disks than the ones I configured them with. Above is an example of how it changes, and here is an example of how I built this one RAID5 array (and the RAID0 on top of it):

mdadm --create /dev/md6 -v --raid-devices=4 --level=5 /dev/sd[uvwx]

mdadm --create /dev/md23 -v --raid-devices=3 --level=0 /dev/md4 /dev/md5 /dev/md6

(waited for the recovery to finish)
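To watch the resync while waiting, something like this works (same /dev/md6 as in the create command above):

cat /proc/mdstat

mdadm --wait /dev/md6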

mkfs.ext4 -F /dev/md23

mkdir -p /mnt/md23

mount /dev/md23 /mnt/md23

df -h -x devtmpfs -x tmpfs

Here, before the reboot, everything looked perfectly fine and mounted, along with all the other configured disks. Then I rebooted...

I'm not sure what is happening here. Could it be that the controllers/Linux keep renaming the sdXX devices, while mdadm tries to keep the configuration the same regardless, maybe based on UUID or something?

Or should I somehow use UUIDs while building it, if that is possible or even preferable? This question suggests it, but in a different context: mdadm: Disk configuration by UUID. I did not anticipate this; I was actually pretty sure that sdXX naming would be enough. But maybe not when controllers/expanders and JBOD are in place :/
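If I understand it right, going by UUID would look roughly like this; the UUID below is just a placeholder, the real one comes from mdadm --detail (or mdadm -Es) after the array exists:

mdadm --detail /dev/md6 | grep UUID

mdadm --assemble /dev/md6 --uuid=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx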

  • I noticed in mdadm.conf there is only one entry; is this where all the others should have been? Why wasn't it saved automatically or something? It's nowhere hinted that this file also has to be modified after building an array. `# definitions of existing MD arrays ARRAY /dev/md/0 metadata=1.2 UUID=fc5d7da2:fe91ddfc:c1f16e1c:8e2e7bc3 name=debian:0` – J B Aug 14 '18 at 14:26
  • I also tried this advice with no effect: how-can-i-make-mdadm-auto-assemble-raid-after-each-boot https://superuser.com/questions/287462/how-can-i-make-mdadm-auto-assemble-raid-after-each-boot – J B Aug 14 '18 at 14:45

1 Answer


So basically my answer appears to be in the thread I posted in the comments, https://superuser.com/questions/287462/how-can-i-make-mdadm-auto-assemble-raid-after-each-boot, plus update-initramfs -u. But be careful: I managed to also create a duplicate md0 entry in mdadm.conf and got stuck in the initramfs, where I had to delete it.

BUT the problem was still there until I cleared mdadm.conf and ran:

root@debian:~# mdadm -Es >> /etc/mdadm/mdadm.conf
root@debian:~# update-initramfs -u -v
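
The lines that -Es appends look like the single one that was already in mdadm.conf, one ARRAY line per array (the UUIDs below are placeholders, yours will differ):

ARRAY /dev/md/6  metadata=1.2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx name=debian:6
ARRAY /dev/md/23 metadata=1.2 UUID=yyyyyyyy:yyyyyyyy:yyyyyyyy:yyyyyyyy name=debian:23

After a reboot, lsblk and cat /proc/mdstat should show the arrays under their original names again.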