
I am still a novice in the world of storage. I am working on a project to migrate our backup infrastructure from Solaris to Linux.

As part of that, I have rebuilt a server on Linux (RHEL 7). It has 2 root disks (300 GB each) and 30 data disks (3 TB each) from 2 Xyratex shelves (24 + 6). One of the two shelves (the one with 24 disks) has multipath configured.

And since the Linux kernel detects each shared drive once through each path, the fdisk -l output lists 102 3 TB disks instead of the actual 30.

$ fdisk -l | grep '3000.6 GB' | wc -l
102

Out of which:

48 are multipath devices.

$ fdisk -l | grep '/dev/mapper/mpath*' | grep -v '3000.6' | wc -l
48

One entry for example is:

Disk /dev/mapper/mpathb: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors

And 30 are the actual disks (24+6):

Disk /dev/sdaa: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdab: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdac: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdad: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdaf: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdae: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdai: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdah: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdam: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdal: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdao: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdan: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdaj: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdav: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdaw: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdax: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdaz: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sday: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdba: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdbc: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdbb: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdaq: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdbd: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdap: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdag: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdau: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdas: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdar: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdak: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdat: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors

And 24 are redundant entries (the second path to each of the 24 multipathed disks):

Disk /dev/sde: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdd: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdf: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdg: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdh: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdk: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdl: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdc: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdo: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdi: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdm: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sds: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdp: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdu: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdw: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdq: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdr: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdn: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdt: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdv: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdy: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdx: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdz: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdj: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
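The per-path duplicates can be collapsed by keying on WWID instead of device name, since two paths to the same physical disk share one WWID. The pipeline below just demonstrates the idea on canned sample data (the device names and WWIDs are made up); on a live system the same columns can come from e.g. `lsblk -d -n -o NAME,WWN,SIZE`.

```shell
# Two paths to the same physical disk share one WWID, so a unique
# sort on the WWID column collapses them to one line per disk.
# Sample data is made up for illustration.
printf '%s\n' \
  'sda 3600508b40006bb0000 3000.6GB' \
  'sdq 3600508b40006bb0000 3000.6GB' \
  'sdb 3600508b400091c0000 3000.6GB' \
  | sort -k2 -u | wc -l        # prints 2 (two physical disks)
```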

Now I am confused about the right way to configure the RAID.

I was planning the RAID configuration below (reserving 6 of the 30 disks as spares):

  • Create 3 RAID 6 groups of 8 disks each (6 data disks plus 2 parity).
  • Create 1 RAID 0 on top of the three RAID 6 groups, so they all appear as one unit (effectively RAID 60).
  • Then create a PV on top of the RAID 0 device and use LVM.
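As a quick sanity check on that layout: each 8-disk RAID 6 group gives 6 disks' worth of usable space, and the RAID 0 stripes three such groups, so the expected capacity works out as below (pure arithmetic, nothing touched on disk):

```shell
disks_per_group=8
parity_per_group=2   # RAID 6 reserves two disks' worth of space for parity
groups=3
disk_tb=3
usable_tb=$(( (disks_per_group - parity_per_group) * groups * disk_tb ))
echo "${usable_tb} TB usable"   # prints: 54 TB usable (before FS overhead)
```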

I have been reading about using multipath with mdadm, but it is still a little unclear to me how I should proceed. Any thoughts on this?
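For what it's worth, here is a rough sketch of how such a layout might be built with mdadm on top of the multipath maps. All device names (mpatha…mpathx, md0…md10, backupvg) are illustrative placeholders, so verify the real members with multipath -ll before running anything destructive:

```shell
# Build each RAID 6 group from the multipath maps, never from the
# underlying /dev/sd* path devices (device names are illustrative).
mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/mapper/mpath[a-h]
mdadm --create /dev/md1 --level=6 --raid-devices=8 /dev/mapper/mpath[i-p]
mdadm --create /dev/md2 --level=6 --raid-devices=8 /dev/mapper/mpath[q-x]

# Stripe the three RAID 6 arrays together (i.e. RAID 60).
mdadm --create /dev/md10 --level=0 --raid-devices=3 /dev/md0 /dev/md1 /dev/md2

# Record the arrays for assembly at boot, and rebuild the initramfs so
# multipath maps exist before md assembly runs.
mdadm --detail --scan >> /etc/mdadm.conf
dracut -f

# LVM on top of the striped device.
pvcreate /dev/md10
vgcreate backupvg /dev/md10
```

These commands are destructive provisioning steps against real hardware, so they are shown as a sketch only, not something to paste verbatim.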

techraf
Ram Kumar
  • I'm not sure how to set all this up off the top of my head, but I am about 99% confident that when you do, you'll need to configure it using the multipath device nodes in `/dev/mapper/mpath*`, and ensure that when you boot, your raid configuration happens after your multipath device configuration. You should not touch the individual `/dev/sd*` device nodes. – DerfK Sep 02 '16 at 20:54
  • Can you provide the output of `multipath -ll` and `cat /proc/partitions`? Also, can you briefly expand the overall architecture of your two Xyratexs? How do you connect to them? via a single or dual-controllers? On the linux host side, how many, and which, HBAs you have? In other words, as you're referring to "multipath", can you better present the "multipath" architecture you have, between the Linux host and the two storages? A simple schema (even scratched with a pencil and snapshotted with your mobile phone) would be great! – Damiano Verzulli Sep 02 '16 at 21:17
  • Is there any reason you're not using ZFS for this? – ewwhite Sep 03 '16 at 01:46

0 Answers