
I'm attempting to build an array using mdadm. The array has 512-byte (not 512-kibibyte) stripes. Because mdadm accepts its chunk-size argument in kibibytes, not bytes, I have been unable to build this array correctly.

mdadm --build -n2 -c512 -lraid0 /dev/md0 /dev/sdb /dev/sdc

Builds the array with chunk size 512 KiB.

mdadm --build -n2 -c.5 -lraid0 /dev/md0 /dev/sdb /dev/sdc

Returns an error.

I would normally just rebuild the RAID manually, but this one is 4 TB. I could rebuild it onto LVM, but I was hoping to avoid that. Any ideas?

  • Glancing through the mdadm source, it's hard to say. Internally, the size is represented as an int counting 512-byte sectors for a little while, but it is checked to make sure it's at least 4 KiB, then divided by 2 to convert to a number of KiB. On the actual `ioctl` side, there are apparently two versions: before md 0.90.0, it used a "chunk size factor" where 0 = 4 KiB, 1 = 8 KiB (2^x * 4096; see the sketch after these comments), so it cannot be done on older kernels at all. Newer md uses a number of bytes, but you'd have to look at the kernel to see if it supports 512-byte chunks, then patch mdadm to remove the checks and conversions that would stop it. – DerfK Jul 15 '11 at 23:19
  • Thank you so much for taking the time to sleuth this out. I checked the kernel source in the md driver and it would indeed be possible, because everything is referenced by number of sectors as far as I can tell. Thank you very much, I'll play with it and see what I come up with. If you copy this response as an answer I'll gladly accept. – OmnipotentEntity Jul 16 '11 at 01:42
  • @DerfK, 👍 (thumbs up) – poige Jul 16 '11 at 05:06
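
A quick illustration of the old-style "chunk size factor" arithmetic described in the comment above: factor x maps to 2^x * 4096 bytes, so the smallest chunk the pre-0.90.0 ioctl can express is 4 KiB, and a 512-byte chunk is simply not representable there. This is only a sketch of that formula, not mdadm output:

    # factor x -> chunk size of 4096 << x bytes (4 KiB, 8 KiB, 16 KiB, ...)
    for x in 0 1 2 3; do
        echo "factor $x -> $((4096 << x)) bytes"
    done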

2 Answers


I had the same problem and I solved it by means of a little Linux FUSE program I wrote. It's named xraid and I put it on Sourceforge.

For assembling your RAID:

  • Download and compile xraid
  • Run it:

    mkdir mnt
    ./xraid mnt 512 /dev/sdb /dev/sdc

  • You should now be able to access your RAID under mnt/xraid.
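
If the reassembled array holds a filesystem the kernel can mount, a possible next step is to attach that file to a read-only loop device. This is only a sketch: it assumes xraid exposes the joined array as a single regular file at mnt/xraid as described above, and that the array contains a bare filesystem (if it carries a partition table instead, losetup's --offset option would be needed as well):

    # Attach the reassembled image read-only; losetup prints the loop device it chose
    LOOPDEV=$(losetup --find --show --read-only mnt/xraid)
    mkdir -p /mnt/recovered
    mount -o ro "$LOOPDEV" /mnt/recovered
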
Guy
  • Thanks for this. I eventually broke down and used a handmade python script to access the data and copied it to another RAID with a sane block size, but if I have this issue in the future I'll be sure to take a look at it. – OmnipotentEntity Aug 21 '14 at 14:29

I had to "de-RAID-0" a raw array from a 2-disk enclosure today. This is the script I used to build a 2 TB disk image file from two 1 TB drives with 1 KiB (1024-byte) stripes. Modify as needed. If you have more than two RAID-0 disks, you'll need to duplicate and modify the line that advances the output by one block for each added disk; the extra counter is already included on that line.

Formatted (can use ksh instead of Bash if preferred):

#!/bin/bash

A=0; C=0; S=0; ERR=0
STRIPE=1024  # Stripe size in bytes (not KiB)
FN=raid_recovery.bin  # Output file name

while [ $ERR -eq 0 ]
  do

  # Copy from first device
  dd if=/dev/sdc of=$FN bs=$STRIPE seek=$C skip=$A count=1 conv=notrunc 2>/dev/null || ERR=1

  # Advance output by 1 block; copy from second device
  # For 3+ disks, copy-paste this line for every added disk and modify 'if=' for each
  [ $ERR -eq 0 ] && C=$((C + 1)) && dd if=/dev/sdk of=$FN bs=$STRIPE seek=$C skip=$A count=1 conv=notrunc 2>/dev/null || ERR=1

  # Update source, dest, and status delay counters
  A=$((A + 1)); C=$((C + 1)); S=$((S + 1))

  # Display current output position periodically
  [ $S -ge 256 ] && S=0 && echo -n $'\r'"$C blocks done"

done

One-liner version:

A=0; C=0; S=0; ERR=0; STRIPE=1024; FN=raid_recovery.bin; while [ $ERR -eq 0 ]; do dd if=/dev/sdc of=$FN bs=$STRIPE seek=$C skip=$A count=1 conv=notrunc 2>/dev/null || ERR=1; [ $ERR -eq 0 ] && C=$((C + 1)) && dd if=/dev/sdk of=$FN bs=$STRIPE seek=$C skip=$A count=1 conv=notrunc 2>/dev/null || ERR=1; A=$((A + 1)); C=$((C + 1)); S=$((S + 1)); [ $S -ge 256 ] && S=0 && echo -n $'\r'"$C blocks done"; done
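
Once the loop finishes, raid_recovery.bin should hold the interleaved contents of both drives. A quick sanity check before mounting anything, assuming the original array carried a partition table or a filesystem that the usual tools recognize:

    # Inspect the reconstructed image
    fdisk -l raid_recovery.bin
    file raid_recovery.bin
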
Jody Bruchon