2

My home Linux server has a four-drive software RAID 5 setup, where each SATA drive has two partitions, named sd[abcd][12]. /dev/md1 is built from sd[abcd]1, and /dev/md2 is built from sd[abcd]2. Each drive is 500GB, and each partition is 250GB. I use LVM to merge md1 and md2 into a single volume group.

What would be a good procedure to upgrade these drives to 1TB each? I have no more available SATA ports. I've considered pulling one of the drives out, replacing it with a new 1TB drive partitioned with two 250GB partitions and a 500GB partition, and rebuilding the array. Repeat for each drive, then create a new RAID 5 on sd[abcd]3. That seems "less than optimal": abusing the sync/recovery process doesn't feel like the right way to do this.

Does it make more sense to use an external USB enclosure, stick in a new 1TB disk, partition it, add it to the md1 and md2 arrays, resync, then remove one of the old disks from the array, repeating that for each disk?

There's really no good reason to have multiple arrays spread across the same disks, so if the process eliminated that, so much the better.

Thanks for your suggestions!

joev

4 Answers

4

First of all, partitioning the drives into several RAID arrays and merging them back with LVM makes very little sense.

USB is incredibly slow and CPU intensive, so attaching four drives in sequence and copying that much data will take ages. I would much rather just swap the drives one at a time.

My recipe for this would be (if your data is very important and you are very paranoid, you can add a USB drive as a hot spare during this operation, but it will take a lot more time; see the command sketch after the steps):

  1. Pull out one of your old drives and insert a new one. Partition it like the old drive, add its partitions back into md1 and md2, and wait for the resync to finish
  2. Repeat for the rest of the drives
  3. You now have four 1TB drives with 500GB allocated as two 250GB partitions, and 500GB unused space
  4. Expand one of the existing partitions from 250GB to 750GB so that you're using the entire disk
  5. Grow the raid containing the expanded partition using mdadm grow
  6. Expand your LVM physical volume so that the entire new size is in use
  7. Remove your 250GBx4 array PV from LVM. This will move your data off the small partitions over to the large partitions, so it will take some time
  8. Delete the 250GB partitions and repeat the growing process for the 750GB partitions

After this, you'll have four 1TB disks with one 1TB partition each, joined in a RAID5 with LVM on top. Perfect result.
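For reference, here is a minimal sketch of the commands involved, assuming the volume group is called vg0, the logical volume lv0, the filesystem is ext3, and md1 is the array that eventually gets retired (all of these names are placeholders; verify each step against your own layout before running anything):

    # Per old drive (example: sda): drop its partitions from both arrays,
    # swap in the 1TB disk, recreate sda1/sda2 on it, then re-add and resync
    mdadm /dev/md1 --fail /dev/sda1 --remove /dev/sda1
    mdadm /dev/md2 --fail /dev/sda2 --remove /dev/sda2
    # ...physically replace the drive and partition it...
    mdadm /dev/md1 --add /dev/sda1
    mdadm /dev/md2 --add /dev/sda2
    watch cat /proc/mdstat    # wait for the rebuild before touching the next drive

    # With all four 1TB drives in place: grow each sd[abcd]2 partition into the
    # free space (fdisk/parted), then grow the array and the PV sitting on it
    mdadm --grow /dev/md2 --size=max
    pvresize /dev/md2

    # Empty the small array and drop it from the volume group
    pvmove /dev/md1
    vgreduce vg0 /dev/md1
    pvremove /dev/md1
    mdadm --stop /dev/md1

    # Delete the sd[abcd]1 partitions, grow sd[abcd]2 to the full disk, then
    # repeat the grow/resize steps and finally extend the LV and filesystem
    mdadm --grow /dev/md2 --size=max
    pvresize /dev/md2
    lvextend -l +100%FREE /dev/vg0/lv0
    resize2fs /dev/vg0/lv0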

1

I recommend doing such work in a sandbox.
I wrote a longer article on how to set up such a sandbox with files instead of partitions: Can I "atomically" swap a raid5 drive in Linux software raid?
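
A minimal sketch of such a file-backed sandbox using loop devices (sizes and device names are just examples):

    # Create four small files to stand in for disks
    for i in 0 1 2 3; do dd if=/dev/zero of=disk$i.img bs=1M count=256; done

    # Attach them as loop devices
    for i in 0 1 2 3; do losetup /dev/loop$i disk$i.img; done

    # Build a throwaway RAID5 on the loop devices and practise the whole
    # swap/grow procedure on it before touching the real array
    mdadm --create /dev/md9 --level=5 --raid-devices=4 \
        /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3

    # Tear it down when finished
    mdadm --stop /dev/md9
    for i in 0 1 2 3; do losetup -d /dev/loop$i; done
    rm disk[0-3].img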

ThorstenS
1

You could do the resync-and-replace dance as you suggest, but once you're done, copy all the files from md1 & 2 onto md3, then nuke the md1 & 2 partitions and expand the md3 partition to use the new space. To do that, you'll want to make the first partition on the TB drives be part of md3 so you don't have to relocate the data later.
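
A rough sketch of that final step, assuming the big partitions end up as sd[abcd]3 and the old LVM volume is mounted at /mnt/old (placeholder names; adjust to your own layout):

    # New array on the 1TB drives' large partitions
    mdadm --create /dev/md3 --level=5 --raid-devices=4 \
        /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
    mkfs.ext3 /dev/md3
    mount /dev/md3 /mnt/new

    # Copy everything off the old md1/md2 volume, preserving permissions and links
    rsync -aH /mnt/old/ /mnt/new/

    # Afterwards the md1/md2 partitions can be deleted and md3 grown into the
    # freed space (mdadm --grow /dev/md3 --size=max, then resize the filesystem)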

pjz
0

If your current HD usage is less than 1TB, then I would be tempted to do the following (see the command sketch after the list):

  • pull one of the 500GB drives out and install a 1TB drive
  • copy everything to the 1TB drive
  • install the other 1TB drives
  • create a new degraded RAID5 + LVM on the other disks, partitioned the way you want
  • copy your data from the stand-alone 1TB drive to the degraded RAID5
  • add the stand-alone drive to the array.
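
A minimal sketch of the degraded-array step, assuming the stand-alone copy lives on sda1 and the three fresh drives are sdb/sdc/sdd (example names only; double-check them against your own system first):

    # Create a 4-device RAID5 with one slot deliberately left empty
    mdadm --create /dev/md0 --level=5 --raid-devices=4 \
        missing /dev/sdb1 /dev/sdc1 /dev/sdd1

    # LVM and a filesystem on top, then copy the data across
    pvcreate /dev/md0
    vgcreate vg0 /dev/md0
    lvcreate -l 100%FREE -n lv0 vg0
    mkfs.ext3 /dev/vg0/lv0
    mount /dev/vg0/lv0 /mnt/new
    rsync -aH /mnt/old/ /mnt/new/

    # Once the copy is verified, repartition the stand-alone drive and add it
    # into the empty slot; the array then rebuilds to full redundancy
    mdadm /dev/md0 --add /dev/sda1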
Zoredache
  • So what were the downvotes for? The procedure should work as long as he isn't using more than 1TB of the 1.5TB of space that he would have with a 4x500GB RAID5. – Zoredache May 04 '09 at 22:49