
Have a (32-bit) CentOS 5.6 file server with 2x1TB HDDs/ext3 in an MDADM RAID-1, as follows:

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
Device    Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   fd  Linux raid autodetect
/dev/sda2              14         144     1052257+  fd  Linux raid autodetect
/dev/sda3             145      121601   975603352+  fd  Linux raid autodetect

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
Device    Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1          13      104391   fd  Linux raid autodetect
/dev/sdb2              14         144     1052257+  fd  Linux raid autodetect
/dev/sdb3             145      121601   975603352+  fd  Linux raid autodetect

Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/md2             945048528 295908988 600359380  34% /
/dev/md0                101018     24028     71774  26% /boot
/dev/md1               1052160         -         -    - /swap

Installed (64-bit) CentOS 6 on a single 128GB SSD using LVM/ext4, which I plan to use for everything except /home; the 1TB RAID-1 will serve as /home instead (not interested in keeping anything outside of /home from these drives longer-term).

Surprisingly, I haven't found any examples of setting up a system with a single drive for /boot etc. and RAID-1 for /home; I would have expected this to be more common as SSD price/capacity improves.

Copying the essential data (the RAID /home) to an external backup, repartitioning the drives as a new RAID /home under LVM, and copying the data back seems like an option, but is there a better way to do this "in place", especially as there is no pre-existing /home partition on either system?

Not sure if the lack of info for either option is simply because neither is a good idea? Would really appreciate some opinions or advice. Thank you.


3 Answers


The CentOS partitioner is probably too bodgy to support it as part of the installation, but in theory there's no reason why you couldn't tell it "assemble these two devices as an MD RAID-1, don't format it, and mount it as /home in the new system". The Debian Installer handles that just fine, but I've always had my struggles with Anaconda's idea of a good partitioning time.

Practically, I'd just leave the 1TB drives alone during install, and install everything onto the SSD. Then, once that's complete, configure the machine to assemble and mount the MD device under /home. The latter bit's easy: one line in fstab. How to explain to CentOS that it should be assembling a RAID device I'll leave as an exercise for the reader, because no doubt it's not simple or automatic.
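A minimal sketch of those two steps, assuming the old drives now appear as /dev/sdb and /dev/sdc (the SSD usually claims /dev/sda) and the /home data lives on the old md2 array — check `cat /proc/mdstat` and `blkid` before running any of this:

```shell
# Assemble the existing mirror from its members.
# Device names here are assumptions -- verify with
# `mdadm --examine /dev/sdb3` before touching anything.
mdadm --assemble /dev/md2 /dev/sdb3 /dev/sdc3

# Record the array so it reassembles at boot
# (CentOS reads /etc/mdadm.conf).
mdadm --examine --scan >> /etc/mdadm.conf

# The one line of fstab, then mount it.
echo '/dev/md2  /home  ext3  defaults  1 2' >> /etc/fstab
mount /home
```

These commands need root and real block devices, so treat this as a template rather than something to paste blindly.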

Backups are important, in case you make a mess (or CentOS goes on a disk-wiping rampage), but there should be no reason why you should have to restore from backup unless catastrophe strikes.

womble
  • Thanks, a clean install and leaving the RAID drives alone was the original plan, but not quite so straightforward as expected... Due in part to wanting to be able to roll back to the previous install if needed. Editing fstab by hand has foo'd the install and although I can fix it, I'll bite the bullet and do a clean LVM install on the SSD and add the RAID as /home at the same time. Then restore from back-up. Long winded, but like previous upgrades, I'll know what I'm doing... – Kalimari Aug 23 '11 at 01:25

The best answer is to copy all of the data off and reinstall. You have filesystems sitting directly on top of block devices (virtual or not, it doesn't matter), and LVM needs space at the start of the device for the PV label and the VG/LV/PE metadata, so trying to set it up now would stomp on the superblock of your existing filesystems. Even if that weren't an issue, you'd have to repartition the drives and shift the filesystem image around (or live with three different PVs). It's possible to shrink the filesystem so it no longer takes up the entire volume, but you'd then have to shift all of its bytes to make room for the LVM metadata. It's also possible to merge all of your partitions by shifting bytes around, but it's crazy to do any of this without an external backup anyway.

My advice:

  • Make a complete backup.
  • Repartition both drives into two slices: a 256MB slice plus everything else.
  • Ignore sdX1 for now; that's just spare space in case you need to put /boot there someday.
  • Set up mdadm on sd[bc]2.
  • Make /dev/md0 an LVM PV.
  • Add that PV to a VG.
  • Make a home LV large enough to hold your existing data (around 350GB) and leave the rest unallocated for future LVs or snapshots. Grow the home LV as needed, then online-resize the filesystem.
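The steps above can be sketched roughly as follows; the device names (sdb/sdc), VG/LV names, and sizes are all assumptions, and this destroys everything on those disks, so only run it after the backup is verified:

```shell
# Two slices per drive: a small spare partition, then the rest.
# Device names are assumptions -- check /proc/partitions first.
parted -s /dev/sdb mklabel msdos \
  mkpart primary 1MiB 257MiB \
  mkpart primary 257MiB 100%
parted -s /dev/sdc mklabel msdos \
  mkpart primary 1MiB 257MiB \
  mkpart primary 257MiB 100%

# Mirror the big partitions.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb2 /dev/sdc2

# Layer LVM on top of the array; names are hypothetical.
pvcreate /dev/md0
vgcreate vg_home /dev/md0
lvcreate -L 350G -n home vg_home
mkfs.ext4 /dev/vg_home/home

# Later, to grow the LV and resize the filesystem online:
# lvextend -L +100G /dev/vg_home/home
# resize2fs /dev/vg_home/home
```

Again a template, not a paste-ready script: it needs root, real disks, and a double-check of every device name.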

With that setup you can also create LVs for /, swap, /tmp, /var, etc. and run without a separate boot disk.

Joshua Hoblitt
  • Thanks, confirms what I guess is the best option: back-up/restore. Originally planned to simply boot from single SSD, add mount point for the RAID drives. Having experimented with LVM a little, can see the benefits to using it across all drives. – Kalimari Aug 23 '11 at 01:06

You basically just need to make your new drive bootable; you can find instructions on that here:

http://www.cyberciti.biz/faq/linux-create-a-bootable-usb-pen/ and some more details on the specific steps here: http://wiki.centos.org/HowTos/CentOS5ConvertToRAID

After that you just need to copy over whatever data you want where, and set up your fstab so / is your SSD and /home is your RAID.
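For the fstab end of that, a sketch — the VG/LV names are hypothetical (the CentOS 6 installer typically generates its own), and UUIDs are safer than raw device names:

```
# Hypothetical names -- substitute your actual VG/LV and array device.
/dev/mapper/vg_fileserver-lv_root  /      ext4  defaults  1 1
/dev/md2                           /home  ext3  defaults  1 2
```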

polynomial
  • Thanks, SSD is capable of booting system already (should have been clearer on that point). Just need to figure out the best way to add existing /home data + RAID benefit back into the system. – Kalimari Aug 23 '11 at 00:53