I have two 500GB hard drives in my server. Currently there is no RAID configured, but CentOS 6 is installed (as a bare install for now). Is it possible to set up a software RAID-1 so that both drives are mirrored, without reinstalling the OS? I have access to KVM for emergencies, but I'd prefer to be able to do it all through SSH.
-
If this is a professional situation versus a learning opportunity or curiosity, I'd just reinstall and configure your software RAID correctly. – ewwhite Nov 09 '13 at 17:19
1 Answer
You can do this (though I never tried it myself, so test your KVM access first!):
First, decide whether to mirror entire disks (/dev/sdX) or just partitions, even if you only mirror a single partition spanning the full disk. In the examples below I assumed a whole-disk mirror.
mdadm --create /dev/md0 -n 2 -l 1 missing /dev/sdb
This creates a RAID-1 mirror with one disk missing.
Copy all the data from your first disk to the array.
Rsync might be useful for this. Exclude /proc and /dev in your copy.
Partitions might need to be created. There is not enough information in your post to indicate if this is the case or not.
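To make the copy step concrete: a minimal sketch of the rsync approach, assuming the new array is formatted and mounted at /mnt/md0 (that mount point is my assumption, not from the question). The runnable part below demonstrates the same exclude logic harmlessly on temporary directories instead of real disks:

```shell
# The real copy would look roughly like this (assumed mount point /mnt/md0):
#   mkfs.ext4 /dev/md0
#   mount /dev/md0 /mnt/md0
#   rsync -aAXH --exclude='/proc/*' --exclude='/dev/*' --exclude='/sys/*' / /mnt/md0/
#
# Harmless demonstration of the exclude behaviour on temp directories:
src=$(mktemp -d)
dst=$(mktemp -d)
mkdir -p "$src/etc" "$src/proc"
echo "real data"   > "$src/etc/motd"
echo "pseudo file" > "$src/proc/cpuinfo"
# Leading slash anchors the pattern at the transfer root, so the
# directory itself is created empty but its contents are skipped:
rsync -a --exclude='/proc/*' "$src/" "$dst/"
ls "$dst/etc"    # motd was copied
ls "$dst/proc"   # nothing: contents were excluded
```

The -aAXH flags in the commented command preserve permissions, ACLs, extended attributes, and hard links, which matters when cloning a root filesystem.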
Set up a proper boot on the new MD device. Keep an option in grub2 (or whatever your boot manager is) to fall back to the old disk in case it does not work.
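Note that stock CentOS 6 actually ships legacy GRUB (/boot/grub/grub.conf) rather than grub2. A hypothetical fallback setup could look like the fragment below; the kernel/initrd file names, the root= arguments, and the (hdX,Y) mappings are placeholders that must match your actual installation:

```
# /boot/grub/grub.conf -- illustrative sketch only
default=0
fallback=1
timeout=5

title CentOS (new RAID root on /dev/md0)
    root (hd1,0)
    kernel /boot/vmlinuz-2.6.32-358.el6.x86_64 ro root=/dev/md0
    initrd /boot/initramfs-2.6.32-358.el6.x86_64.img

title CentOS (fallback: old disk /dev/sda1)
    root (hd0,0)
    kernel /boot/vmlinuz-2.6.32-358.el6.x86_64 ro root=/dev/sda1
    initrd /boot/initramfs-2.6.32-358.el6.x86_64.img
```

With default=0 and fallback=1, GRUB tries the RAID entry first and falls back to the old disk if that entry fails to boot.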
Reboot to the new disk. Ignore the degraded array state warning.
Add the old disk to the array:

mdadm --manage /dev/md0 -a /dev/sda

and let it synchronise.
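You can follow the synchronisation with watch -n5 cat /proc/mdstat. The snippet below uses a captured sample of what the recovery line typically looks like (the exact figures are invented for illustration) and shows how to pull the progress out of it:

```shell
# Sample of /proc/mdstat output during a resync (figures are illustrative):
sample='md0 : active raid1 sda[2] sdb[1]
      5238720 blocks super 1.2 [2/1] [_U]
      [==>..................]  recovery = 12.6% (661504/5238720) finish=3.7min speed=20672K/sec'

# Extract just the progress percentage:
echo "$sample" | grep -o 'recovery = [0-9.]*%'   # -> recovery = 12.6%
```

When the sync finishes, the [_U] marker (one member missing) changes to [UU] (both members active).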
[Edit] Please do report back on this if you needed anything extra, e.g. a modprobe raid1 if CentOS did not come with the mirror module loaded by default. As written at the beginning of the answer: UNTESTED.
Actually tested stuff:
Step 1: Create the array
I created a VM with two 5 GB disks in VMware Workstation. I downloaded the CentOS 6.4 ISO and installed it on the first disk using a single partition. Maybe not the best way to partition a disk, but this is for a test only.
As you can see there is only one disk in use after booting:
[root@centOS-RAID-test etc]# mount
/dev/sda1 on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
[root@centOS-RAID-test etc]# cat fstab
#
# /etc/fstab
# Created by anaconda on Sun Nov 10 01:19:26 2013
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
# UUID=ccb58393-d92e-473f-ae8d-7b2d7231dae8 /  ext4  defaults  1 1
/dev/sda1               /          ext4    defaults        1 1
tmpfs                   /dev/shm   tmpfs   defaults        0 0
devpts                  /dev/pts   devpts  gid=5,mode=620  0 0
sysfs                   /sys       sysfs   defaults        0 0
proc                    /proc      proc    defaults        0 0
I made one change from the default: I switched the root entry from the UUID-based mount to /dev/sda1. I did this since I think it is easier to identify the disks this way. I rebooted after the change to make sure I did not somehow break the system.
Next, let's use mdadm.
[root@centOS-RAID-test ~]# curl ftp.pbone.net/mirror/ftp.centos.org/6.4/os/x86_64/Packages/mdadm-3.2.5-4.el6.x86_64.rpm > file
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  333k  100  333k    0     0   339k      0 --:--:-- --:--:-- --:--:-- 1044k
[root@centOS-RAID-test ~]# rpm -ivh file
warning: file: Header V3 RSA/SHA1 Signature, key ID c105b9de: NOKEY
Preparing...                ########################################### [100%]
   1:mdadm                  ########################################### [100%]
[root@centOS-RAID-test ~]# mdadm
Usage: mdadm --help
  for help
Ok, the mdadm command seems to be present and in my path. No need to rehash.
[root@centOS-RAID-test ~]# mdadm --create /dev/md0 -n 2 -l 1 missing /dev/sdb
mdadm: /dev/sdb appears to be part of a raid array:
    level=raid0 devices=0 ctime=Thu Jan  1 01:00:00 1970
mdadm: partition table exists on /dev/sdb but will be lost or
       meaningless after creating array
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@centOS-RAID-test ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb[1]
      5238720 blocks super 1.2 [2/1] [_U]
I am not sure why it thinks that this unused disk was part of a previous array, but the new md device successfully gets created without any 'Device or Resource busy' error.
Notice the name: md0. When I rebooted, this changed to md127.
To keep this consistent create /etc/mdadm.conf. I used
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 devices=/dev/sdb
as content. Some googling shows that this can be automated with mdadm --verbose --detail --scan > /etc/mdadm.conf.
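For reference, the lines that scan command emits have roughly this shape (the name and UUID below are placeholders, not taken from my test system -- yours will differ):

```
ARRAY /dev/md0 metadata=1.2 name=centOS-RAID-test:0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
```

Identifying the array by UUID rather than by member device (as in the hand-written line above) has the advantage that it keeps working if the disks are renumbered.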
Stopped updating since it is almost 3 AM. Will continue tomorrow.

-
I've given this a try, but I'm getting 'Device or Resource busy' when running the mdadm create command. This makes sense since /dev/sda has mounted partitions (/boot and /). Is there a workaround for this? – Alex Blundell Nov 09 '13 at 00:32
-
Use the `missing` part for the disk which is already active, and only use the empty unmounted second disk to initially create the array. In your case that might be /dev/sdb. – Hennes Nov 09 '13 at 00:34
-
Hmm.. I'm getting the same result for both devices. /dev/sdb is definitely the unmounted extra disk, so I've put that instead of sda, but I'm still getting device busy. It's not mounted either (I've checked /proc/mounts) – Alex Blundell Nov 09 '13 at 00:39
-