1

The second hard drive in my RAID1 developed bad sectors, so I got a replacement drive, pulled the drive with the bad sectors, and put the new one in. With the original working RAID1 drive still in the computer, the system failed to boot.

I manually copied everything over from the old drive via a GParted live CD. Still no booting.

I'm kind of scratching my head here: I can see that both drives have data on them, but I'm unable to get either of them to boot. I used an Ubuntu live CD and couldn't even manually mount either drive, which I thought was the really odd part.

Not sure where to go from here.

peterh
  • 4,953
  • 13
  • 30
  • 44
  • If I turn on the swap space on both drives via the live CD, the available swap space shows up as the total of the two drives. –  Dec 02 '09 at 01:10
  • Please elaborate on 'failed to boot'. Do you get a black screen, do you see the boot loader, do you get errors, or do you get a message about a missing operating system? – Zoredache Dec 02 '09 at 01:22
  • When booting up, the system hangs at "Verifying DMI Pool Data ........." forever. –  Dec 02 '09 at 04:50
  • how long did you wait at that point? – QuantumMechanic Nov 23 '11 at 17:39

2 Answers

1

The drive that failed was probably the one that had the boot sector written to it. Try booting with the live CD, mounting your assembled RAID 1 root partition under /mnt and your boot partition under /mnt/boot (if you have a separate boot partition), then running chroot /mnt grub-install hd0.
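
From an Ubuntu live CD, that sequence might look roughly like this (a sketch only; the device names /dev/md0 and /dev/sda1 are assumptions and will differ on your system):

    sudo mdadm --assemble --scan        # assemble any RAID arrays mdadm can find
    sudo mount /dev/md0 /mnt            # mount the assembled root filesystem
    sudo mount /dev/sda1 /mnt/boot      # only if /boot is a separate partition
    sudo mount --bind /dev /mnt/dev     # give the chroot access to the disk devices
    sudo chroot /mnt grub-install hd0   # reinstall the boot loader on the first disk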

womble
  • 96,255
  • 29
  • 175
  • 230
  • Both drives appear to have MBR partition tables on them according to Palimpsest, and both show the boot flag in GParted. When I look at the RAID itself from a live CD via Palimpsest, both drives have a "-" in the State column, and I am unable to add a drive to the RAID or create a new RAID. The RAID shows up as a drive when I look at it in the browser, but I was not able to get it to mount properly. –  Dec 02 '09 at 05:18
0

When I was first setting up a RAID-1 on Ubuntu 9 a couple of years ago and testing failure scenarios, I ran into something like this:

  • I had a working 2-disk RAID-1 array
  • I powered off the machine and unplugged the drive cable from one drive
  • Powered up.

When I did this, the boot would hang (it's been two years so I can't remember exactly where). Eventually (at least 5 minutes, maybe 10 or 15 minutes) it would drop me into the initramfs shell. At that point I could run mdadm to get the array going and finish booting.
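
From memory, getting the array going from that shell was roughly like this (a sketch only; the array name /dev/md0 and member /dev/sda1 are assumptions):

    # Assemble whatever arrays can be found; --run starts them even if degraded
    mdadm --assemble --scan --run
    # Or name the array and the surviving member explicitly
    mdadm --assemble --run /dev/md0 /dev/sda1
    # Then exit the shell so the boot can continue
    exit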

By contrast, if I did the following:

  • Working 2-disk RAID-1 array
  • With the machine up, ran mdadm to fail and remove a drive.
  • Powered down and unplugged that drive.
  • Powered up.

the system would boot fine. It turns out there was a "bug" (I put it in quotes because, IIRC, there was a lot of arguing about the pros and cons in bugzilla): Ubuntu was, by default, in a mode where it would not auto-assemble a degraded array. And if your root partition lives on that array, you can't boot (though eventually you'll be dumped into the initramfs shell).
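
If I remember right, that default could be changed on the Ubuntu releases of that era along these lines (hedged from memory; the exact file and option names may differ on your release):

    # Re-run the package configuration and answer "yes" to booting from a degraded RAID
    sudo dpkg-reconfigure mdadm
    # Or (as I recall) set the initramfs option directly and rebuild the initramfs
    echo "BOOT_DEGRADED=true" | sudo tee /etc/initramfs-tools/conf.d/mdadm
    sudo update-initramfs -u
    # A one-off workaround was the kernel boot parameter bootdegraded=true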

QuantumMechanic
  • 655
  • 6
  • 15