My upgrade to Karmic went well. I even got a message from the Palimpsest disk utility saying that one of the drives in my RAID1 had many bad sectors. I bought a same-sized drive from Newegg and replaced the failing one, then used Palimpsest to add the new drive to the RAID1. The rebuild took quite a while, and afterwards Palimpsest said everything was fine.
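From what I've read, the command-line equivalent of that drive swap would be something along these lines (sdX1 here is just a placeholder for the failed member, not my actual device name):

# Mark the failing member faulty and remove it from the array
sudo mdadm /dev/md0 --fail /dev/sdX1
sudo mdadm /dev/md0 --remove /dev/sdX1
# After physically swapping and partitioning the new drive,
# add it back; mdadm starts the resync on its own
sudo mdadm /dev/md0 --add /dev/sdX1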
sudo mdadm --misc -D /dev/md0
also said that both drives in the array were "active sync", so I felt pretty confident that I had successfully rebuilt the RAID. But when I looked at the drives with GParted, the first drive looked normal, while the new drive (the one supposedly added to the array successfully) showed its status as not mounted. So what more do I need to do to return this RAID to normal operation, or is it already there?
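I also know the kernel's own view of the array can be checked in /proc/mdstat, which should make it obvious whether the mirror is fully synced or still rebuilding:

# [UU] means both mirrors are active; [U_] or a recovery/resync
# progress line means the array is degraded or still rebuilding
cat /proc/mdstat
# Refresh it continuously during a rebuild
watch cat /proc/mdstat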
I tried to reboot with only the new drive connected and it crashed big time, so the mirror isn't really working on its own; I'm just not sure how to fix it from here.
I also tried rebuilding from terminal commands after manually setting up the drive with GParted, with the same result.
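The terminal commands I used for that rebuild were roughly along these lines (I'm reconstructing them from memory, so /dev/sdX stands in for the new drive):

# Add the freshly partitioned drive back into the degraded mirror;
# mdadm then kicks off the resync shown in the output below
sudo mdadm /dev/md0 --add /dev/sdX

Here is the current output of sudo mdadm --misc -D /dev/md0: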
/dev/md0:
        Version : 00.90
  Creation Time : Sun Jan 18 05:54:48 2009
     Raid Level : raid1
     Array Size : 482520192 (460.17 GiB 494.10 GB)
  Used Dev Size : 482520192 (460.17 GiB 494.10 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Sat Nov 7 13:46:53 2009
          State : active, recovering
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

 Rebuild Status : 1% complete

           UUID : f5ca3964:807ed60a:f652e973:155a9c45
         Events : 0.1132371

    Number   Major   Minor   RaidDevice   State
       0       8       2        0         active sync   /dev/sda2
       1       8       16       1         active sync   /dev/sdb
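If it would help with the diagnosis, I can also post the superblock info that each member reports, i.e. something like:

# Print the md superblock stored on each member device
sudo mdadm --examine /dev/sda2
sudo mdadm --examine /dev/sdb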