
My upgrade to Karmic went well. I even got a message from the Palimpsest disk utility saying that one of the drives in my RAID1 had many bad sectors. I purchased a same-sized drive from Newegg and replaced the one that was failing. I used Palimpsest to add the new drive to the RAID1; it took quite a while and then said everything was fine.

sudo mdadm --misc -D /dev/md0 also said that both drives in the array were "active sync", so I felt pretty confident that I had successfully rebuilt the RAID. But when I looked at the drives with GParted, the first drive looked normal, while the new, supposedly successfully added drive showed a status of not mounted. So what more do I need to do to return this RAID to normal operation, or is it already there?

I tried to reboot with the new drive only and it crashed big time, so the new drive isn't working on its own. I'm just not sure how to fix it from here.

I also tried to rebuild with terminal commands after manually setting up the drive with GParted, with the same result.

/dev/md0:
        Version : 00.90

  Creation Time : Sun Jan 18 05:54:48 2009
     Raid Level : raid1
     Array Size : 482520192 (460.17 GiB 494.10 GB)
  Used Dev Size : 482520192 (460.17 GiB 494.10 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent
    Update Time : Sat Nov  7 13:46:53 2009
          State : active, recovering
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
 Rebuild Status : 1% complete
           UUID : f5ca3964:807ed60a:f652e973:155a9c45
         Events : 0.1132371

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       16        1      active sync   /dev/sdb

1 Answer


I haven't used Palimpsest, but normally you have to recreate the partition table on the new disk before adding it to the array. Note that your mdadm output shows the whole disk /dev/sdb in the array, while the good disk contributes the partition /dev/sda2.

To copy the partitions, use:

sfdisk -d /dev/sda | sfdisk /dev/sdb

(sda being your good disk, and sdb the new disk).
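If you want to sanity-check the copy before touching the array, you can list both partition tables and compare them (assuming MBR-style tables, which is what sfdisk -d dumps):

sudo sfdisk -l /dev/sda

sudo sfdisk -l /dev/sdb

Both disks should show the same partition layout afterwards.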

Then you can add the new partition to the array with:

mdadm --manage /dev/md0 --add /dev/sdb1
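The re-sync then runs in the background. You can watch its progress with something like (assuming the array is /dev/md0, as in your output):

watch cat /proc/mdstat

sudo mdadm --detail /dev/md0

Wait until the rebuild reaches 100% and the state returns to clean/active before relying on the array.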

It would be better if you could post more details about your RAID setup (/proc/mdstat would be a good start).
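For example, something like the following would show the array status and the partitioning of both disks (device names assumed from your post):

cat /proc/mdstat

sudo fdisk -l

sudo mdadm --detail /dev/md0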
