
I created a software RAID-1 from an existing data partition using this howto. Both disks are 1TB USB disks; each already had two partitions, and I used the second partition from each (they are the same size).

So I simply repartitioned disk B with partition type fd (Linux RAID autodetect) and created the array:

mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdf2

Then I formatted it (ReiserFS), mounted it, and copied the data onto it.
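
From memory, those steps were roughly the following (the rsync source path is just a placeholder for where the data lived before):

mkfs.reiserfs /dev/md0                  # create the filesystem on the (still degraded) array
mount /dev/md0 /media/md0               # mount it
rsync -a /media/olddata/ /media/md0/    # copy the existing data over, preserving attributes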

I used the GNOME device kit(?) disk utility ("Laufwerksverwaltung", i.e. the drive management tool) for some of these steps. I'm not sure whether mixing it with the command line caused trouble.

I put this line in mdadm.conf:

ARRAY /dev/md0 level=raid1 num-devices=2 UUID=07e09d37:975bfef4:80073a9f:2aa04953
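
I believe this line could also have been generated instead of typed by hand, something like:

mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # appends ARRAY lines for all running arrays; on some distros the file is /etc/mdadm.conf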

and added this to fstab:

UUID=07e09d37-975b-fef4-8007-3a9f2aa04953 none  auto    nouser,noauto   0   0
/dev/md0    /media/md0  reiserfs    defaults    0   0

I enabled it somehow and it started rebuilding, which took a long time (many small files).
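
I think the "enabling" amounted to adding disk A's partition to the array, something like this (the USB device names change between reboots, so /dev/sdd2 here just stands for disk A's second partition):

mdadm /dev/md0 --add /dev/sdd2   # attach the second member; md then syncs it from the active disk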

I mounted it and checked the contents. Then I wanted to test removing a drive: I unmounted the array of course, switched off one disk, and mounted the array again; the contents were okay.
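
In hindsight, maybe I should have simulated the failure through mdadm instead of cutting power, something like:

mdadm /dev/md0 --fail /dev/sdd2      # mark the member as faulty
mdadm /dev/md0 --remove /dev/sdd2    # take it out of the array
mdadm /dev/md0 --add /dev/sdd2       # put it back; a resync then starts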

I'm not sure whether something went wrong there, but when I tested rebooting, the rebuild somehow started again.

And after everything had finally rebuilt, the array was still degraded. So I decided to stop it via the device manager and run the check. After reactivating it, it started rebuilding yet again.

What's the problem here? Please help me understand the processes in software RAID.

Here's some more info:

root@grooverunner:~# mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90
  Creation Time : Sat Apr 30 00:19:23 2011
     Raid Level : raid1
     Array Size : 452462592 (431.50 GiB 463.32 GB)
  Used Dev Size : 452462592 (431.50 GiB 463.32 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Sat Apr 30 21:16:15 2011
          State : clean, degraded, recovering
 Active Devices : 1
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 1

 Rebuild Status : 5% complete

           UUID : 07e09d37:975bfef4:80073a9f:2aa04953 (local to host grooverunner)
         Events : 0.1198

    Number   Major   Minor   RaidDevice State
       2       8       50        0      spare rebuilding   /dev/sdd2
       1       8       66        1      active sync   /dev/sde2

My related questions are:

  1. Does rebuilding always copy everything again?
  2. What's the opposite of mdadm --assemble?

1 Answer


Whenever an md RAID goes into a degraded state, it will require a rebuild, and without a write-intent bitmap a rebuild always re-syncs the entire disk.
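
If these full re-syncs bother you, you can add a write-intent bitmap once the array is clean; with one in place, md only re-syncs the blocks that changed while a member was missing. Something like:

mdadm --grow /dev/md0 --bitmap=internal   # store the bitmap inside the md superblock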

Did the rebuild from your drive-removal test finish before you rebooted? And what did it say when it was "still degraded" after the rebuild? If it's not making it out of the degraded state when a rebuild finishes, then that's your real issue. Wait for the rebuild to finish, then check the output of mdadm --detail or cat /proc/mdstat.
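
For comparison, a healthy two-disk RAID-1 should look roughly like this in /proc/mdstat (device names taken from your --detail output):

md0 : active raid1 sde2[1] sdd2[0]
      452462592 blocks [2/2] [UU]

The [UU] means both members are up; a degraded array shows [2/1] [_U] or similar instead.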

mdadm --stop is the opposite of assemble.
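
For example:

mdadm --stop /dev/md0                            # tear the array down
mdadm --assemble /dev/md0 /dev/sdd2 /dev/sde2    # bring it back up from its members
mdadm --assemble --scan                          # or let mdadm.conf supply the details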
