On a system where I had been running openSUSE 11.1 for some time, I performed a fresh installation of SLES 11 SP1. The system uses a software RAID 5 array, on top of which LVM is set up with a single logical volume of about 2.5 TB that is mounted at /data.

The problem is that SLES 11 SP1 does not recognize the software RAID properly, so I cannot mount the LVM volume anymore.
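
For reference, this is roughly how I would activate and mount the volume by hand (the logical volume name "data" below is only a placeholder, since I cannot query the actual name at the moment):

$ vgchange -ay vg001
$ mount /dev/vg001/data /data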

Here is the output of vgdisplay and pvdisplay:

$ vgdisplay
--- Volume group ---
VG Name               vg001
System ID
Format                lvm2
Metadata Areas        1
Metadata Sequence No  2
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                1
Open LV               0
Max PV                0
Cur PV                1
Act PV                1 
VG Size               2.73 TB
PE Size               4.00 MB
Total PE              715402
Alloc PE / Size       715402 / 2.73 TB
Free  PE / Size       0 / 0
VG UUID               Aryj93-QgpG-8V1S-qGV7-gvFk-GKtc-OTmuFk

$ pvdisplay
--- Physical volume ---
PV Name               /dev/md0
VG Name               vg001
PV Size               2.73 TB / not usable 896.00 KB
Allocatable           yes (but full)
PE Size (KByte)       4096
Total PE              715402
Free PE               0
Allocated PE          715402
PV UUID               rIpmyi-GmB9-oybx-pwJr-50YZ-GQgQ-myGjhi

The PE information suggests that the size of the volume is recognized, but the physical volume itself is not accessible.
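
As a next diagnostic step I can think of scanning for the logical volumes directly with the standard LVM2 tools (just the commands as a sketch; I have not included their output here):

$ lvscan
$ lvdisplay vg001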

The software RAID itself seems to be running fine. It is assembled from the following mdadm.conf; below that is the mdadm diagnostic output for the md0 device and for each of the devices used in the assembly:

$ cat /etc/mdadm.conf
DEVICE partitions
ARRAY /dev/md0  auto=no level=raid5 num-devices=4 UUID=a0340426:324f0a4f:2ce7399e:ae4fabd0
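
For completeness, assembling the array by hand from this configuration would look like the following (a sketch only; on this system the array does come up by itself at boot):

$ mdadm --assemble /dev/md0 --uuid=a0340426:324f0a4f:2ce7399e:ae4fabd0 /dev/sd[bcde]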

$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdb[0] sde[3] sdd[2] sdc[1]
  2930287488 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

unused devices: <none>

$ mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Tue Oct 27 13:04:40 2009
     Raid Level : raid5
     Array Size : 2930287488 (2794.54 GiB 3000.61 GB)
  Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Mon Feb 27 14:55:46 2012
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : a0340426:324f0a4f:2ce7399e:ae4fabd0
         Events : 0.20

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       3       8       64        3      active sync   /dev/sde


$ mdadm --examine /dev/sdb
/dev/sdb:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : a0340426:324f0a4f:2ce7399e:ae4fabd0
  Creation Time : Tue Oct 27 13:04:40 2009
     Raid Level : raid5
  Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
     Array Size : 2930287488 (2794.54 GiB 3000.61 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0

    Update Time : Mon Feb 27 14:55:46 2012
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 2b5182e8 - correct
         Events : 20

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     0       8       16        0      active sync   /dev/sdb

   0     0       8       16        0      active sync   /dev/sdb
   1     1       8       32        1      active sync   /dev/sdc
   2     2       8       48        2      active sync   /dev/sdd
   3     3       8       64        3      active sync   /dev/sde

$ mdadm --examine /dev/sdc
/dev/sdc:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : a0340426:324f0a4f:2ce7399e:ae4fabd0
  Creation Time : Tue Oct 27 13:04:40 2009
     Raid Level : raid5
  Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
     Array Size : 2930287488 (2794.54 GiB 3000.61 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0

    Update Time : Mon Feb 27 14:55:46 2012
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 2b5182fa - correct
         Events : 20

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     1       8       32        1      active sync   /dev/sdc

   0     0       8       16        0      active sync   /dev/sdb
   1     1       8       32        1      active sync   /dev/sdc
   2     2       8       48        2      active sync   /dev/sdd
   3     3       8       64        3      active sync   /dev/sde

$ mdadm --examine /dev/sdd
/dev/sdd:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : a0340426:324f0a4f:2ce7399e:ae4fabd0
  Creation Time : Tue Oct 27 13:04:40 2009
     Raid Level : raid5
  Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
     Array Size : 2930287488 (2794.54 GiB 3000.61 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0

    Update Time : Mon Feb 27 14:55:46 2012
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 2b51830c - correct
         Events : 20

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     2       8       48        2      active sync   /dev/sdd

   0     0       8       16        0      active sync   /dev/sdb
   1     1       8       32        1      active sync   /dev/sdc
   2     2       8       48        2      active sync   /dev/sdd
   3     3       8       64        3      active sync   /dev/sde

$ mdadm --examine /dev/sde
/dev/sde:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : a0340426:324f0a4f:2ce7399e:ae4fabd0
  Creation Time : Tue Oct 27 13:04:40 2009
     Raid Level : raid5
  Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
     Array Size : 2930287488 (2794.54 GiB 3000.61 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0

    Update Time : Mon Feb 27 14:55:46 2012
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 2b51831e - correct
         Events : 20

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     3       8       64        3      active sync   /dev/sde

   0     0       8       16        0      active sync   /dev/sdb
   1     1       8       32        1      active sync   /dev/sdc
   2     2       8       48        2      active sync   /dev/sdd
   3     3       8       64        3      active sync   /dev/sde

The only suspicion I have concerns a /dev/md0p1 partition that is created automatically after boot - this does not look right. It seems to be treated as another software RAID device, although to me it looks like it covers the parity area of the md0 RAID device:

$ mdadm --detail /dev/md0p1
/dev/md0p1:
        Version : 0.90
  Creation Time : Tue Oct 27 13:04:40 2009
     Raid Level : raid5
     Array Size : 976752000 (931.50 GiB 1000.19 GB)
  Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Mon Feb 27 15:43:00 2012
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : a0340426:324f0a4f:2ce7399e:ae4fabd0
         Events : 0.20

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       3       8       64        3      active sync   /dev/sde
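
One way to verify this suspicion would be to look at the partition table the kernel sees on md0, and at how the kernel lists the block devices (commands only as a sketch, output omitted):

$ parted /dev/md0 unit s print
$ cat /proc/partitions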

In the partitioner of SLES's YaST administration tool, the software RAID device is listed under the general hard disks section, not under the RAID section. The md0p1 partition shows up in the partition table of the md0 disk.
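
Since YaST treats md0 like a plain disk, I wonder whether LVM ends up scanning /dev/md0p1 instead of /dev/md0. If so, the device filter in /etc/lvm/lvm.conf might be relevant; restricting the scan to the whole md device would look roughly like this (an untested sketch, not my current configuration):

filter = [ "a|^/dev/md0$|", "r|^/dev/md0p|", "a|.*|" ]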

Is the software RAID device not recognized correctly by the operating system, or is it rather an issue with the LVM configuration?

Any ideas on how to solve this?

Bernhard