
I just set up one of our servers with CentOS 6.7. It's a machine that hasn't been in use for at least a year. I've had to set it up a couple of times, and every time one of the problems was with the storage RAID on the machine. It's set up as RAID 10, but CentOS seems to have some problem with it.

I have very little experience dealing with RAID on Linux systems, and I'm completely stuck.

Here are the parts of the output from /var/log/messages that I believe to be relevant.

I can execute any commands that you need to get further information; some examples of what I can run are sketched after the log below.

ata8: port disabled. ignoring.
ata7.00: ATAPI: SONY    DVD RW AW-Q170A, 1.73, max UDMA/66
ata7.00: configured for UDMA/66
scsi 6:0:0:0: CD-ROM            SONY     DVD RW AW-Q170A  1.73 PQ: 0 ANSI: 5
sd 0:0:0:0: [sda] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB)
sd 0:0:0:0: [sda] 4096-byte physical blocks
sd 1:0:0:0: [sdb] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB)
sd 1:0:0:0: [sdb] 4096-byte physical blocks
sd 1:0:0:0: [sdb] Write Protect is off
sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
sd 2:0:0:0: [sdc] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
sd 2:0:0:0: [sdc] 4096-byte physical blocks
sd 1:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
sd 2:0:0:0: [sdc] Write Protect is off
sd 2:0:0:0: [sdc] Mode Sense: 00 3a 00 00
sd 2:0:0:0: [sdc] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
 sdc:
sd 0:0:0:0: [sda] Write Protect is off
sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
sd 3:0:0:0: [sdd] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
sd 3:0:0:0: [sdd] Write Protect is off
sd 3:0:0:0: [sdd] Mode Sense: 00 3a 00 00
sd 3:0:0:0: [sdd] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
 sdb:
 sda:
 sdd:
sd 4:0:0:0: [sde] 976773168 512-byte logical blocks: (500 GB/465 GiB)
sd 4:0:0:0: [sde] 4096-byte physical blocks
sd 4:0:0:0: [sde] Write Protect is off
sd 4:0:0:0: [sde] Mode Sense: 00 3a 00 00
sd 4:0:0:0: [sde] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
sd 5:0:0:0: [sdf] 976773168 512-byte logical blocks: (500 GB/465 GiB)
sd 5:0:0:0: [sdf] Write Protect is off
sd 5:0:0:0: [sdf] Mode Sense: 00 3a 00 00
sd 5:0:0:0: [sdf] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
 sde:
 sdf: sde1 sde2
sd 4:0:0:0: [sde] Attached SCSI disk
 unknown partition table
sd 3:0:0:0: [sdd] Attached SCSI disk
 unknown partition table
sd 2:0:0:0: [sdc] Attached SCSI disk
 unknown partition table
sd 0:0:0:0: [sda] Attached SCSI disk
 unknown partition table
sd 1:0:0:0: [sdb] Attached SCSI disk
 sdf1 sdf2
sd 5:0:0:0: [sdf] Attached SCSI disk
sr0: scsi3-mmc drive: 48x/48x writer cd/rw xa/form2 cdda tray
Uniform CD-ROM driver Revision: 3.20
sr 6:0:0:0: Attached scsi CD-ROM sr0
dracut: Scanning for dmraid devices ddf1_BOOT
dracut: Found dmraid sets:
dracut: ddf1_BOOT ddf1_STORAGE
dracut: Activating ddf1_BOOT
dracut: ERROR: ddf1: wrong # of devices in RAID set "ddf1_STORAGE" [1/2] on /dev/sdd
dracut: ERROR: ddf1: wrong # of devices in RAID set "ddf1_STORAGE" [1/2] on /dev/sda
dracut: ERROR: ddf1: wrong # of devices in RAID set "ddf1_STORAGE" [1/2] on /dev/sdc
dracut: ERROR: ddf1: wrong # of devices in RAID set "ddf1_STORAGE" [1/2] on /dev/sdb
device-mapper: ioctl: device doesn't appear to be in the dev hash table.
dracut: RAID set "ddf1_BOOT" was activated
dracut: RAID set "ddf1_BOOT" was not activated
dracut: Scanning for dmraid devices ddf1_BOOT
dracut: Found dmraid sets:
dracut: ddf1_BOOT ddf1_STORAGE
dracut: Activating ddf1_BOOT
dracut: ERROR: ddf1: wrong # of devices in RAID set "ddf1_STORAGE" [1/2] on /dev/sdd
dracut: ERROR: ddf1: wrong # of devices in RAID set "ddf1_STORAGE" [1/2] on /dev/sda
dracut: ERROR: ddf1: wrong # of devices in RAID set "ddf1_STORAGE" [1/2] on /dev/sdc
dracut: ERROR: ddf1: wrong # of devices in RAID set "ddf1_STORAGE" [1/2] on /dev/sdb
device-mapper: ioctl: device doesn't appear to be in the dev hash table.
dracut: RAID set "ddf1_BOOT" already active
dracut: RAID set "ddf1_BOOT" was not activated
dracut: Scanning devices dm-2  for LVM logical volumes vg_backup2/lv_swap vg_backup2/lv_root
dracut: inactive '/dev/vg_backup2/lv_var' [375.14 GiB] inherit
dracut: inactive '/dev/vg_backup2/lv_root' [20.00 GiB] inherit
dracut: inactive '/dev/vg_backup2/lv_home' [20.00 GiB] inherit
dracut: inactive '/dev/vg_backup2/lv_swap' [30.00 GiB] inherit
dracut: inactive '/dev/vg_backup2/lv_log' [20.00 GiB] inherit
EXT4-fs (dm-3): mounted filesystem with ordered data mode. Opts:
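
In case it helps, here is a sketch of the kind of commands I can run to gather more information. I'm assuming the usual CentOS 6 tools (lsscsi, mdadm, dmraid) are installed:

lsscsi                 # list the disks the kernel sees
cat /proc/mdstat       # Linux software RAID (md) status
dmraid -s              # RAID sets dmraid finds (dracut uses dmraid for the ddf1 sets above)
dmesg | grep -i raid   # RAID-related kernel messages from this boot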

Anyone here able/willing to assist?

EDIT: It's an HP ProLiant ML150 G3, and here is a link to the full log.

It has six hard drives; the two 500 GB drives are set up as RAID 1 for the operating system.

[0:0:0:0]    disk    ATA      ST2000DM001-1ER1 CC25  /dev/sda 
[1:0:0:0]    disk    ATA      ST2000DM001-1ER1 CC25  /dev/sdb 
[2:0:0:0]    disk    ATA      ST1000DM003-1CH1 CC49  /dev/sdc 
[3:0:0:0]    disk    ATA      WDC WD10EALX-009 15.0  /dev/sdd 
[4:0:0:0]    disk    ATA      ST500DM002-1BD14 KC45  /dev/sde 
[5:0:0:0]    disk    ATA      WDC WD5003ABYX-0 01.0  /dev/sdf

EDIT 2: dmesg output

EDIT 3: I tried deleting the RAID completely and then recreating it, without any results. The exact same thing happens.
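
For what it's worth, here is a rough sketch of how I could check whether the drives still carry old DDF (dmraid) metadata from a previous configuration. I'm assuming blkid, dmraid and mdadm behave as on a stock CentOS 6 install:

blkid /dev/sd[a-d]             # leftover members typically show TYPE="ddf_raid_member"
dmraid -r                      # every block device that still carries dmraid metadata
mdadm --examine /dev/sd[a-d]   # mdadm can also report DDF/foreign metadata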

asked by grimurd
  • What does `cat /proc/mdstat` on the running system say? – womble Oct 28 '15 at 09:34
  • @womble Personalities : unused devices: <none> – grimurd Oct 28 '15 at 09:38
  • You skipped over some relevant messages immediately prior to this. You also should describe the hardware in use. – Michael Hampton Oct 28 '15 at 12:27
  • @MichaelHampton I edited my original question with the information you requested. – grimurd Oct 28 '15 at 13:11
  • It looks like your RAID controller has somehow gotten unconfigured and gone into a JBOD mode. I would look at that first. – Michael Hampton Oct 28 '15 at 13:16
  • @MichaelHampton When I boot into the RAID controller it tells me that the RAID is fine. I even verified it yesterday and it was successful. Could it be something happening during the boot process that makes it act this way? – grimurd Oct 28 '15 at 13:37
  • It could just be that your RAID controller does not like the differing drives: you have six drives of five different models, and only matching drives are used in the OS RAID 1. The fact that the controller works for the RAID 1 on the OS drives but not for the other set is what seems strange, but hardware RAID is set up before the OS, so it should behave the same on Linux as on Windows. Unless you're using software RAID, in which case please update your tags to include software-raid. – Martin Barker Nov 02 '15 at 17:07
  • It's a hardware RAID. I've tagged the question to indicate that's the case. – grimurd Nov 02 '15 at 19:46
  • What kind of drives do you have, please? SAS/SATA/ATA/PATA? – ostendali Nov 03 '15 at 15:30
  • Can you try booting with `nodmraid` in your kernel command line under GRUB? It looks like you might have some leftover dmraid metadata on the drives that is causing you issues completely separate from your hardware RAID. – Kassandry Nov 06 '15 at 00:26
  • @Kassandry Booting with `nodmraid` worked like a charm. Thanks everyone for your suggestions. – grimurd Nov 12 '15 at 16:36
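
For anyone who hits the same symptoms: below is a minimal sketch of how the `nodmraid` workaround can be made permanent on CentOS 6, plus an optional way to remove the stale DDF metadata itself. The GRUB path is the stock CentOS 6 location, the kernel line shown is only an illustration, and the dmraid erase step should only be run once you are certain the metadata really is leftover and the disks are not part of an active set:

# 1) Make nodmraid permanent: append it to the kernel line(s) in GRUB legacy
vi /boot/grub/grub.conf
#    e.g.  kernel /vmlinuz-2.6.32-... ro root=/dev/mapper/vg_backup2-lv_root ... nodmraid

# 2) Optionally erase the stale DDF metadata so dracut stops trying to assemble it
dmraid -r                 # confirm which disks still carry the old metadata
dmraid -r -E /dev/sdX     # erase it, one disk at a time (/dev/sdX is a placeholder)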

0 Answers