We have a pair of Sun Fire V20z servers (with LSI MPT RAID) that have each had one drive in their mirrors die.

bash-2.05# cat /etc/release
                        Solaris 9 4/04 s9x_u6wos_08a x86
       Copyright 2004 Sun Microsystems, Inc.  All Rights Reserved.
                    Use is subject to license terms.
                         Assembled 22 March 2004

So that's the machine type established. Let's take a look at our array health:

# raidctl
RAID        Volume   RAID        RAID      Disk
Volume      Type     Status      Disk      Status
------------------------------------------------------
c1t0d0      IM       DEGRADED    c1t0d0    OK
                                 c1t1d0    FAILED

OK, simple enough. Since these machines aren't hot-swap, I powered one off and replaced the failed drive. While booting, the machine prints a warning that the array is degraded, and the kernel messages show this after the system comes up:

May 13 15:21:54 ns-2.vancouver.ipapp.com scsi: [ID 365881 kern.info] /pci@0,0/pci1022,7450@a/pci17c2,10@4 (mpt0):
May 13 15:21:54 ns-2.vancouver.ipapp.com        Rev. 8 LSI, Inc. 1030 found.
May 13 15:21:54 ns-2.vancouver.ipapp.com scsi: [ID 365881 kern.info]         /pci@0,0/pci1022,7450@a/pci17c2,10@4 (mpt0):
May 13 15:21:54 ns-2.vancouver.ipapp.com        mpt0 supports power management.
May 13 15:21:54 ns-2.vancouver.ipapp.com pcplusmp: [ID 637496 kern.info] pcplusmp:     pci1000,30 (mpt) instance 0 vector 0x1b ioapic 0x3 intin 0x3 is bound to cpu 0
May 13 15:22:02 ns-2.vancouver.ipapp.com scsi: [ID 365881 kern.info]     /pci@0,0/pci1022,7450@a/pci17c2,10@4 (mpt0):
May 13 15:22:02 ns-2.vancouver.ipapp.com        mpt0 Firmware version v1.3.27.0 (IM/IME)
May 13 15:22:02 ns-2.vancouver.ipapp.com scsi: [ID 365881 kern.info] /pci@0,0/pci1022,7450@a/pci17c2,10@4 (mpt0):
May 13 15:22:02 ns-2.vancouver.ipapp.com        mpt0: IOC Operational.
May 13 15:22:13 ns-2.vancouver.ipapp.com scsi: [ID 107833 kern.warning] WARNING: /pci@0,0/pci1022,7450@a/pci17c2,10@4 (mpt0):
May 13 15:22:13 ns-2.vancouver.ipapp.com        Volume 0 is degraded

So it looks like everything is perfectly normal so far. BUT now when I run raidctl, the array is GONE!

# raidctl
No RAID volumes found

Oh, and both disks are visible to the system:

# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
   0. c1t0d0 <DEFAULT cyl 34999 alt 2 hd 16 sec 128>
      /pci@0,0/pci1022,7450@a/pci17c2,10@4/sd@0,0
   1. c1t1d0 <DEFAULT cyl 34999 alt 2 hd 16 sec 128>
      /pci@0,0/pci1022,7450@a/pci17c2,10@4/sd@1,0

The system clearly knows the volume exists AND its status, but I cannot figure out how to add the replacement drive back into the mirror and fix the degraded array. Unix junkies, please help!
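For what it's worth, the only relevant forms I can find in raidctl(1M) are querying a disk and creating a volume. This is a sketch of what I assumed the workflow would be (c1t0d0 as the surviving primary, c1t1d0 as the new disk); I'm not certain the -c form is safe to run against a mirror that still holds live data, so I have not run it blindly:

# raidctl c1t0d0              (query RAID info for the disk at controller 1, target 0)
# raidctl -c c1t0d0 c1t1d0    (create an IM mirror: c1t0d0 primary, c1t1d0 secondary)

If -c simply re-creates the volume and resyncs onto c1t1d0 that would be exactly what I want, but the man page wording about member disks makes me nervous.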
