I have a RAID-Z2 pool with 6 × 4 TB drives. All of the drives have a bit over 40,000 hours of running time, and now they all seem to be degrading at the same time. The pool is degraded, and every drive is marked as degraded with "too many errors", but luckily no data has been lost so far.

        NAME        STATE     READ WRITE CKSUM
        File        DEGRADED     0     0     0
          raidz2-0  DEGRADED     0     0     0
            sda     DEGRADED     0     0     0  too many errors
            sdb     DEGRADED     0     0     0  too many errors
            sdc     DEGRADED     0     0     0  too many errors
            sdd     DEGRADED     0     0     0  too many errors
            sde     DEGRADED     0     0     0  too many errors
            sdf     DEGRADED     0     0     0  too many errors

I would like to build a new pool with RAID-Z1 and 3 × 6 TB drives, as I don't need all of the space in the original pool. My problem is that the old pool has 6 drives and the new pool will have 3, but my SAS controller only has 8 ports. So I would like to disconnect one disk from my RAID-Z2 pool, connect my 3 new drives, create a new pool with them, and then save my data by copying it to the new pool before the old pool fails.
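
Roughly what I had in mind (newpool and the device names sdg/sdh/sdi are just placeholders for whatever the new drives end up being called):

zpool offline File sdf
# pull sdf and connect the three 6 TB drives (the two spare ports plus the freed one)
zpool create newpool raidz1 sdg sdh sdi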

Is that possible? My thinking is that the old pool should still work with one disk missing, but when I tried disconnecting a disk I was unable to access any data in the old pool.

Anyone know how to solve this?

zpool status -v:

  pool: File
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://zfsonlinux.org/msg/ZFS-8000-9P
  scan: resilvered 6.82G in 0 days 00:04:00 with 0 errors on Sun Aug 23 21:21:15 2020
config:

        NAME        STATE     READ WRITE CKSUM
        File        DEGRADED     0     0     0
          raidz2-0  DEGRADED     0     0     0
            sda     DEGRADED     0     0     0  too many errors
            sdb     DEGRADED     0     0     0  too many errors
            sdc     DEGRADED     0     0     0  too many errors
            sdd     DEGRADED     0     0     0  too many errors
            sde     DEGRADED     0     0     0  too many errors
            sdf     DEGRADED     0     0     0  too many errors

errors: No known data errors

All disks report SMART status OK:

smartctl -H /dev/sda
smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.4.55-1-pve] (local build)
Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART Health Status: OK
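
The other five drives report the same. A quick loop to reproduce the check (assuming the drives still enumerate as sda through sdf):

for d in /dev/sd{a..f}; do
    echo "== $d =="
    smartctl -H "$d"
done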


syslog shows nothing for the disks:

root@boxvm:/var/log# cat syslog | grep sda
root@boxvm:/var/log#

dmesg output also seems fine:

dmesg | grep sda
[    8.997624] sd 1:0:0:0: [sda] Enabling DIF Type 2 protection
[    8.998488] sd 1:0:0:0: [sda] 7814037168 512-byte logical blocks: (4.00 TB/3.64 TiB)
[    8.998847] sd 1:0:0:0: [sda] Write Protect is off
[    8.998848] sd 1:0:0:0: [sda] Mode Sense: df 00 10 08
[    8.999540] sd 1:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
[    9.093385]  sda: sda1 sda9
[    9.096819] sd 1:0:0:0: [sda] Attached SCSI disk


dmesg | grep sdb
[    8.997642] sd 1:0:1:0: [sdb] Enabling DIF Type 2 protection
[    8.998467] sd 1:0:1:0: [sdb] 7814037168 512-byte logical blocks: (4.00 TB/3.64 TiB)
[    8.998828] sd 1:0:1:0: [sdb] Write Protect is off
[    8.998830] sd 1:0:1:0: [sdb] Mode Sense: df 00 10 08
[    8.999524] sd 1:0:1:0: [sdb] Write cache: disabled, read cache: enabled, supports DPO and FUA
[    9.087056]  sdb: sdb1 sdb9
[    9.090465] sd 1:0:1:0: [sdb] Attached SCSI disk


dmesg | grep sdc
[    8.997812] sd 1:0:2:0: [sdc] Enabling DIF Type 2 protection
[    8.998639] sd 1:0:2:0: [sdc] 7814037168 512-byte logical blocks: (4.00 TB/3.64 TiB)
[    8.998998] sd 1:0:2:0: [sdc] Write Protect is off
[    8.998999] sd 1:0:2:0: [sdc] Mode Sense: df 00 10 08
[    8.999692] sd 1:0:2:0: [sdc] Write cache: disabled, read cache: enabled, supports DPO and FUA
[    9.084259]  sdc: sdc1 sdc9
[    9.088030] sd 1:0:2:0: [sdc] Attached SCSI disk


dmesg | grep sdd
[    8.997932] sd 1:0:3:0: [sdd] Enabling DIF Type 2 protection
[    8.998761] sd 1:0:3:0: [sdd] 7814037168 512-byte logical blocks: (4.00 TB/3.64 TiB)
[    8.999120] sd 1:0:3:0: [sdd] Write Protect is off
[    8.999121] sd 1:0:3:0: [sdd] Mode Sense: df 00 10 08
[    8.999818] sd 1:0:3:0: [sdd] Write cache: disabled, read cache: enabled, supports DPO and FUA
[    9.103840]  sdd: sdd1 sdd9
[    9.107482] sd 1:0:3:0: [sdd] Attached SCSI disk


dmesg | grep sde
[    8.998017] sd 1:0:4:0: [sde] Enabling DIF Type 2 protection
[    8.998839] sd 1:0:4:0: [sde] 7814037168 512-byte logical blocks: (4.00 TB/3.64 TiB)
[    8.999234] sd 1:0:4:0: [sde] Write Protect is off
[    8.999235] sd 1:0:4:0: [sde] Mode Sense: df 00 10 08
[    8.999933] sd 1:0:4:0: [sde] Write cache: disabled, read cache: enabled, supports DPO and FUA
[    9.088282]  sde: sde1 sde9
[    9.091665] sd 1:0:4:0: [sde] Attached SCSI disk


dmesg | grep sdf
[    8.998247] sd 1:0:5:0: [sdf] Enabling DIF Type 2 protection
[    8.999076] sd 1:0:5:0: [sdf] 7814037168 512-byte logical blocks: (4.00 TB/3.64 TiB)
[    8.999435] sd 1:0:5:0: [sdf] Write Protect is off
[    8.999436] sd 1:0:5:0: [sdf] Mode Sense: df 00 10 08
[    9.000136] sd 1:0:5:0: [sdf] Write cache: disabled, read cache: enabled, supports DPO and FUA
[    9.090609]  sdf: sdf1 sdf9
[    9.094235] sd 1:0:5:0: [sdf] Attached SCSI disk

dmesg output for the SAS controller:

root@boxvm:/var/log# dmesg | grep mpt2
[    1.151805] mpt2sas_cm0: 64 BIT PCI BUS DMA ADDRESSING SUPPORTED, total mem (65793672 kB)
[    1.200012] mpt2sas_cm0: CurrentHostPageSize is 0: Setting default host page size to 4k
[    1.200023] mpt2sas_cm0: MSI-X vectors supported: 1
[    1.200024] mpt2sas_cm0:  0 1
[    1.200098] mpt2sas_cm0: High IOPs queues : disabled
[    1.200099] mpt2sas0-msix0: PCI-MSI-X enabled: IRQ 51
[    1.200100] mpt2sas_cm0: iomem(0x00000000fc740000), mapped(0x00000000629d5dd1), size(65536)
[    1.200101] mpt2sas_cm0: ioport(0x000000000000d000), size(256)
[    1.254826] mpt2sas_cm0: CurrentHostPageSize is 0: Setting default host page size to 4k
[    1.281681] mpt2sas_cm0: scatter gather: sge_in_main_msg(1), sge_per_chain(9), sge_per_io(128), chains_per_io(15)
[    1.281746] mpt2sas_cm0: request pool(0x0000000074c49e3e) - dma(0xfcd700000): depth(3492), frame_size(128), pool_size(436 kB)
[    1.289333] mpt2sas_cm0: sense pool(0x00000000693be9f4)- dma(0xfcba00000): depth(3367),element_size(96), pool_size(315 kB)
[    1.289400] mpt2sas_cm0: config page(0x00000000f6926acf) - dma(0xfcb9ad000): size(512)
[    1.289401] mpt2sas_cm0: Allocated physical memory: size(1687 kB)
[    1.289401] mpt2sas_cm0: Current Controller Queue Depth(3364),Max Controller Queue Depth(3432)
[    1.289402] mpt2sas_cm0: Scatter Gather Elements per IO(128)
[    1.333780] mpt2sas_cm0: LSISAS2008: FWVersion(20.00.07.00), ChipRevision(0x03), BiosVersion(00.00.00.00)
[    1.333781] mpt2sas_cm0: Protocol=(Initiator,Target), Capabilities=(TLR,EEDP,Snapshot Buffer,Diag Trace Buffer,Task Set Full,NCQ)
[    1.334527] mpt2sas_cm0: sending port enable !!
[    2.861790] mpt2sas_cm0: host_add: handle(0x0001), sas_addr(0x590b11c0155b3300), phys(8)
[    8.996385] mpt2sas_cm0: port enable: SUCCESS

  • As already mentioned, this doesn't look like disk errors. What does zpool status show? When was the last scrub? – batistuta09 Aug 27 '20 at 17:30
  • @batistuta09 thank you for your interest. I added the output of zpool status above. Initially the error showed up as two drives FAULTED and four drives listed as OK; now all drives are listed as DEGRADED, with none in the FAULTED state. – Northern Brewer Aug 28 '20 at 18:19
  • Are you just running into SATA bus timeouts? Your disks may not actually be failing. – ewwhite Aug 28 '20 at 19:38
  • @ewwhite I'm running SAS drives with a SAS HBA controller (LSI 9211-8i IT). Google tells me about many cases of SATA bus timeouts, but not so many regarding SAS. Do you know if the same problem can happen with the SAS interface? – Northern Brewer Aug 29 '20 at 08:07
  • It's not as frequent on SAS drives, but some `dmesg` or system logs detailing the disk errors would be useful to analyze. – ewwhite Aug 29 '20 at 14:07
  • @ewwhite added dmesg output. I can't find any failures related to the disks, and SMART health is also reporting no failures. This makes me suspect the controller card, but dmesg doesn't seem to list any errors relating to the card either. – Northern Brewer Aug 30 '20 at 08:10

1 Answer

If your pool is already failing, degrading it further is a really bad idea. If all of your disks are erroring out together, you most likely have a failing controller or a failing PSU, rather than failing disks.

As a first step, you would do well to invest in an additional controller to hang your replacement disks off.
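
Once the new pool exists alongside the old one, copying the data with a snapshot and zfs send/receive is straightforward. A rough sketch, with newpool and filecopy as placeholder names:

zfs snapshot -r File@migrate
zfs send -R File@migrate | zfs receive -F newpool/filecopy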

Gordan Bobić
  • Thank you for your quick reply. I'm limited on internal PCIe slots, so my only option is to replace my current controller with a 16-port one. The way I understand ZFS, it is controlled by the OS, so replacing the controller should not affect my current pool? Or will the disks get new addresses with the new controller, so that the existing pool stops working? – Northern Brewer Aug 24 '20 at 17:17
  • It shouldn't affect your current pool, other than the device nodes might change. You should still be able to do `zpool import -d /dev/disk/by-id/ poolname` and use it as is, unless your current controller is exposing them in a funny way, e.g. each disk being exposed as a RAID0 array. If what you currently have is a plain HBA, it should be fine. – Gordan Bobić Aug 24 '20 at 18:10