
Original fault:

  pool: datastore7
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://zfsonlinux.org/msg/ZFS-8000-4J
  scan: resilvered 72K in 0 days 00:00:00 with 0 errors on Tue Feb  2 14:12:50 2021
config:

        NAME                                   STATE     READ WRITE CKSUM
        datastore7                             DEGRADED     0     0     0
          raidz2-0                             DEGRADED     0     0     0
            ata-ST12000NM001G-2MV103_ZLW1MN17  ONLINE       0     0     0
            ata-ST12000NM001G-2MV103_ZLW1QTC1  ONLINE       0     0     0
            ata-ST12000NM001G-2MV103_ZL26MYHM  ONLINE       0     0     0
            ata-ST12000NM001G-2MV103_ZL269WB4  ONLINE       0     0     0
            ata-ST12000NM001G-2MV103_ZLW1LH4W  ONLINE       0     0     0
            ata-ST12000NM001G-2MV103_ZLW1NS9P  ONLINE       0     0     0
            ata-ST12000NM001G-2MV103_ZLW1EZ91  ONLINE       0     0     0
            ata-ST12000NM001G-2MV103_ZLW1M9F9  ONLINE       0     0     0
            ata-ST12000NM001G-2MV103_ZLW1NHRW  ONLINE       0     0     0
            ata-ST12000NM001G-2MV103_ZLW1MX0Z  ONLINE       0     0     0
            11549522135666300014               FAULTED      0     0     0  was /dev/sdbe1
            ata-ST12000NM001G-2MV103_ZLW1MVV5  ONLINE       0     0     0
            ata-ST12000NM001G-2MV103_ZLW0C5E5  ONLINE       0     0     0
            ata-ST12000NM001G-2MV103_ZL2791Y8  ONLINE       0     0     0
            ata-ST12000NM001G-2MV103_ZLW1F68Y  ONLINE       0     0     0

I took the faulted drive offline and replaced it with a new drive in the same slot.

Now my status looks like this:

  pool: datastore7
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: resilvered 0B in 0 days 00:00:00 with 0 errors on Tue Jun 29 14:49:39 2021
config:

        NAME                                   STATE     READ WRITE CKSUM
        datastore7                             DEGRADED     0     0     0
          raidz2-0                             DEGRADED     0     0     0
            ata-ST12000NM001G-2MV103_ZLW1MN17  ONLINE       0     0     0
            ata-ST12000NM001G-2MV103_ZLW1QTC1  ONLINE       0     0     0
            ata-ST12000NM001G-2MV103_ZL26MYHM  ONLINE       0     0     0
            ata-ST12000NM001G-2MV103_ZL269WB4  ONLINE       0     0     0
            ata-ST12000NM001G-2MV103_ZLW1LH4W  ONLINE       0     0     0
            ata-ST12000NM001G-2MV103_ZLW1NS9P  ONLINE       0     0     0
            ata-ST12000NM001G-2MV103_ZLW1EZ91  ONLINE       0     0     0
            ata-ST12000NM001G-2MV103_ZLW1M9F9  ONLINE       0     0     0
            ata-ST12000NM001G-2MV103_ZLW1NHRW  ONLINE       0     0     0
            13954818631282842372               OFFLINE      0     0     0  was /dev/disk/by-id/ata-ST12000NM001G-2MV103_ZLW1MX0Z-part1
            sdbe                               ONLINE       0     0     0
            ata-ST12000NM001G-2MV103_ZLW1MVV5  ONLINE       0     0     0
            ata-ST12000NM001G-2MV103_ZLW0C5E5  ONLINE       0     0     0
            ata-ST12000NM001G-2MV103_ZL2791Y8  ONLINE       0     0     0
            ata-ST12000NM001G-2MV103_ZLW1F68Y  ONLINE       0     0     0

Every replace command I try gives me an error:

-bash-4.2$ sudo zpool replace datastore7 13954818631282842372 sdbe
[sudo] password for ash.hill:
/dev/sdbe is in use and contains a unknown filesystem.

I'd love some direction; I'm very new to ZFS. Luckily there is no data on this pool yet, but I would still like to know how to recover.
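Reading the second status output, a possible explanation (an assumption on my part, not confirmed): `sdbe` already shows ONLINE inside `raidz2-0`, so ZFS refuses to use it again as a replacement target, hence "in use". The entry that is actually OFFLINE was, per the status message, "taken offline by the administrator", and the status's own suggested action is `zpool online`. A minimal sketch of that path, assuming the offlined disk (serial ZLW1MX0Z) is still present and healthy:

```shell
# The OFFLINE vdev was offlined administratively, not faulted; if the
# underlying disk is still healthy it can simply be brought back online.
sudo zpool online datastore7 13954818631282842372

# Then check whether the pool resilvers and returns to ONLINE.
sudo zpool status datastore7
```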

Thanks!

  • Did you recycle a drive without wiping it first? This is the usual cause of this problem. – Michael Hampton Jun 29 '21 at 19:17
  • The replacement drive was brand new. So it should have been in an unformatted state. By wiping, do you mean formatting the new drive? to what FS? – Ashley Hill Jun 29 '21 at 22:40
  • I mean the usual definition, erasing the disk so that it is completely blank. I suspect something got written to your disk before you added it to the zpool. It could also be you added the _wrong disk_ and ZFS has just saved you from a real disaster. Try removing it and using the ID instead, like all the others. – Michael Hampton Jun 29 '21 at 22:45
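The comment above suggests verifying which physical disk is which and addressing disks by their stable IDs. A hedged sketch of those checks (the `sdX` name is a placeholder, and the wipe step is destructive, so the device must be double-checked first):

```shell
# Map kernel names (sdbe, ...) to stable serial-numbered names; the
# serials printed here can be checked against the drive labels.
ls -l /dev/disk/by-id/ | grep -v part

# Confirm the model/serial of the disk before doing anything destructive.
sudo smartctl -i /dev/sdX    # sdX is a placeholder for the new drive

# If the new drive turns out to carry leftover signatures, blank them
# before handing the disk to ZFS.  DESTRUCTIVE: wrong device = data loss.
sudo wipefs -a /dev/sdX
```

Using the `/dev/disk/by-id/ata-...` names in `zpool` commands, as the other vdevs in this pool already do, avoids targeting the wrong disk when kernel names shift between boots.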
