
I have a Gentoo-based server with a ZFS pool. Two drives in the same vdev failed, and the two hot spares kicked in automatically to replace them. While they were resilvering, both spares themselves failed as well.

I tried to detach both spares, but that did not remove them; I had to zpool remove them instead. I then added a new spare. ZFS accepted it but took no action to replace either failed disk. When I tried to manually replace one of the failed disks, it falsely complained with

cannot replace...already in replacing/spare config; wait for completion or use 'zpool detach'
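
Roughly, the sequence I ran was something like the following (the pool name "tank" and the spare device names are placeholders):

    zpool detach tank spare-disk-1                # accepted, but the spares stayed in the config
    zpool detach tank spare-disk-2
    zpool remove tank spare-disk-1 spare-disk-2   # this is what finally removed them
    zpool add tank spare new-spare-disk
    zpool replace tank ata-WDC_WD2002FYPS-01U1B1_WD-WCAVY3048370 new-spare-disk   # this step gives the error above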

When I tried to detach one of the failed disks as it suggested, it complained with

...only applicable to mirror and replacing vdevs

I ran zpool clear, and that caused ZFS to resilver one of the failed drives (presumably the failure was a hiccup rather than a complete failure for that drive).
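
For completeness, those two steps were roughly (with a placeholder pool name and device):

    zpool detach tank failed-disk    # fails with the "only applicable to mirror and replacing vdevs" error
    zpool clear tank                 # this is what kicked off the resilver of the recovered drive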

However, the same complaints keep appearing, and I can't get ZFS to replace the second, completely failed drive at all:

raidz2-5                                      DEGRADED     0     0     0
        ata-WDC_WD2002FYPS-01U1B1_WD-WCAVY3045198   ONLINE       0     0     0
        ata-WDC_WD2002FYPS-01U1B1_WD-WCAVY2860617   ONLINE       0     0     0
        ata-WDC_WD2002FYPS-01U1B1_WD-WCAVY3086676   ONLINE       0     0     0
        ata-WDC_WD2002FYPS-01U1B1_WD-WCAVY3048370   UNAVAIL      0     0     0
        ata-Hitachi_HDS723020BLA642_MN5220F30H9WPF  ONLINE       0     0     0
        ata-WDC_WD2002FYPS-02W3B0_WD-WCAVY5311391   ONLINE       0     0     0

I have also gotten this error message when trying to force-replace the UNAVAIL drive with the spare:

...is in use and contains a unknown filesystem.
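
The forced attempt looked roughly like this (again with placeholder names; -f tells zpool replace to use a device it thinks is in use):

    zpool replace -f tank ata-WDC_WD2002FYPS-01U1B1_WD-WCAVY3048370 new-spare-disk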

Some answers online suggested adding the ashift, which is 12 for this pool, to the replace command, but that didn't help.
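
That is, something along the lines of:

    zpool replace -o ashift=12 tank ata-WDC_WD2002FYPS-01U1B1_WD-WCAVY3048370 new-spare-disk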
