
While replacing a bad disk, I accidentally added the new disk to my zpool the wrong way. Now I can't remove it, because it is listed as a top-level device with no redundancy. How do I remove da2? Running zpool remove pdx-zfs-02 da2 doesn't work; it returns "cannot remove da2: only inactive hot spares, cache, top-level, or log devices can be removed". ZFS on FreeBSD/FreeNAS doesn't allow the removal of data devices that lack redundancy, and there is no force option. How can I work around this?

NAME                                            STATE     READ WRITE CKSUM
pdx-zfs-02                                      DEGRADED     0     0     0
  raidz2-0                                      DEGRADED     0     0     0
    gptid/c459110a-a73c-1a49-b12c-f03fbec6eca6  FAULTED    158 25.3K     0  too many errors
    gptid/8c87e988-7832-1e44-9c45-abe95ee2d8f7  ONLINE       0     0     0
    gptid/3b4be4d0-136e-41e3-c546-d5c4ba2b3142  ONLINE       0     0     0
    gptid/209e8c9c-ff66-6f6a-e38b-9045c0b6c3ec  ONLINE       0     0     0
    gptid/ea8b834a-0692-464b-fd29-a877bf8f7bb9  ONLINE       0     0     0
    gptid/cf35d740-ea0b-bae6-9e4f-b7a31d66ab1d  ONLINE       0     0     0
    gptid/fe908e73-c93b-72ed-d4bb-9eae78bcc5b6  ONLINE       0     0     0
    gptid/bdf03e4d-ba71-a4cc-dd90-edfd6446bac3  ONLINE       0     0     0
    gptid/302bacc1-273a-54c9-c8f9-f458640b0d60  ONLINE       0     0     0
    gptid/d94ea326-d5aa-f062-9662-953908ce0b53  ONLINE       0     0     0
  raidz2-1                                      ONLINE       0     0     0
    gptid/3c1b1d3b-3977-11e6-b1f0-0025902b035a  ONLINE       0     0     0
    gptid/3ec0ba4a-3977-11e6-b1f0-0025902b035a  ONLINE       0     0     0
    gptid/40d8b781-3977-11e6-b1f0-0025902b035a  ONLINE       0     0     0
    gptid/43387eae-3977-11e6-b1f0-0025902b035a  ONLINE       0     0     0
    gptid/45800439-3977-11e6-b1f0-0025902b035a  ONLINE       0     0     0
    gptid/47df2694-3977-11e6-b1f0-0025902b035a  ONLINE       0     0     0
  da2                                           ONLINE       0     0     0
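For context, the stray da2 vdev above is the result of running zpool add, which stripes a new single-disk top-level vdev into the pool; the intended command for swapping out a faulted disk is zpool replace. A hedged sketch, using the pool and gptid names from the status output above (the zpool add line is shown commented out because it is the mistake, not a step to run):

```shell
# What likely happened: this adds da2 as a new single-disk top-level
# vdev, striped alongside the raidz2 vdevs -- and on this version of
# ZFS it cannot be undone:
#   zpool add pdx-zfs-02 da2

# What was intended: replace the FAULTED member inside raidz2-0 with
# the new disk, then let the pool resilver onto it:
zpool replace pdx-zfs-02 gptid/c459110a-a73c-1a49-b12c-f03fbec6eca6 da2

# Watch the resilver progress:
zpool status pdx-zfs-02
```

Note that zpool add also accepts -n for a dry run, which prints the resulting pool layout without changing anything; it is a useful guard against exactly this mistake.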
pjensen
  • There's a feature in the works to allow this, however right now it's still a pull request on the illumos version of ZFS, so it could be a little while before it gets ported to FreeBSD (and longer before it gets shipped in a new release). See https://github.com/openzfs/openzfs/pull/251 for more details. – Dan Jun 07 '17 at 13:06

1 Answer


Unfortunately, you need to destroy and recreate your pool. You can use zfs send and zfs receive to move the data to another pool and back without losing any ZFS-specific information (datasets, snapshots, properties), but the data does have to be moved off the pool first.
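A sketch of that workaround, assuming a scratch pool with enough free space; the backup pool name, snapshot name, and member disk list below are placeholders, not taken from the question:

```shell
# 1. Take a recursive snapshot of everything in the affected pool.
zfs snapshot -r pdx-zfs-02@migrate

# 2. Replicate the whole pool to a scratch pool. -R sends a full
#    replication stream (child datasets, snapshots, properties).
zfs send -R pdx-zfs-02@migrate | zfs receive -F backup/pdx-zfs-02

# 3. Destroy and recreate the pool without the stray da2 vdev.
#    (Member disks below are placeholders -- use your real devices.)
zpool destroy pdx-zfs-02
zpool create pdx-zfs-02 raidz2 da0 da1 da3 da4 da5 da6 da7 da8 da9

# 4. Send the data back to the rebuilt pool.
zfs send -R backup/pdx-zfs-02@migrate | zfs receive -F pdx-zfs-02
```

If the scratch pool is on another machine, the send stream can be piped over ssh instead (zfs send -R ... | ssh otherhost zfs receive ...).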

user121391
  • If we want to remove da2 (say 2GB) from the pool, we must first move the whole pool (say 200GB) to another pool (say another 200GB)? Seems not easy if not impossible to work with zfs when we have already used >50% of disk space. – Beeno Tung May 26 '21 at 15:23