
I am using FreeNAS 11.2-U5.

One disk in a RAIDZ2 pool has failed, and a new disk has been installed to replace it.

However, by mistake, the Volume Manager added the new disk to the pool as a single-disk stripe instead of replacing the failed disk.

So, there are now:

  1. A degraded RAIDZ2 vdev with one of its 4 disks offline
  2. A single-disk stripe vdev that was added by mistake
[jehos@freenas ~]$ sudo zpool status
  pool: MAIN
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: scrub repaired 0 in 0 days 06:48:21 with 0 errors on Sun Jun 16 06:48:24 2019
config:

        NAME                                            STATE     READ WRITE CKSUM
        MAIN                                            DEGRADED     0     0     0
          raidz2-0                                      DEGRADED     0     0     0
            gptid/3cbffd2d-e366-11e3-a67c-c8cbb8c95fc0  ONLINE       0     0     0
            gptid/3d98c268-e366-11e3-a67c-c8cbb8c95fc0  ONLINE       0     0     0
            16493801384591910209                        OFFLINE      0     0     0  was /dev/gptid/05be9493-e148-11e5-8ff4-c8cbb8c95fc0
            gptid/3f43ab6c-e366-11e3-a67c-c8cbb8c95fc0  ONLINE       0     0     0
          gptid/4fb8093c-ae3d-11e9-bbd1-c8cbb8c95fc0    ONLINE       0     0     0

I tried to remove the stripe vdev, but it failed.

$ sudo zpool detach MAIN gptid/4fb8093c-ae3d-11e9-bbd1-c8cbb8c95fc0
cannot detach gptid/4fb8093c-ae3d-11e9-bbd1-c8cbb8c95fc0: only applicable to mirror and replacing vdevs

If I forcibly remove the striped disk from the pool, the entire pool may be broken.

How do I safely remove only the accidentally created stripe volume?

s_jeho

1 Answer


Back up your pool!

You're already close to losing data, and any further mishap could push you even closer to the brink, or over it.
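A minimal way to do that, assuming you have a second pool or machine with enough free space (the destination name backuppool and the snapshot name pre-remove below are just placeholders), is a recursive snapshot piped through zfs send:

zfs snapshot -r MAIN@pre-remove
zfs send -R MAIN@pre-remove | zfs receive -F backuppool/MAIN

The -R flag includes all child datasets, snapshots, and properties, so the copy can stand in for the whole pool if anything goes wrong.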

You can try:

zpool remove -n MAIN gptid/4fb8093c-ae3d-11e9-bbd1-c8cbb8c95fc0

but I don't think that will work. The -n option tells zpool to only report what would be done, without actually doing it.

-n     Do not actually perform the removal ("no-op"). Instead, print the estimated amount of memory
        that will be used by the mapping table after the removal completes. This is nonzero only for
        top-level vdevs.

If it looks like it would be allowed, try it again without the -n.
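For reference, the real removal followed by a sanity check would look like this (the gptid is the one from your zpool status output; keep in mind that top-level device removal, where the ZFS version supports it at all, does not, as far as I know, work on pools that contain a raidz vdev, which is why I'm not optimistic):

zpool remove MAIN gptid/4fb8093c-ae3d-11e9-bbd1-c8cbb8c95fc0
zpool status MAIN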

Unfortunately, I suspect that you will need to back up your entire pool, then destroy the pool, re-create it, and restore from backup. In general, it is not possible to remove VDEVs from a ZFS pool without destroying the pool and re-creating it.
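If it does come to that, the rough shape of the procedure, assuming the backup was taken with zfs send as sketched above and the replacement pool is built from four disks (the da0 through da3 device names are placeholders, and on FreeNAS you would normally do the create step through the GUI so the partition and swap layout match what the middleware expects), is:

zpool destroy MAIN
zpool create MAIN raidz2 da0 da1 da2 da3
zfs send -R backuppool/MAIN@pre-remove | zfs receive -F MAIN

Verify that the backup is complete and readable before the destroy step; once the pool is destroyed there is no undo.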

Jim L.