I am using FreeNAS 11.2-U5.
One disk in a RAIDZ2 pool failed, and a new disk was installed to replace it.
However, by mistake, the Volume Manager added the new disk as a single-disk stripe instead of using it to replace the failed disk.
So there are now:
- the original RAIDZ2 volume, degraded with one disk offline (originally 4 disks), and
- a newly created single-disk stripe volume.
[jehos@freenas ~]$ sudo zpool status
  pool: MAIN
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: scrub repaired 0 in 0 days 06:48:21 with 0 errors on Sun Jun 16 06:48:24 2019
config:
        NAME                                            STATE     READ WRITE CKSUM
        MAIN                                            DEGRADED     0     0     0
          raidz2-0                                      DEGRADED     0     0     0
            gptid/3cbffd2d-e366-11e3-a67c-c8cbb8c95fc0  ONLINE       0     0     0
            gptid/3d98c268-e366-11e3-a67c-c8cbb8c95fc0  ONLINE       0     0     0
            16493801384591910209                        OFFLINE      0     0     0  was /dev/gptid/05be9493-e148-11e5-8ff4-c8cbb8c95fc0
            gptid/3f43ab6c-e366-11e3-a67c-c8cbb8c95fc0  ONLINE       0     0     0
          gptid/4fb8093c-ae3d-11e9-bbd1-c8cbb8c95fc0    ONLINE       0     0     0
I tried to remove the stripe volume, but it failed:
$ sudo zpool detach MAIN gptid/4fb8093c-ae3d-11e9-bbd1-c8cbb8c95fc0
cannot detach gptid/4fb8093c-ae3d-11e9-bbd1-c8cbb8c95fc0: only applicable to mirror and replacing vdevs
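
For reference, I believe the vdev layout can also be checked with the following (it should list the new gptid as its own top-level vdev, not as a child of raidz2-0):

$ sudo zpool list -v MAIN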
If I force the removal of the disk that is now configured as a stripe, I am afraid the entire pool may be broken.
How do I safely remove only the accidentally created stripe volume?
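
In case it helps, this is the rough plan I have in mind but have not dared to run yet, because I am not sure whether removing a top-level data vdev is supported or safe on a pool that still contains a raidz2 vdev (the gptid values below are just the ones from the status output above):

# untested sketch - remove the accidentally added single-disk vdev
$ sudo zpool remove MAIN gptid/4fb8093c-ae3d-11e9-bbd1-c8cbb8c95fc0
# then use the freed disk to actually replace the offline member of raidz2-0
$ sudo zpool replace MAIN 16493801384591910209 gptid/4fb8093c-ae3d-11e9-bbd1-c8cbb8c95fc0

Is something like this the right approach, or will 'zpool remove' also refuse (or worse, damage the pool)?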