
I have a single-drive pool consisting of a 2TB HDD, and two 1TB drives I could use as a mirror - I'd like to add redundancy to my pool.

How can I attach these two drives as a single device so that they can be used as a striped mirror?

Let's say my 2TB drive is sda and the blank 1TB drives are sdb and sdc.

I tried `zpool attach tank sda sdb sdc`, but that says too many arguments.

I tried `zpool attach tank sda sdb`, but that says the device is too small.

I tried `zpool attach tank sda sdb+sdc`, but that says no such device in /dev.

I tried `zpool attach tank sda sdb,sdc`, but that says no such device in /dev.

I've read the manual and searched the web - I am out of ideas.

I guess I could try to create a new striped pool from these two 1TB drives, create a zvol inside it and use that as a mirror device for my primary pool, but that would probably not give me enough capacity for a mirror anyway, plus a lot of unneeded overhead.
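
Roughly, that idea would look like this (untested sketch - the pool name `aux` and the zvol size are just placeholders):

zpool create aux sdb sdc                     # ~2TB striped pool from the two 1TB drives
zfs create -V 1862G aux/zmirror              # zvol intended to mirror sda
zpool attach tank sda /dev/zvol/aux/zmirror  # likely fails: the zvol comes out smaller than sda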

How can I do this?

unfa

2 Answers


This cannot be done directly with ZFS. From the man page:

Virtual devices cannot be nested, so a mirror or raidz virtual device can only contain files or disks. Mirrors of mirrors (or other combinations) are not allowed.

My suggestion is to create a new pool comprising the two 1 TB disks and use something like syncoid to frequently send the first pool's content to the new pool.
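
For example, a minimal sketch - the pool name `backup` is just a placeholder:

zpool create backup sdb sdc   # striped pool, ~2 TB raw, no redundancy of its own
syncoid -r tank backup/tank   # snapshot-based replication; run it periodically from cron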

--- WARNING: clunky workaround below! Do NOT use unless you are REALLY sure!!! ---

Anyway, if you really want to add the two 1 TB disks as a mirror of the 2 TB disk, a workaround can be tried. You can use device-mapper (in its LVM form) to concatenate the two disks and attach the resulting volume to the 2 TB device. For example:

pvcreate /dev/sdb                      # initialize both disks as LVM physical volumes
pvcreate /dev/sdc
vgcreate zvg /dev/sdb                  # create a volume group on the first disk...
vgextend zvg /dev/sdc                  # ...and extend it onto the second
lvcreate --name zdev -l 100%FREE zvg   # one linear volume spanning both disks
zpool attach tank /dev/sda /dev/zvg/zdev
zpool status

You can achieve a similar (arguably even better) result with mdadm, creating a RAID0 device and attaching it to the zpool:

mdadm --create /dev/md127 --level=0 --raid-devices=2 /dev/sdb /dev/sdc   # striped (RAID0) array
zpool attach tank /dev/sda /dev/md127
zpool status

This approach is not recommended. Use it at your own risk.

shodanshok
    I want to see some of the Lego projects you built as a kid. – Andrew Henle Feb 26 '18 at 23:07
  • There's a typo: `zpool attack`. I can't correct it without performing bigger edits. – unfa Feb 27 '18 at 11:31
  • What are the risks associated with this approach? I guess my mirror could be lost from the ZFS pool - right? Can anything happen to my main 2TB disk in this configuration? I will be backing up the pool to a separate one anyway. – unfa Feb 27 '18 at 11:32
  • @unfa corrected, thanks for reporting! Regarding the risks: a) it is a clunky configuration which you need to remember when operating on your system and b) any problem with the md array *or* one of the two disks will degrade the zpool. Anyway, your 2TB disk should be safe. Sidenote: of the two approaches, I would probably choose the md-based one rather than LVM. – shodanshok Feb 27 '18 at 12:11
  • I'll try to go with the md solution and see how it works. Thanks! – unfa Feb 27 '18 at 18:48
  • It is resilvering and both 1TB disks are under the same write load, so it's striped - just what I wanted! – unfa Feb 27 '18 at 19:07
  • Now the 1+1 TB array has disappeared - probably the USB failed for a while. I wonder if I can restore the array in `md` without recreating it from scratch? – unfa Mar 05 '18 at 23:28
  • Have you tried to reboot the machine? `md` should be able to automatically scan and activate the array. Otherwise, report back some information using `mdadm -E /dev/sdb; mdadm -E /dev/sdc`. Anyway, using USB disks for md/ZFS arrays is a **bad** idea. – shodanshok Mar 06 '18 at 08:30
  • Rebooting didn't help. It looks like md can't identify the array disks after rebooting, and gparted identifies both as ZFS. So it looks like it's not a USB fault, but maybe a problem with defining the array in a persistent manner. I have no definition of this array in the /etc/mdadm/mdadm.conf file; I'm trying to figure out how to do that so the array will persist between reboots (see the sketch below these comments). I know USB storage is not suitable for mission-critical storage. I'm just playing around, and I guess some redundancy in my ZFS pool is better than no redundancy, until I get a second SATA HDD to attach a proper mirror. – unfa Mar 06 '18 at 23:35
  • I've created a separate question for the md array problem: https://serverfault.com/questions/900363/mdadm-array-disappears-after-system-reboot – unfa Mar 06 '18 at 23:49
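
For reference, the usual way to make an md array persist across reboots on Debian/Ubuntu-style systems is to record it in mdadm.conf and rebuild the initramfs (a sketch, not tested against this exact setup):

mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # record the array definition
update-initramfs -u                              # rebuild the initramfs so the array assembles at boot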

To add the two new disks to the pool, you could use `zpool add tank mirror sdb sdc`, but this adds the new pair as a separate mirror vdev striped with the existing disk. That adds capacity, not redundancy: the pool is still lost if the original 2TB disk fails.

You would need to use the `zpool attach` command to add a disk to the existing vdev, but you can't use a striped pair to back a single disk - you would have to add a new 2TB disk to make it a mirrored pair.
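
For example, assuming the new 2TB disk shows up as sdd (a hypothetical name):

zpool attach tank sda sdd   # turns the single-disk vdev into a two-way mirror
zpool status                # watch the resilver progress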

Andrew