I recently converted a mirrored mdadm partition on Ubuntu 16.04 to a mirrored ZFS pool, or so I thought. After a few days of using the pool, I rebooted the system and the ZFS pool disappeared: "zpool list" does not show the lost pool, only another zpool that I have.
My questions are:
- Can I recover the missing ZFS pool?
- How do I completely remove the partitions from mdadm, so this doesn't happen again?
/proc/mdstat now shows a new, inactive md device using the partitions that I had used for the ZFS pool:
md127 : inactive sdb3[1] sda3[0]
2047737856 blocks super 1.2
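My guess is that mdadm auto-assembled this array at boot and is now holding /dev/sda3 and /dev/sdb3, which would prevent ZFS from seeing the pool. If that's right, I assume a recovery attempt would look something like this (untested; the actual pool name would come from the scan):

mdadm --stop /dev/md127
zpool import              # scan for importable pools
zpool import <poolname>   # import the pool found by the scan

I don't want to run this blindly on the only copy of the data, though.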
Before my conversion attempt, the array appeared in /proc/mdstat like this:
md3 : active raid1 sdb3[1] sda3[0]
1023868928 blocks super 1.2 [2/2] [UU]
bitmap: 0/8 pages [0KB], 65536KB chunk
To convert the partitions to ZFS, I:
- commented out the array's entry in /etc/mdadm/mdadm.conf
- stopped the RAID array with an mdadm command (I didn't note down the exact command; my best reconstruction is below).
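It was probably something like this ("tank" is a placeholder; I don't remember the actual pool name, and I may have needed -f if zpool complained about the existing md member):

mdadm --stop /dev/md3
zpool create tank mirror /dev/sda3 /dev/sdb3

The pool then worked normally until the reboot.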
/etc/mdadm/mdadm.conf contains:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root@localhost
# definitions of existing MD arrays
ARRAY /dev/md2 UUID=345b919e:df7ab6fa:a4d2adc2:26fd5302
#ARRAY /dev/md/3 metadata=1.2 UUID=a276579e:ac9ae714:8e35cebf:75ca9fad name=...
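Regarding question 2: as the comments in mdadm.conf note, mdadm scans all partitions for MD superblocks by default, so I suspect md127 was auto-assembled at boot from superblocks still present on sda3 and sdb3, even though I removed the ARRAY line. My guess is that preventing this means wiping those superblocks and rebuilding the initramfs, roughly:

mdadm --stop /dev/md127
mdadm --zero-superblock /dev/sda3 /dev/sdb3
update-initramfs -u

I haven't run this because I don't know whether --zero-superblock could also damage the ZFS labels on those partitions, and I'd like to recover the pool first.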