I'm running FreeNAS-11.2-U4.1. The server stores VMware vSphere virtual machines, and there are two zvols, Lab and Edari, both in the same pool, SSD-Storage.
The problem is that vSphere can't mount one of the zvols, Edari, so the virtual machines stored on it are inaccessible. The other zvol is fine and I can browse its files.
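I haven't dug very deep on the ESXi side yet. Would it make sense to first check, from an ESXi host, whether the LUN backing Edari is still visible and whether the datastore is being treated as an unresolved/snapshot VMFS volume? I was thinking of something like:

esxcli storage core device list      # is the device backing the Edari datastore still visible to the host?
esxcli storage vmfs snapshot list    # is the volume being detected as a snapshot/unresolved VMFS volume?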
I also get this alert in the FreeNAS web interface (I'm not sure whether it's related to the problem, because the zvol Edari doesn't belong to that pool):
The volume Pool-1.8SSD state is UNKNOWN: Wed, 14 Aug 2019 05:59:38 GMT
But zpool status says nothing about this pool:
root@Storage[~]# zpool status
  pool: SSD-Storage
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:48:30 with 0 errors on Wed Aug 14 12:50:59 2019
config:

        NAME                                            STATE     READ WRITE CKSUM
        SSD-Storage                                     ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/ec475918-925c-11e9-af9b-f4ce46a6411d  ONLINE       0     0     0
            gptid/f1ed0bd1-925c-11e9-af9b-f4ce46a6411d  ONLINE       0     0     0
            gptid/f796acd9-925c-11e9-af9b-f4ce46a6411d  ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:00:05 with 0 errors on Wed Aug 7 03:45:05 2019
config:

        NAME          STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          da0p2       ONLINE       0     0     0

errors: No known data errors
And this is what I get when I look for pools available to import:
root@Storage[~]# zpool import
root@Storage[~]#
This pool is not even listed here. How could ZFS completely forget about a pool, as if it had never existed? I searched the forums and found that this can be caused by hardware RAID, but I'm not using RAID.
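I haven't tried it yet, but would looking for destroyed pools show any trace of it? As far as I know,

zpool import -D    # should list destroyed pools whose labels are still present on disk

would do that, but I didn't want to poke around blindly. This is what gpart shows: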
root@Storage[~]# gpart show
=>        40  488326960  da0  GPT  (233G)
          40       1024    1  freebsd-boot  (512K)
        1064  488308736    2  freebsd-zfs  (233G)
   488309800      17200       - free -  (8.4M)

=>          40  1953459552  da1  GPT  (931G)
            40          88       - free -  (44K)
           128     4194304    1  freebsd-swap  (2.0G)
       4194432  1949265160    2  freebsd-zfs  (929G)

=>          40  1953459552  da2  GPT  (931G)
            40          88       - free -  (44K)
           128     4194304    1  freebsd-swap  (2.0G)
       4194432  1949265160    2  freebsd-zfs  (929G)

=>          40  1953459552  da3  GPT  (931G)
            40          88       - free -  (44K)
           128     4194304    1  freebsd-swap  (2.0G)
       4194432  1949265160    2  freebsd-zfs  (929G)
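If it helps, I could also dump the ZFS labels from the partitions directly, to see whether any of them still carries a label for Pool-1.8SSD instead of SSD-Storage, e.g. for the freebsd-zfs partitions shown above:

zdb -l /dev/da1p2    # print the ZFS vdev label, including the pool name and GUID

I haven't run this yet.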
And I found this in /var/log/debug.log:
Aug 14 12:40:19 Storage uwsgi: [storage.models:123] Exception on retrieving disks for Pool-1.8SSD: list index out of range
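That exception makes me wonder whether the FreeNAS configuration database still contains a stale record for Pool-1.8SSD even though no such pool exists on disk. I haven't touched the database, but if it's safe to look, I assume something like this (with what I believe is the usual FreeNAS 11.x config path) would show which pools the middleware thinks it has:

sqlite3 /data/freenas-v1.db "SELECT * FROM storage_volume;"    # read-only peek at the pools registered in the GUI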
This is the output of zfs list:
root@Storage[~]# zfs list
NAME                               USED  AVAIL  REFER  MOUNTPOINT
SSD-Storage                       1.76T  88.0G   117K  /mnt/SSD-Storage
SSD-Storage/Cload                 60.9G   149G  74.6K  -
SSD-Storage/Edari                  660G   571G   178G  -
SSD-Storage/Lab                   1.04T   923G   232G  -
SSD-Storage/iocage                 858K  88.0G   155K  /mnt/SSD-Storage/iocage
SSD-Storage/iocage/download        117K  88.0G   117K  /mnt/SSD-Storage/iocage/download
SSD-Storage/iocage/images          117K  88.0G   117K  /mnt/SSD-Storage/iocage/images
SSD-Storage/iocage/jails           117K  88.0G   117K  /mnt/SSD-Storage/iocage/jails
SSD-Storage/iocage/log             117K  88.0G   117K  /mnt/SSD-Storage/iocage/log
SSD-Storage/iocage/releases        117K  88.0G   117K  /mnt/SSD-Storage/iocage/releases
SSD-Storage/iocage/templates       117K  88.0G   117K  /mnt/SSD-Storage/iocage/templates
freenas-boot                       760M   224G    64K  none
freenas-boot/ROOT                  760M   224G    29K  none
freenas-boot/ROOT/Initial-Install    1K   224G   756M  legacy
freenas-boot/ROOT/default          760M   224G   756M  legacy
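All the zvols are listed there, so I also thought about checking that their device nodes actually exist, since as far as I understand the iSCSI target exports them from /dev/zvol:

ls -l /dev/zvol/SSD-Storage/    # Edari and Lab should appear here as device nodes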
One last thing: I frequently find this line in /var/log/messages:
ctld: connect(2) failed for 172.19.20.11: Connection refused
172.19.20.11 is my FreeNAS server.
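I haven't investigated the iSCSI side beyond noticing that message. Would it help to check whether ctld is running, listening on the iSCSI port, and actually exporting an extent for Edari? I was thinking of something like this (assuming the standard FreeBSD tools apply on FreeNAS):

service ctld status         # is the CTL iSCSI target daemon running?
sockstat -4l | grep 3260    # is anything listening on the default iSCSI port?
ctladm lunlist              # which LUNs is CTL currently exporting?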
Could you help me to find out what's wrong with the zvol Edari?