Questions tagged [zfs]

ZFS is a modern file system and volume manager originally developed by Sun Microsystems and licensed under the CDDL. It is a copy-on-write file system with support for large storage arrays, protection against corruption, snapshots, clones, compression, deduplication and NFSv4 ACLs. An open-source fork of ZFS can be found at http://open-zfs.org/, which is supported by ZFSonlinux.org, illumos.org and ZFS developers in the FreeBSD & Mac OS X communities.
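
As a quick illustration of the features listed above, a minimal session might look like the sketch below (the pool name "tank" and the dataset names are placeholders, not anything from this page):

    # dataset with transparent compression
    zfs create -o compression=lz4 tank/projects
    # point-in-time snapshot, then a writable clone of it
    zfs snapshot tank/projects@before-upgrade
    zfs clone tank/projects@before-upgrade tank/projects-testing
    # deduplication is a per-dataset property (memory hungry, use with care)
    zfs set dedup=on tank/projects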

ZFS is supported out of the box on a number of operating systems:

  • Solaris 10
  • Oracle Solaris 11 Express
  • FreeBSD
  • NexentaStor
  • illumos - specifically, illumos-based distributions, like:
    • Nexenta's illumian
    • Joyent's SmartOS (server OS with strong focus on virtualization)
    • OmniTI's OmniOS (general purpose server OS)
    • OpenIndiana (general purpose desktop/server OS)

Due to license incompatibilities, the CDDL-licensed ZFS code cannot be distributed as part of the GPL-licensed Linux kernel. Alternative methods, such as building ZFS as an out-of-tree kernel module, are available for running ZFS on Linux.
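
In practice this usually means installing ZFS as an out-of-tree kernel module (for example via DKMS or a prebuilt module package). A rough sketch on a Debian/Ubuntu-style system, with placeholder device names:

    apt install zfsutils-linux      # userland tools plus the kernel module package
    modprobe zfs                    # load the module
    zpool create tank mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2
    zpool status tank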

1391 questions
1 vote · 2 answers

Samba share available space is smaller than zfs pool size

I have a FreeNAS server with a ZFS pool. Configuration: 6 x 2TB disks in RAIDZ2. 15.5 TB raw space of disks. # zpool list -v NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT freenas-boot …
XorOrNor (241)
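
The usual explanation for this kind of gap is that zpool list reports raw capacity including RAIDZ2 parity, while the free space Samba advertises comes from the dataset's usable space. A quick comparison, with "tank" as a placeholder pool name:

    zpool list -v tank                        # SIZE/FREE include parity overhead
    zfs list -o name,used,avail,refer tank    # what the share actually has available
    # with 6 x 2 TB in RAIDZ2 only 4 disks' worth of space is usable, and
    # decimal TB vs binary TiB shrinks the reported number further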
1 vote · 0 answers

Why does this systemd service not run at the right time (loading encryption keys from a network drive which are required for lxc containers)?

In Debian with systemd, I use zfs and lxc. My zfs datasets are encrypted and their keys can be loaded from a network host via my /etc/zfs/zfs-load-key.sh script. My LXC containers are started by lxc.service. Loading the keys requires the network up…
divB (568)
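
A common pattern for this is a oneshot unit ordered after the network is up and before the containers start; the sketch below is an assumption built around the question's /etc/zfs/zfs-load-key.sh, not a tested configuration:

    # /etc/systemd/system/zfs-load-key.service
    [Unit]
    Description=Load ZFS encryption keys from the network
    Wants=network-online.target
    After=network-online.target zfs-import.target
    Before=lxc.service

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/etc/zfs/zfs-load-key.sh

    [Install]
    WantedBy=lxc.service

Enable it with systemctl enable zfs-load-key.service; the usual pitfall is that zfs-mount.service runs long before the network is online, so datasets needed that early cannot be unlocked this way.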
1 vote · 2 answers

raidz1 6x4TB = 16.9t?

I am running Ubuntu server with the latest version of zfs-utils. I installed 6x4TB disks (lsblk -b shows all disks partition 1 size=4000787030016) and created a raidz1 configuration with all 6 disks. The raidz calculator website said I should see…
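
A rough sketch of the arithmetic, using the 4000787030016-byte partition size from the question: vendors count capacity in decimal TB, the tools report binary TiB, and raidz1 gives up one disk's worth of space to parity:

    DISK=4000787030016                     # bytes per disk, from lsblk -b
    echo "raw:    $((6 * DISK)) bytes"     # ~24.0 TB decimal, ~21.8 TiB (what zpool list shows)
    echo "usable: $((5 * DISK)) bytes"     # ~20.0 TB decimal, ~18.2 TiB before overhead
    # zfs list reports less again: a ~3% slop reservation plus raidz allocation
    # padding at ashift=12 can plausibly land near the 16.9T the tools report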
1 vote · 0 answers

How to boot ZFS root filesystem after setting dnodesize = auto (making grub unable to read the disks)

I have a proxmox (v5.4 I think) installation on top of ZFS in a server with 6 disks. There are 2 pools: rpool in a mirror of two SSDs that has the proxmox root filesystem and some containers and zvols. And the other 4 disks are in another pool (HDD)…
Héctor (141)
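
The underlying problem is that GRUB cannot read datasets whose dnodes were written with a dnodesize larger than legacy, and setting the property back does not rewrite existing dnodes. A hedged sketch of how this is usually approached (dataset names are placeholders based on the question's rpool):

    # check what the datasets GRUB has to read are using
    zfs get dnodesize rpool/ROOT
    # reverting only affects newly written dnodes; existing data keeps large dnodes
    zfs set dnodesize=legacy rpool/ROOT
    # newer OpenZFS releases sidestep this by keeping /boot on a separate pool created
    # with only GRUB-readable features, e.g. zpool create -o compatibility=grub2 bpool ...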
1 vote · 0 answers

ZFS: inconsistent allocated space after changing recordsize

some info/setup: pool was originally created with no datasets + default recordsize (128K) + ashift=12 + no compression + using default checksum (which I believe is fletcher4) I copied some videos (destination was /mnt/mystorage/myvideos) (the total…
mrjayviper (187)
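
One likely factor: recordsize only applies to blocks written after the property is changed, so files copied earlier keep their old block size until they are rewritten. A quick check, assuming the dataset behind /mnt/mystorage/myvideos is mystorage/myvideos:

    zfs get recordsize,used,logicalused mystorage/myvideos
    # rewriting the files re-allocates them with the current recordsize
    cp -a /mnt/mystorage/myvideos /mnt/mystorage/myvideos.new
    rm -r /mnt/mystorage/myvideos && mv /mnt/mystorage/myvideos.new /mnt/mystorage/myvideos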
1 vote · 0 answers

proxmox zfs pool not importing

Today I restarted my proxmox server which has a zfs pool for my data. The zfs pool did not come back up after the restart and is not visible in the proxmox UI. When I try importing the pool manually I get the following result: zpool import…
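
Without the full error message it is hard to say more, but the usual first diagnostics look like the sketch below ("tank" is a placeholder pool name):

    zpool import                        # list pools the system can see, with their status
    zpool import -d /dev/disk/by-id     # rescan using stable device names
    zpool import -f tank                # force-import if it was last used by another host
    zpool import -F tank                # last resort: roll back a few transactions (may lose recent writes)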
1 vote · 1 answer

Intel DH895XCC Series QAT and ZFS (2.0.0) - custom initramfs with dracut

We would like to run ZFS with QAT offloading for compressions and checksums. The distribution is CentOS 8.2 with the stock kernel: [root@dellqat ~]# uname -a Linux dellqat 4.18.0-193.19.1.el8_2.x86_64 #1 SMP Mon Sep 14 14:37:00 UTC 2020 x86_64…
fstrati70 (11)
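
On a dracut-based system such as CentOS 8, the main step is regenerating the initramfs for the running kernel so it picks up the rebuilt ZFS (and QAT) modules; a sketch, assuming the zfs dracut module shipped with the ZFS packages is installed:

    dracut --list-modules | grep zfs                              # confirm the zfs dracut module exists
    dracut -f --add zfs /boot/initramfs-$(uname -r).img $(uname -r)
    lsinitrd /boot/initramfs-$(uname -r).img | grep -E 'zfs|qat'  # verify the modules were included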
1 vote · 1 answer

zfs cannot create snapshot, out of space

I have a disk with these layers: sata disk, luks, zpool, ext4 The ext4 fs was created with these commands: cryptsetup -v luksFormat /dev/sdb cryptsetup luksOpen /dev/sda store02 zpool create zstore02 /dev/mapper/store02 zfs create -V 1600G…
nagylzs (759)
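
A zvol created with zfs create -V is non-sparse, so its full size is reserved up front (refreservation) and snapshots then need additional free pool space for copy-on-write. A hedged sketch of checking and relaxing that, with <zvol> standing in for the volume name elided in the question:

    zfs get volsize,refreservation,usedbyrefreservation zstore02/<zvol>
    zpool list zstore02                    # compare FREE against the reservation
    # dropping the reservation makes the zvol sparse; the ext4 layer can then hit
    # ENOSPC if the pool really fills up
    zfs set refreservation=none zstore02/<zvol>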
1 vote · 0 answers

znapzend no longer works after upgrade to Ubuntu Server 20.04.1

Ubuntu Server was upgraded from 18.04 LTS to 20.04.1 LTS and subsequently znapzend service fails to start with the reason being: znapzend[3436]: ListUtil.c: loadable library and perl binaries are mismatched (got handshake key 0xde00080, needed…
mrdrthom (11)
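
That message generally means znapzend's compiled Perl modules were built against the 18.04 Perl and are now being loaded by the 20.04 Perl; rebuilding or reinstalling znapzend against the new interpreter is the usual fix. A sketch, assuming a from-source install as the upstream project documents (the path is a placeholder):

    cd /opt/znapzend-source                       # wherever the source tree lives
    ./configure --prefix=/opt/znapzend && make && make install
    systemctl restart znapzend.service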
1 vote · 0 answers

One of my pools disappears after reboot

One of my pools disappears after reboot. I have Debian Buster Root on ZFS and my bpool and rpool pools are OK. root@ZFSTEST:~# zpool status mkpool pool: mkpool state: ONLINE scan: scrub repaired 0B in 0 days 00:22:38 with 0 errors on Sun Oct 11…
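
On the Debian Root-on-ZFS layout, bpool and rpool are imported by the initramfs, while any additional pool is imported at boot from /etc/zfs/zpool.cache by zfs-import-cache.service; a pool missing from that cache file simply never comes back after a reboot. A sketch of the usual fix, using the question's pool name:

    zpool set cachefile=/etc/zfs/zpool.cache mkpool
    systemctl enable zfs-import-cache.service zfs-mount.service zfs.target
    systemctl status zfs-import-cache.service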
1 vote · 0 answers

Recently my NFS RAID performance has gone from very good to basically unusable

First, thanks for any help anyone is willing to give! I have two ubuntu 20.04 servers with two different software RAIDs on them. One is music and movies (RAID6) and the other is personal docs and photos (RAIDZ2). I mount and manage these from my…
GnomGnom (11)
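
Narrowing down whether the slowdown is in the disks, the RAID layer or NFS usually starts with per-device statistics on the server; a few generic diagnostics (pool and array names are placeholders, and the RAID6 array is assumed to be mdadm-based):

    zpool status -v && zpool iostat -v 5    # per-vdev latency/throughput for the RAIDZ2 pool
    cat /proc/mdstat                        # health and resync state of the md RAID6 array
    iostat -x 5                             # per-disk utilisation and wait times
    nfsstat -s                              # NFS server operation counts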
1 vote · 1 answer

ZFS - is it safe to remove files served by ZFS during a resilver/rebuild?

I have a Nexenta ZFS system serving a large NFS volume (using ~85% of 250 TB). One of the 70 disks failed over a week ago, and the system is resilvering a hot spare without issues (other than the large performance losses due to the intense…
ascendants (163)
1 vote · 2 answers

zfs is mounted after mongod starts: how do I set the boot order?

Fairly simple problem, I think, but I can't seem to get my head around a fix. I'm on Ubuntu 18.04.3 and mounting zfs volumes with mongo-data at boot correctly with the config below in…
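
The usual approach is a systemd drop-in that orders mongod after the ZFS mounts; a sketch, assuming the standard mongod.service unit (the ordering shown is generic, not taken from the question's config):

    # systemctl edit mongod
    # creates /etc/systemd/system/mongod.service.d/override.conf containing:
    [Unit]
    After=zfs-mount.service zfs.target

    # then: systemctl daemon-reload && systemctl restart mongod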
1 vote · 1 answer

How can I monitor the reliability of connections to network shares?

I can use any of Windows Pro or Server Standard, or CentOS to do this monitoring (I guess). It seems there are some complex issues going on with AWS Deadline jobs which I don't expect anyone to know about. The jobs fail a lot. The smoking gun (I…
bluesquare (137)
1 vote · 0 answers

Should I partition or format a disk before replacing a FAULTED disk in a ZFS pool?

I recently had a FAULTED disk in a ZFS pool and wanted to replace it. I simply put the disk in and let it resilver. Afterwards, I noticed the disk was partitioned differently than the other disks and I could not boot anymore. This turned out to be a…
Wouter (121)
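
For a plain data vdev, zpool replace handles partitioning on its own; it is only on a bootable (root-on-ZFS) pool that the new disk also needs the partition layout and boot loader copied over. A hedged sketch with placeholder device names:

    # data pool: hand the whole disk to zfs
    zpool replace tank /dev/disk/by-id/ata-OLDDISK /dev/disk/by-id/ata-NEWDISK
    # boot pool: clone the partition table from a healthy member first
    sgdisk --replicate=/dev/NEWDISK /dev/HEALTHYDISK
    sgdisk --randomize-guids /dev/NEWDISK
    zpool replace rpool OLD-PARTITION /dev/disk/by-id/ata-NEWDISK-part3   # partition number is layout-dependent
    grub-install /dev/NEWDISK            # reinstall the boot loader once resilvering completes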