
I created a zpool while booted from a Linux Mint liveCD (with the ZFS packages temporarily apt-installed), passing `ashift=9` on the command line because my ST4000NM0033 drives (8 of them) have 512B sectors. I also created some ZFS filesystems on the pool while still in the liveCD.
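For reference, the create command was along these lines (a sketch, not the verbatim invocation; the /dev/mapper/luks_root_* names match the LUKS mappings shown further down):

    zpool create -o ashift=9 pool0 \
        raidz1 /dev/mapper/luks_root_sda /dev/mapper/luks_root_sdb \
               /dev/mapper/luks_root_sdc /dev/mapper/luks_root_sdd \
        raidz1 /dev/mapper/luks_root_sde /dev/mapper/luks_root_sdf \
               /dev/mapper/luks_root_sdg /dev/mapper/luks_root_sdh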

While still running the liveCD, I was able to verify the pool was using ashift=9 by running `zdb -e -C pool0 | grep ashift`. I had to use the `-e -C pool0` options because without them I was getting a `cannot open /etc/zfs/zpool.cache` error.
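Concretely, the check and its (abbreviated) output looked like:

    zdb -e -C pool0 | grep ashift
            ashift: 9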

But once I installed and rebooted into the real OS (which is ZFS on root) and re-ran `zdb | grep ashift`, it reports `ashift=12`.

I am also using LUKS under the vdevs. Each one has a detached header and keyfile, and I boot the system from a USB key with grub/efi/boot on it.
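For context, each disk gets opened roughly like this before the pool is imported (the header and keyfile paths are placeholders, not my actual layout):

    cryptsetup open --type luks --header /path/to/sda.header --key-file /path/to/sda.key /dev/sda luks_root_sda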

The zpool is a 2x striped 4-drive RAIDZ1 arrangement.

zpool details:

Click for a detail pic; it would not paste correctly.

Here are the results of `zdb` on the running system:

version: 5000
name: 'pool0'
state: 0
txg: 331399
pool_guid: 4878817387727202324
errata: 0
hostname: 'shop'
com.delphix:has_per_vdev_zaps
vdev_children: 1
vdev_tree:
    type: 'root'
    id: 0
    guid: 4878817387727202324
    children[0]:
        type: 'raidz'
        id: 0
        guid: 4453362395566037229
        nparity: 1
        metaslab_array: 138
        metaslab_shift: 36
        ashift: 12
        asize: 7996794994688
        is_log: 0
        create_txg: 4
        com.delphix:vdev_zap_top: 129
        children[0]:
            type: 'disk'
            id: 0
            guid: 17425041855122083436
            path: '/dev/mapper/luks_root_sda'
            whole_disk: 0
            DTL: 179
            create_txg: 4
            com.delphix:vdev_zap_leaf: 130
        children[1]:
            type: 'disk'
            id: 1
            guid: 14306620094487281535
            path: '/dev/mapper/luks_root_sdb'
            whole_disk: 0
            DTL: 178
            create_txg: 4
            com.delphix:vdev_zap_leaf: 131
        children[2]:
            type: 'disk'
            id: 2
            guid: 16566898459604505385
            path: '/dev/mapper/luks_root_sdc'
            whole_disk: 0
            DTL: 177
            create_txg: 4
            com.delphix:vdev_zap_leaf: 132
        children[3]:
            type: 'disk'
            id: 3
            guid: 542095292802891028
            path: '/dev/mapper/luks_root_sdd'
            whole_disk: 0
            DTL: 176
            create_txg: 4
            com.delphix:vdev_zap_leaf: 133
        children[4]:
            type: 'disk'
            id: 4
            guid: 14142266371747430354
            path: '/dev/mapper/luks_root_sde'
            whole_disk: 0
            DTL: 175
            create_txg: 4
            com.delphix:vdev_zap_leaf: 134
        children[5]:
            type: 'disk'
            id: 5
            guid: 9998698084287190219
            path: '/dev/mapper/luks_root_sdf'
            whole_disk: 0
            DTL: 174
            create_txg: 4
            com.delphix:vdev_zap_leaf: 135
        children[6]:
            type: 'disk'
            id: 6
            guid: 9268711926727287907
            path: '/dev/mapper/luks_root_sdg'
            whole_disk: 0
            DTL: 173
            create_txg: 4
            com.delphix:vdev_zap_leaf: 136
        children[7]:
            type: 'disk'
            id: 7
            guid: 16360862201213710466
            path: '/dev/mapper/luks_root_sdh'
            whole_disk: 0
            DTL: 172
            create_txg: 4
            com.delphix:vdev_zap_leaf: 137
features_for_read:
    com.delphix:hole_birth
    com.delphix:embedded_data

UPDATE: Checking the ashift value directly on the device shows the expected ashift=9 value. Not sure why the upper-level value is different.

zdb -l /dev/mapper/luks_root_sda

LABEL 0

version: 5000
name: 'pool0'
state: 0
txg: 2223
pool_guid: 13689528332972152746
errata: 0
hostname: 'shop'
top_guid: 8586701185874218688
guid: 11289841240384277392
vdev_children: 2
vdev_tree:
    type: 'raidz'
    id: 0
    guid: 8586701185874218688
    nparity: 1
    metaslab_array: 142
    metaslab_shift: 37
    ashift: 9
    asize: 15901962272768
    is_log: 0
    create_txg: 4
    children[0]:
        type: 'disk'
        id: 0
        guid: 11289841240384277392
        path: '/dev/mapper/luks_root_sda'
        whole_disk: 0
        create_txg: 4
    children[1]:
        type: 'disk'
        id: 1
        guid: 7916996642850715828
        path: '/dev/mapper/luks_root_sdb'
        whole_disk: 0
        create_txg: 4
    children[2]:
        type: 'disk'
        id: 2
        guid: 5366943858334839242
        path: '/dev/mapper/luks_root_sdc'
        whole_disk: 0
        create_txg: 4
    children[3]:
        type: 'disk'
        id: 3
        guid: 3110382675821028014
        path: '/dev/mapper/luks_root_sdd'
        whole_disk: 0
        create_txg: 4
features_for_read:
    com.delphix:hole_birth
    com.delphix:embedded_data
labels = 0 1 2 3
  • Can you issue `zdb -e -C pool0` (ie: not using cache file) on your running system and paste output here? – shodanshok Dec 11 '19 at 16:35
  • I get error `zdb: can't open 'pool0': File exists` – GenerationTech Dec 11 '19 at 17:57
  • Someone else mentioned checking the `ashift` value on the devices directly by using `zdb -l /dev/DEVICE | grep ashift` commands. When I do that, I see that each device does have an `ashift=9` value as expected. Maybe the pool-level value of ashift can be different than the lower-level vdev value? Maybe the pool-level value is just the default value that will be used as new vdevs are created and not overridden by a command-line `-o ashift=9` option? – GenerationTech Dec 11 '19 at 19:13

1 Answer


This problem was caused by a stale /etc/zfs/zpool.cache that was copied into place when I replaced the 8x1TB setup with the new 8x4TB array and copied the original / (root) filesystem back into place.

It seems ZFS did not update zpool.cache, and plain `zdb` reads from that file, whereas `zdb -l /dev/DEVICE | grep ashift` reads the label from the vdev directly and showed the expected ashift=9 value.
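Side by side, the discrepancy looks roughly like this (output abbreviated):

    zdb | grep ashift
            ashift: 12
    zdb -l /dev/mapper/luks_root_sda | grep ashift
        ashift: 9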

To correct the entire problem, I deleted zpool.cache and executed `zpool set cachefile=/etc/zfs/zpool.cache pool0` to regenerate it.
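In command form (run as root; the final zdb re-check is just to confirm the regenerated cache matches the on-disk labels):

    rm /etc/zfs/zpool.cache
    zpool set cachefile=/etc/zfs/zpool.cache pool0
    zdb | grep ashift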