I have a ZFS pool that currently occupies 100 GB. I increased the disk size to 150 GB, but I can't seem to get ZFS to use the entire disk.
I had the exact same issue yesterday with another server, and there a certain mixture of zpool set autoexpand=on, zpool export|import, zpool online -e, and reboots allowed me to fix it. But no matter what I do, it doesn't work on the current server.
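For reference, the sequence that worked on the other server looked roughly like this (a sketch, assuming the pool is named zdata on /dev/sdb, as in the output below — adjust names for your setup):

```shell
# Let the pool grow automatically when its underlying vdev gets bigger
zpool set autoexpand=on zdata

# Ask ZFS to expand the device and claim any newly available space
zpool online -e zdata /dev/sdb

# If the pool still reports the old size, re-import it
zpool export zdata
zpool import zdata

# SIZE should now reflect the larger disk
zpool list zdata
```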
The device with the issue is sdb; you can see from lsblk below that the partition is only 100 GB out of the available 150 GB.
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb 8:16 0 150G 0 disk
├─sdb1 8:17 0 100G 0 part
└─sdb9 8:25 0 8M 0 part
root@http-server-2:/home# parted -l
Model: Google PersistentDisk (scsi)
Disk /dev/sdb: 161GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 107GB 107GB zfs zfs-01a59d03c9294944
9 107GB 107GB 8389kB
UPDATE
more data:
zpool list
# zpool list
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
lxd 13.9G 394K 13.9G - 0% 0% 1.00x ONLINE -
zdata 99.5G 82.7G 16.8G - 49% 83% 1.00x ONLINE -
zpool status
# zpool status
pool: lxd
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
lxd ONLINE 0 0 0
/var/lib/lxd/disks/lxd.img ONLINE 0 0 0
errors: No known data errors
pool: zdata
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
zdata ONLINE 0 0 0
sdb ONLINE 0 0 0
autoexpand
# zpool get autoexpand
NAME PROPERTY VALUE SOURCE
lxd autoexpand off default
zdata autoexpand on local
expandsize
# zpool get expandsize zdata
NAME PROPERTY VALUE SOURCE
zdata expandsize - -
fdisk
# fdisk -l /dev/sdb
Disk /dev/sdb: 150 GiB, 161061273600 bytes, 314572800 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 0DA2A1D7-9A44-2E4C-856A-BB9EAEB283E0
Device Start End Sectors Size Type
/dev/sdb1 2048 209696767 209694720 100G Solaris /usr & Apple ZFS
/dev/sdb9 209696768 209713151 16384 8M Solaris reserved 1
I am on Google Cloud, and this is an Ubuntu VM instance; the ZFS pool is on a second disk that I attached to the server through "Google Cloud Platform - Compute Engine".
What's the right way to expand the ZFS partition in this case?
SOLUTION
Eventually I got it to work following @ewwhite's answer below. For completeness, here is how to delete the extra partition #9 and grow partition #1:
parted /dev/sdb rm 9
parted /dev/sdb resizepart 1 100%
and then a few rounds of zpool online -e => export pool => import pool, and it worked!
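Putting the whole fix together, the sequence looked roughly like this (a sketch, assuming pool zdata on /dev/sdb; partition #9 is the small Solaris reserved partition that sat at the old end of the disk and blocked partition #1 from growing):

```shell
# Remove the 8 MB reserved partition at the old end of the disk
parted /dev/sdb rm 9

# Grow the ZFS partition to the end of the enlarged disk
parted /dev/sdb resizepart 1 100%

# Tell ZFS to pick up the new partition size
zpool online -e zdata /dev/sdb

# If SIZE in `zpool list` is still unchanged, export and re-import the pool
zpool export zdata
zpool import zdata
```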