13

I have a ZFS pool that currently occupies 100 GB. I increased the disk size to 150 GB, but I can't seem to get ZFS to use the entire disk.

I had the exact same issue yesterday with another server, and there a certain mixture of zpool set autoexpand=on, zpool export|import, zpool online -e, and reboots allowed me to fix it. But no matter what I do, it doesn't work on the current server.
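
For reference, the rough sequence that worked on the other server was something like the following (using this server's pool and device names, zdata and sdb; I don't remember the exact order or how many repetitions it took):

zpool set autoexpand=on zdata   # allow the pool to grow when the vdev grows
zpool online -e zdata sdb       # ask ZFS to expand onto the new space
zpool export zdata
zpool import zdata              # re-import so labels and partitions are re-read
zpool list                      # check SIZE / EXPANDSZ afterwards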

The device with the issue is sdb; you can see from lsblk below that the partition is only 100 GB out of the available 150 GB.

# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sdb       8:16   0  150G  0 disk
├─sdb1    8:17   0  100G  0 part
└─sdb9    8:25   0    8M  0 part

root@http-server-2:/home# parted -l
Model: Google PersistentDisk (scsi)
Disk /dev/sdb: 161GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End    Size    File system  Name                  Flags
 1      1049kB  107GB  107GB   zfs          zfs-01a59d03c9294944
 9      107GB   107GB  8389kB

UPDATE

more data:

zpool list

# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
lxd    13.9G   394K  13.9G         -     0%     0%  1.00x  ONLINE  -
zdata  99.5G  82.7G  16.8G         -    49%    83%  1.00x  ONLINE  -

zpool status

# zpool status
  pool: lxd
 state: ONLINE
  scan: none requested
config:

        NAME                          STATE     READ WRITE CKSUM
        lxd                           ONLINE       0     0     0
          /var/lib/lxd/disks/lxd.img  ONLINE       0     0     0

errors: No known data errors

  pool: zdata
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zdata       ONLINE       0     0     0
          sdb       ONLINE       0     0     0

autoexpand

# zpool get autoexpand
NAME   PROPERTY    VALUE   SOURCE
lxd    autoexpand  off     default
zdata  autoexpand  on      local

expandsize

# zpool get expandsize zdata
NAME   PROPERTY    VALUE     SOURCE
zdata  expandsize  -         -

fdisk

# fdisk -l /dev/sdb
Disk /dev/sdb: 150 GiB, 161061273600 bytes, 314572800 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 0DA2A1D7-9A44-2E4C-856A-BB9EAEB283E0

Device         Start       End   Sectors  Size Type
/dev/sdb1       2048 209696767 209694720  100G Solaris /usr & Apple ZFS
/dev/sdb9  209696768 209713151     16384    8M Solaris reserved 1

I am on Google Cloud, and this is an Ubuntu VM instance. The ZFS pool is on a second disk that I attached to the server through "Google Cloud Platform - Compute Engine".

What's the right way to expand the ZFS partition in this case?

SOLUTION

Eventually I got it to work following @ewwhite's answer below. For completeness, here is how to delete the extra partition #9 and resize partition #1:

parted /dev/sdb rm 9
parted /dev/sdb resizepart 1 100%

and then a few rounds of zpool online -e, export pool, import pool, and it worked!
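
Spelled out with my pool and device names, that sequence was roughly (I may have repeated it more than once):

zpool online -e zdata sdb
zpool export zdata
zpool import zdata
zpool list zdata   # SIZE should now reflect the full disk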

justadev

2 Answers

12

It's normal to have partitions 1 and 9 with ZFS. If ZFS thinks it is using a "whole disk", then these partitions are created automatically. This is how non-multipath whole disks should be treated.

The reason for the 8 MB buffer (partition #9) is to allow for slightly different disk capacities in a physical setup. This isn't something you need to worry about when using zpool online -e, as it rewrites the partition table during expansion.

Disk /dev/nvme0n1: 960.2 GB, 960197124096 bytes, 1875385008 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt
Disk identifier: E63B403D-D140-A84B-99EB-56DEDC8B91E4


#         Start          End    Size  Type            Name
 1         2048   1875367935  894.3G  Solaris /usr &  zfs-aaba011d48bf00f6
 9   1875367936   1875384319      8M  Solaris reserve
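
If you want to double-check that ZFS considers the device a whole disk (and will therefore rewrite the partition table itself), one way is to look for the whole_disk flag in the pool configuration or the vdev label. This is just a quick sanity check, using the pool and device names from the question:

zdb -C zdata        # cached pool config; a whole-disk vdev shows "whole_disk: 1"
zdb -l /dev/sdb1    # or read the on-disk label of the ZFS data partition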

The order should be something like this (a consolidated command sketch follows the list):

  1. Rescan your disk: something like echo 1 > /sys/block/sdb/device/rescan.
  2. partprobe
  3. zpool online -e poolname sdb
  4. Reboot or reload ZFS module.
  5. zpool list
  6. Review the value of EXPANDSZ.
  7. zpool online -e poolname sdb

If this sequence doesn't work, just delete partition #9 and repeat the above.
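
Put together as shell commands, with the pool and device names from the question substituted in, the sketch looks like this (adjust the names to your setup):

echo 1 > /sys/block/sdb/device/rescan   # 1. tell the kernel the disk has grown
partprobe                               # 2. re-read the partition table
zpool online -e zdata sdb               # 3. ask ZFS to expand onto the new space
# 4. reboot, or reload the ZFS module if the size still doesn't change
zpool list zdata                        # 5./6. check SIZE and EXPANDSZ
zpool online -e zdata sdb               # 7. run the expansion again
# fallback: delete partition #9 and repeat the steps above
# parted /dev/sdb rm 9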

ewwhite
  • That's just a Solaris compatibility mode. *The whole disk* mode should have no partitions at all, it's called *dedicated mode*. – drookie Dec 20 '18 at 04:46
  • The ZFS developer I work with says whole disk mode will have the two partitions. It's preferred unless you have multipath devices comprising the pool. – ewwhite Dec 20 '18 at 05:42
  • Originally it won't unless the disk is part of a root pool with SMI/vtoc8 labels, and ZFS had this behavior even on Solaris 10 - it could use the *zfs dedicated mode* disks. So right now it's an urban legend. Even zfsonlinux developers state this as an **expected** but not *required* behavior (https://github.com/zfsonlinux/zfs/issues/94). Clearly the partitioning layer is redundant when the disk is not part of a root pool or dual/triple-boot configuration. And I just don't see how multipathing could be part of the equation. – drookie Dec 20 '18 at 06:00
  • I tried this answer (without deleting sdb9), and it didn't work, even after a reboot. I updated the question with `zpool list` and `zpool status` output. EXPANDSZ is -, what does it mean? I am a little afraid to delete partition #9, how can I know it is safe? In addition, how do I delete a partition on a disk that has ZFS on it? – justadev Dec 20 '18 at 09:18
  • By the way, `partprobe` does nothing, no output to stdout at all. Isn't it supposed to detect that the partition is not using the whole disk? At least on the other server it did, but not on the current one. – justadev Dec 20 '18 at 09:50
  • For me, ensuring that autoexpand was on and running `partprobe` was enough. No reboot needed. – gpothier Feb 25 '22 at 20:58
4

Try the command:

sudo partprobe

I believe partprobe will solve your problem too. https://linux.die.net/man/8/partprobe

You can also enable autoexpand on your pool to avoid needing to run sudo zpool online -e my-pool sdb:

zpool get autoexpand my-pool
sudo zpool set autoexpand=on my-pool

A full set of commands that might be useful:

sudo partprobe -s
sudo partprobe
lsblk
zpool list
sudo zpool online -e <POOL NAME> <DEVICE NAME>
# zpool online -e mypoolname sdb
zpool list
  • This works, thanks very much! `sudo partprobe` is the key. No need to delete any partitions. – EM0 May 12 '23 at 08:25