I have an AKS cluster and I'm trying to resize the PVC it uses. The PVC originally had a capacity of 5Gi and I have already resized it to 25Gi:
> kubectl describe pv
Name:            mypv
Labels:          failure-domain.beta.kubernetes.io/region=northeurope
Annotations:     pv.kubernetes.io/bound-by-controller: yes
                 pv.kubernetes.io/provisioned-by: kubernetes.io/azure-disk
                 volumehelper.VolumeDynamicallyCreatedByKey: azure-disk-dynamic-provisioner
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    default
Status:          Bound
Claim:           default/test-pvc
Reclaim Policy:  Delete
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        25Gi
...
> kubectl describe pvc
Name:          test-pvc
Namespace:     default
StorageClass:  default
Status:        Bound
Volume:        mypv
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/azure-disk
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      25Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Mounted By:    mypod
Events:        <none>
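For reference, growing the claim itself was just a matter of increasing the storage request on the PVC, roughly like this (kubectl edit on the claim works too):
> kubectl patch pvc test-pvc -p '{"spec":{"resources":{"requests":{"storage":"25Gi"}}}}'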
But when I call "df -h" in mypod, it still shows me 5Gi (see /dev/sdc):
/ # df -h
Filesystem                Size      Used Available Use% Mounted on
overlay                 123.9G     22.3G    101.6G  18% /
tmpfs                    64.0M         0     64.0M   0% /dev
tmpfs                     1.9G         0      1.9G   0% /sys/fs/cgroup
/dev/sdb1               123.9G     22.3G    101.6G  18% /dev/termination-log
shm                      64.0M         0     64.0M   0% /dev/shm
/dev/sdb1               123.9G     22.3G    101.6G  18% /etc/resolv.conf
/dev/sdb1               123.9G     22.3G    101.6G  18% /etc/hostname
/dev/sdb1               123.9G     22.3G    101.6G  18% /etc/hosts
/dev/sdc                  4.9G      4.4G    448.1M  91% /var/lib/mydb
tmpfs                     1.9G     12.0K      1.9G   0% /run/secrets/kubernetes.io/serviceaccount
tmpfs                     1.9G         0      1.9G   0% /proc/acpi
tmpfs                    64.0M         0     64.0M   0% /proc/kcore
tmpfs                    64.0M         0     64.0M   0% /proc/keys
tmpfs                    64.0M         0     64.0M   0% /proc/timer_list
tmpfs                    64.0M         0     64.0M   0% /proc/sched_debug
tmpfs                     1.9G         0      1.9G   0% /proc/scsi
tmpfs                     1.9G         0      1.9G   0% /sys/firmware
I have already destroyed my pod and even my deployment, but it still shows 5Gi. Any idea how I can use the entire 25Gi in my pod?
SOLUTION
Thank you mario for the long response. Unfortunately the AKS dashboard already showed me that the disk has 25GB. But calling the following returned 5GB:
az disk show --ids /subscriptions/<doesn't matter :-)>/resourceGroups/<doesn't matter :-)>/providers/Microsoft.Compute/disks/kubernetes-dynamic-pvc-27ee71a5-<doesn't matter> --query "diskSizeGb"
So I finally called az disk update --ids <disk-id> --size-gb 25. Now the command above returned 25 and I started my pod again. Since my pod uses Alpine Linux, the filesystem was not resized automatically and I had to do it manually:
/ # apk add e2fsprogs-extra
(1/6) Installing libblkid (2.34-r1)
(2/6) Installing libcom_err (1.45.5-r0)
(3/6) Installing e2fsprogs-libs (1.45.5-r0)
(4/6) Installing libuuid (2.34-r1)
(5/6) Installing e2fsprogs (1.45.5-r0)
(6/6) Installing e2fsprogs-extra (1.45.5-r0)
Executing busybox-1.31.1-r9.trigger
OK: 48 MiB in 31 packages
/ # resize2fs /dev/sdc
resize2fs 1.45.5 (07-Jan-2020)
Filesystem at /dev/sdc is mounted on /var/lib/<something :-)>; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 4
The filesystem on /dev/sdc is now 6553600 (4k) blocks long.
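That matches: 6553600 blocks × 4 KiB = 25 GiB, so the filesystem now spans the whole disk. For a quick sanity check from inside the pod (same mount point as in the df output above):
/ # df -h /var/lib/mydb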
Note: In my pod spec I temporarily set privileged mode to true:
...
spec:
  containers:
  - name: mypod
    image: the-image:version
    securityContext:
      privileged: true
    ports:
...
Otherwise resize2fs failed with something like "no such device" (sorry, I don't remember the exact error message anymore - forgot to copy it).
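One more thing I noticed while reading up on this: for Kubernetes to grow the underlying Azure disk on its own, the StorageClass has to have allowVolumeExpansion enabled. I haven't verified this on my cluster, but a minimal sketch for the azure-disk provisioner would look like:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-expandable   # name is just an example
provisioner: kubernetes.io/azure-disk
allowVolumeExpansion: true
parameters:
  storageaccounttype: StandardSSD_LRS
  kind: Managed
With that in place, bumping the PVC should resize the disk itself (the volume still has to be detached for azure-disk, and the filesystem is grown on the next mount), so the manual az disk update step shouldn't be needed.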