
I must be missing the boat. I'm trying to add more space to the XFS partition of a virtual CentOS 7 server running inside VMware. I've added 10 GB of space to the guest's drive inside vSphere. The CentOS 7 server recognizes the larger disk, but I cannot seem to get LVM to recognize the new space. I'm sure it's something simple I've overlooked, but I need another set of eyes to point me in the right direction. I've followed this, but still no success.

[root@xxxxxxx ~]# dmesg |grep sd
[    1.057672] sd 1:0:0:0: [sda] 125829120 512-byte logical blocks: (64.4 GB/60.0 GiB)
[    1.057708] sd 1:0:0:0: [sda] Write Protect is off
[    1.057712] sd 1:0:0:0: [sda] Mode Sense: 31 00 00 00
[    1.057733] sd 1:0:0:0: [sda] Cache data unavailable
[    1.057735] sd 1:0:0:0: [sda] Assuming drive cache: write through
[    1.058000]  sda: sda1 sda2
[    1.058164] sd 1:0:0:0: [sda] Attached SCSI disk
[    1.425159] Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
[    1.503898] sd 1:0:0:0: Attached scsi generic sg0 type 0
[    1.635203] XFS (sda1): Mounting V5 Filesystem
[    1.683734] XFS (sda1): Ending clean mount

[root@xxxxxx ~]# lsblk 
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   60G  0 disk 
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   49G  0 part 
  ├─rootvg-root 253:0    0 45.1G  0 lvm  /
  └─rootvg-swap 253:1    0  3.9G  0 lvm  [SWAP]

[root@xxxxxxx ~]# pvscan
  PV /dev/sda2   VG rootvg          lvm2 [49.00 GiB / 4.00 MiB free]
  Total: 1 [49.00 GiB] / in use: 1 [49.00 GiB] / in no VG: 0 [0   ]

[root@xxxxxx ~]# vgs
  VG     #PV #LV #SN Attr   VSize  VFree
  rootvg   1   2   0 wz--n- 49.00g 4.00m

[root@xxxxxxx ~]# lvs
  LV   VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root rootvg -wi-ao---- 45.12g                                                    
  swap rootvg -wi-ao----  3.88g

[root@xxxxxxx ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/rootvg-root   46G  1.4G   44G   3% /
devtmpfs                 1.9G     0  1.9G   0% /dev
tmpfs                    1.9G     0  1.9G   0% /dev/shm
tmpfs                    1.9G  8.5M  1.9G   1% /run
tmpfs                    1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda1               1014M  139M  876M  14% /boot
tmpfs                    380M     0  380M   0% /run/user/38679
  • Did you already extend the LV? What was your original space on the drive so we can compare? https://www.centos.org/docs/5/html/Cluster_Logical_Volume_Manager/lv_extend.html – Patrick Jun 07 '17 at 14:46
  • My original partition was 50GB. – fabricatedmind Jun 07 '17 at 15:20
  • Sounds like you need to extend the LV then. You can follow that guide I posted to do so. Adding disk to your PV, then to your volume group, does not automatically add it to the filesystem. You'll need to extend your LV, then use `xfs_growfs` to fill the LV. – Patrick Jun 07 '17 at 16:23
  • Forgive my ignorance, but doesn't the PV need to show free space to extend the VG and LV? – fabricatedmind Jun 07 '17 at 16:57
  • Ah, I missed that. You need to use `pvcreate` on that new disk. This guide has it: https://www.rootusers.com/how-to-increase-the-size-of-a-linux-lvm-by-adding-a-new-disk/ – Patrick Jun 07 '17 at 17:25
  • Yeah, so this isn't a "new disk". I went inside vCenter and updated the current disk to give it 10 more GB. In theory I should be able to use `xfs_growfs /` and it should add the leftover free space to the LVM. Obviously I'm doing something incorrectly. – fabricatedmind Jun 07 '17 at 18:05
  • You need to edit your partition table to reflect the new size before LVM will be able to see it. – yoonix Jun 07 '17 at 18:16
  • Okay, so to clarify, `sda` was originally 50GB but you added 10GB to make it 60GB, correct? If that's how I'm reading it then yoonix is correct: you need to grow that `sda2` partition to take in the extra 10GB, and then LVM should reflect it. I'm not sure why I thought it was a separate disk. Sorry for the confusion. – Patrick Jun 07 '17 at 19:03

1 Answer


Thanks guys. I figured it out using a VirtualBox VM so I didn't break anything. Anyhow, the steps were as follows, once you have added space to your VMware disk through vCenter/vSphere:

`fdisk /dev/sda` – delete and re-create the `sda2` partition (same start sector, new larger end) and set its type to Linux LVM. It's probably good practice to make backups before this step. A sketch of the dialogue is below.
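
For reference, roughly what the fdisk session looks like. This is a sketch assuming an MBR-labelled disk where `sda2` is the LVM partition; the critical part is that the new partition must start at the exact same sector as the old one, or the data is gone.

fdisk /dev/sda
p         # print the table first and note the Start sector of sda2
d         # delete a partition
2         # ...partition number 2 (sda2)
n         # create a new partition
p         # primary
2         # partition number 2
(start)   # first sector: must be the exact Start value recorded above
(enter)   # last sector: accept the default, i.e. the new end of the disk
t         # change a partition's type
2         # partition 2
8e        # Linux LVM
w         # write the new table and exit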

`reboot` – I had to reboot for the kernel to pick up the new partition table.
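
The table re-read can sometimes be done without a reboot, though on a disk holding the mounted root filesystem the kernel may refuse because the partitions are in use, so treat these as a maybe:

partprobe /dev/sda   # ask the kernel to re-read the partition table
partx -u /dev/sda    # alternatively, update the kernel's view of the partitions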

`pvresize /dev/sda2` – note it's the partition (`/dev/sda2`), not the whole disk, that is the PV here.
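
It's worth verifying before going further; after the resize, the new space should show up as free in both the PV and the VG:

pvs   # PFree should now show roughly 10G
vgs   # VFree on rootvg should show the same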

`lvresize /dev/mapper/rootvg-root /dev/sda2` – extends the LV by the free space on that PV. (My test VM's volume group was named `cl`, so there the command was `lvresize /dev/mapper/cl-root ...`; on the server above the group is `rootvg`.)
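
An equivalent, arguably clearer, single command (assuming the `rootvg`/`root` names from the question): extend the LV into all free extents, and with `-r` let LVM also grow the filesystem for you, which makes the separate `xfs_growfs` step unnecessary:

lvextend -r -l +100%FREE /dev/rootvg/root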

`xfs_growfs -d /` – grows the mounted XFS filesystem to fill the LV.
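
A final `df -h` confirms the extra space landed on the root filesystem:

df -h /   # Size for /dev/mapper/rootvg-root should now be ~10G larger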

This worked, but I was under the impression you could expand XFS filesystems in real time with no reboot. In fact the XFS grow itself is online; the reboot was only needed so the kernel would re-read the partition table.