
I have a volume group (VG) with about 127 GB of free space. I am trying to extend a logical volume by 50 GB, but I am getting:

insufficient suitable allocatable extents

This is quite weird since there is enough space in the VG to allocate. Below you can find information about my LV setup:

root@server:~# df -h
Filesystem                        Size  Used Avail Use% Mounted on
/dev/dm-0                          19G  4.3G   15G  23% /
udev                               10M     0   10M   0% /dev
tmpfs                              19G  341M   19G   2% /run
tmpfs                              48G     0   48G   0% /dev/shm
tmpfs                             5.0M     0  5.0M   0% /run/lock
tmpfs                              48G     0   48G   0% /sys/fs/cgroup
/dev/mapper/data-lvm1   158G  135G   24G  86% /srv/mongodb/lvm1
/dev/mapper/data-lvm2  543G  509G   35G  94% /srv/mongodb/lvm2

root@server:~# lvs
  LV             VG    Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lvm1  data  -wi-ao---- 160.00g                                                    
  lvm2  data  -wi-ao---- 551.00g                                                    
  root  local -wi-ao----  19.31g                                                    
  swap  local -wi-ao----  11.18g                                                    

root@server:~# vgs
  VG    #PV #LV #SN Attr   VSize   VFree  
  data    2   2   0 wz--l- 838.24g 127.24g
  local   1   2   0 wz--n- 136.70g 106.21g

root@server:~# pvs
  PV         VG    Fmt  Attr PSize   PFree  
  /dev/sda1  local lvm2 a--  136.70g 106.21g
  /dev/sdb   data  lvm2 a--  279.36g 119.36g
  /dev/sdc   data  lvm2 a--  558.88g   7.88g

root@server:~# lvextend -L +50G /dev/data/lvm2 
  Insufficient suitable allocatable extents for logical volume lvm2: 10783 more required

root@server:~# vgscan 
  Reading all physical volumes.  This may take a while...
  Found volume group "data" using metadata type lvm2
  Found volume group "local" using metadata type lvm2

root@server:~# pvscan 
  PV /dev/sdb    VG data    lvm2 [279.36 GiB / 119.36 GiB free]
  PV /dev/sdc    VG data    lvm2 [558.88 GiB / 7.88 GiB free]
  PV /dev/sda1   VG local   lvm2 [136.70 GiB / 106.21 GiB free]
  Total: 3 [974.94 GiB] / in use: 3 [974.94 GiB] / in no VG: 0 [0   ]

root@server:~# lvscan 
  ACTIVE            '/dev/data/lvm1' [160.00 GiB] inherit
  ACTIVE            '/dev/data/lvm2' [551.00 GiB] inherit
  ACTIVE            '/dev/local/root' [19.31 GiB] inherit
  ACTIVE            '/dev/local/swap' [11.18 GiB] inherit
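For what it's worth, the "10783 more required" figure in the lvextend error can be reproduced from the pvs output above, assuming the default 4 MiB physical extent size (verifiable with vgs -o +vg_extent_size):

```shell
# Sanity check of the lvextend error, assuming 4 MiB physical extents.
extent_mib=4
needed=$(( 50 * 1024 / extent_mib ))    # +50 GiB -> 12800 extents
free_sdc=$(( 788 * 256 / 100 ))         # 7.88 GiB free on /dev/sdc -> 2017 extents
echo "$(( needed - free_sdc )) more required"   # prints "10783 more required"
```

In other words, LVM is only counting the 7.88 GiB free on /dev/sdc, not the 127 GiB free in the whole VG, which suggests something is restricting which PVs it will allocate from.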
giomanda
  • Take a backup of your data & then run sudo e2fsck -f /dev/data/lvm2 and try again. Do not run fsck on a live or mounted filesystem: fsck is used to check and optionally repair one or more Linux filesystems, and running it on a mounted filesystem can result in disk/data corruption, so please do not do it. You have two choices: (a) take the system down to single-user mode and unmount the filesystem, or (b) boot from the installation CD into rescue mode. – Ashish Karpe Jan 30 '17 at 10:17

2 Answers


The LVs in the data VG use the "inherit" allocation policy, and the VG's own policy is cling, which tries to allocate new extents on the same PV the LV already occupies. See lvm(8) for details.

To override it, you can either run the lvextend command with the additional --alloc normal option, or change the VG's default by running vgchange --alloc normal data.
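The policy in effect can be read straight off the vgs Attr column; a minimal sketch of decoding its fifth character, using the data VG's "wz--l-" attr string from the question:

```shell
# Decode the allocation-policy character (5th) of a vgs Attr string.
attr="wz--l-"    # the data VG's attr from the vgs output in the question
case "${attr:4:1}" in
  c) echo contiguous ;;
  l) echo cling ;;       # prints "cling" for wz--l-
  n) echo normal ;;
  a) echo anywhere ;;
esac
```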

Martian
  • Yeap, you are right! That little attribute letter (l) in the vgs output was the key. I got a bit confused because I expected the letter "c" to be used for the cling policy, but that letter is reserved for "(c)ontiguous". I used --alloc anywhere and it worked as well. Thanks! – giomanda Jan 30 '17 at 11:28
  • WARNING: Using anywhere can be dangerous, especially if you have RAID volumes - you could end up with multiple legs on the same disk – Martian Feb 12 '18 at 16:28

I got the same error message when I tried to extend one of my LVs, even though the policy was already set to "normal". In my case the problem was striping. I had a VG built on top of two PVs, and the LV was striped across those two PVs. When I added one more PV I still couldn't extend the LV, as there were no longer two PVs with enough free space to maintain the striping.

Available options for striped setup:

  1. Always add PVs in groups that match the stripe count (i.e. add two new PVs if you have a two-stripe LV).
  2. If the above is not possible, extend the LV with striping disabled (e.g. lvextend -L +1G -i 1 data/lvm2). This creates another segment with #Str=1, visible in the lvs --segments output. Note that there will probably be some performance difference when accessing data, depending on which segment it lands in.
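To see why option 1 matters, the arithmetic for a striped extension can be sketched like this (assuming a two-stripe LV and the default 4 MiB extent size):

```shell
# A two-stripe LV grows in pairs of extents: every extent added on one
# stripe's PV needs a matching free extent on the other stripe's PV.
stripes=2
extent_mib=4
grow_gib=50
total=$(( grow_gib * 1024 / extent_mib ))   # 12800 extents in total
per_pv=$(( total / stripes ))               # needed on EACH stripe PV
echo "$total total, $per_pv per PV"         # prints "12800 total, 6400 per PV"
```

So a +50 GiB extension fails as soon as either stripe PV has fewer than 6400 free extents, regardless of how much free space the VG has overall.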
skazi