
I believe we trashed some LVM thin pool metadata while moving two volume groups to a different machine. We are now trying to reactivate the volume groups on the original machine. The physical volumes, volume groups, and logical volumes all appear intact, but I can't activate either volume group.

Here is the error output when activating either volume group:

[root@erbium ~]# vgchange -ay vg_sfim
  Thin pool transaction_id is 649, while expected 647.
  0 logical volume(s) in volume group "vg_sfim" now active

[root@erbium ~]# vgchange -ay vg_fmrif
  Check of pool vg_fmrif/thinpool failed (status:1). Manual repair required!
  0 logical volume(s) in volume group "vg_fmrif" now active

Adding the --verbose and/or --ignoreactivationskip flags does not produce any more useful information.
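
For reference, the invocations I tried were along these lines (-K is the short form of --ignoreactivationskip):

[root@erbium ~]# vgchange -ay --verbose vg_sfim
[root@erbium ~]# vgchange -ay --ignoreactivationskip vg_sfim
[root@erbium ~]# vgchange -ay --verbose --ignoreactivationskip vg_fmrif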

Here is the output of pvs:

[root@erbium archive]# pvs
  PV         VG        Fmt  Attr PSize  PFree
  /dev/sda2  vg_erbium lvm2 a--  59.51g    0 
  /dev/sdb   vg_sfim   lvm2 a--  29.80t    0 
  /dev/sdc   vg_sfim   lvm2 a--   6.58t    0 
  /dev/sdd   vg_fmrif  lvm2 a--  36.39t    0

Here is the output of vgs:

[root@erbium archive]# vgs
  VG        #PV #LV #SN Attr   VSize  VFree
  vg_erbium   1   3   0 wz--n- 59.51g    0 
  vg_fmrif    1  14   0 wz--n- 36.39t    0 
  vg_sfim     2   3   0 wz--n- 36.39t    0 

Here is the output of lvs:

[root@erbium archive]# lvs
  LV                        VG        Attr       LSize  Pool          Origin   Data%  Meta%  Move Log Cpy%Sync Convert
  lv_home                   vg_erbium -wi-ao----  3.19g                                                               
  lv_root                   vg_erbium -wi-ao---- 32.68g                                                               
  lv_swap                   vg_erbium -wi-ao---- 23.64g                                                               
  lv_fmrif                  vg_fmrif  Vwi---tz-- 20.00t thinpool                                                      
  lv_fmrif_2014_11_20_23_00 vg_fmrif  Vwi---tz-k 20.00t thinpool      lv_fmrif                                        
  lv_fmrif_2014_11_21_23_00 vg_fmrif  Vwi---tz-k 20.00t thinpool      lv_fmrif                                        
  lv_fmrif_2014_11_22_23_00 vg_fmrif  Vwi---tz-k 20.00t thinpool      lv_fmrif                                        
  lv_fmrif_2014_11_23_23_00 vg_fmrif  Vwi---tz-k 20.00t thinpool      lv_fmrif                                        
  lv_fmrif_2014_11_24_23_00 vg_fmrif  Vwi---tz-k 20.00t thinpool      lv_fmrif                                        
  lv_fmrif_2014_11_25_23_00 vg_fmrif  Vwi---tz-k 20.00t thinpool      lv_fmrif                                        
  lv_users                  vg_fmrif  Vwi---tz--  1.00t thinpool                                                      
  lv_users_2014_11_21_23_00 vg_fmrif  Vwi---tz-k  1.00t thinpool      lv_users                                        
  lv_users_2014_11_22_23_00 vg_fmrif  Vwi---tz-k  1.00t thinpool      lv_users                                        
  lv_users_2014_11_23_23_00 vg_fmrif  Vwi---tz-k  1.00t thinpool      lv_users                                        
  lv_users_2014_11_24_23_00 vg_fmrif  Vwi---tz-k  1.00t thinpool      lv_users                                        
  lv_users_2014_11_25_23_00 vg_fmrif  Vwi---tz-k  1.00t thinpool      lv_users                                        
  thinpool                  vg_fmrif  twi---tz-- 36.39t                                                               
  lv_sfim                   vg_sfim   Vwi---tz-k 35.00t sfim_thinpool                                                 
  lv_sfim_2014_11_23_23_00  vg_sfim   Vwi---tz-k 35.00t sfim_thinpool                                                 
  sfim_thinpool             vg_sfim   twi---tz-- 36.39t

We have backups of what I believe is the metadata in /etc/lvm/archive, going back about a month. I'm hoping we can restore the metadata from that archive directory. One issue with this idea is that I can't simply restore it using vgcfgrestore, because we have thinly provisioned snapshots: I need to use the --force option, which the man page says is "Necessary to restore metadata with thin pool volumes", but which also comes with a big "WARNING: Use with extreme caution..." message.
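
If we go that route, my understanding is the procedure would look roughly like this (the archive file name below is a placeholder for whichever entry vgcfgrestore --list shows as predating the move; all LVs in the VG would need to be inactive, which they already are):

# list the archived metadata versions available for the VG
[root@erbium ~]# vgcfgrestore --list vg_fmrif

# restore a specific archived version; --force is required for thin pools
[root@erbium ~]# vgcfgrestore --force -f /etc/lvm/archive/vg_fmrif_NNNNN-XXXXXXXXXX.vg vg_fmrif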

Has anyone seen an issue or error message like this and/or have any advice?
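
For what it's worth, I assume the "Manual repair required!" message is pointing at something like the following (untested on my side; my understanding is that lvconvert --repair calls thin_repair from thin-provisioning-tools under the hood and swaps in a repaired pool metadata LV), but I'd like confirmation before running it against 36 TB of data:

# attempt an automated repair of each pool's metadata
[root@erbium ~]# lvconvert --repair vg_fmrif/thinpool
[root@erbium ~]# lvconvert --repair vg_sfim/sfim_thinpool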

joe
