I think I have accidentally deleted my LVM partition. I expanded the datastore in VMware (70 GB to 90 GB), then ran echo 1 > /sys/class/block/sdb/device/rescan.

After that I ran fdisk /dev/sdb. It showed sdb as 90 GB, but with this warning: "The old LVM2_member signature will be removed by a write command." I entered w, which I guess was a bad idea. Now none of these commands show anything: lvs, vgs, pvs.

With blkid I see my sdb UUID has changed:

/dev/sda1: UUID="b96e5429-d28e-4102-9085-4f303642a26e" TYPE="ext4" PARTUUID="0ab90198-01"
/dev/mapper/vg00-vol_db: UUID="4ed1927e-620a-4bf9-b656-c208f31e6ea3" TYPE="ext4"
/dev/sdb: PTUUID="d6c28699" PTTYPE="dos"

I ran vgcfgrestore vg00 --test -f vg00_00001-2029869851.vg, which is the most recent archive file from before today's changes (it dates from 2 months ago, when I created the LVM), but it returned:

  TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
  Couldn't find device with uuid 4deOKh-FeJz-8JqG-SAyX-KviL-UGu4-PtJ138.
  Cannot restore Volume Group vg00 with 1 PVs marked as missing.
  Restore failed.
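
In case it helps, this is how I can list the metadata archives LVM has kept (assuming they are in the default /etc/lvm/archive location):

  sudo vgcfgrestore --list vg00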

How can I recover from this mess? Thanks a lot.

2 Answers

I know it's an old topic, but I think this will be useful for others, as I just made the very same mistake and managed to fix it (quite) easily. Download the GParted live CD, boot from it, open a terminal and run

sudo hexedit /dev/sda3

(where sda3 is your broken partition) and scroll with the arrow keys to offset 0x00000200.

There you should have lines looking like these:

00000200  4c 41 42 45 4c 4f 4e 45  01 00 00 00 00 00 00 00  |LABELONE........|
00000210  fe b5 f2 9a 20 00 00 00  00 00 00 00 00 00 00 00  |.... ...........|
00000220  42 32 42 4e 66 78 44 54  41 79 55 67 57 65 77 41  |B2BNfxDTAyUgWewA|
00000230  70 4e 32 42 57 4e 4f 64  52 36 6e 74 55 6f 44 4d  |pN2BWNOdR6ntUoDM|

Replace the line at offset 0x00000210 with this:

00000210  fe b5 f2 9a 20 00 00 00  4c 56 4d 32 20 30 30 31  |.... ...LVM2 001|

Press F2 to save, Ctrl+X to exit, and reboot. Voilà!
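
As a sanity check before and after the edit (a sketch, assuming the broken PV really is /dev/sda3), you can dump the second 512-byte sector, where LVM keeps this LABELONE header, and confirm the type field now reads LVM2 001:

# offsets in this output start at 0, which corresponds to 0x200 on the device
sudo dd if=/dev/sda3 bs=512 skip=1 count=1 2>/dev/null | hexdump -C | head -n 4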

Start logging your terminal session to have a record of what you did.
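
For example (the log file name is just a placeholder):

 script -a lvm-recovery.log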

Start your backup restore procedures in case data was lost. Request tapes or archive storage. Do not start copying data yet.

Consider imaging the broken PV's block device, to protect against a repair making things unrecoverable. Especially if it is easy to do, like with a storage array snapshot.
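
If a snapshot is not an option, a plain dd image works too; a sketch, assuming the broken PV is the whole of /dev/sdb and the destination path is only an example with enough free space:

 dd if=/dev/sdb of=/mnt/backup/sdb.img bs=1M conv=noerror,sync status=progress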

Follow a procedure to restore metadata on an LVM physical volume. Fill in the commands with the broken PV, being aware that an incorrect value may lose data. In particular, double check whether the block device is the entire disk /dev/sdb or some partition like /dev/sdb1; you didn't print the partition table, so I cannot be sure (see the lsblk check after the command list).

 pvcreate --uuid 4deOKh-FeJz-8JqG-SAyX-KviL-UGu4-PtJ138 --restorefile vg00_00001-2029869851.vg blockdevice
 vgcfgrestore vg00
 lvchange --activate y vg00/vol_db
 fsck /dev/mapper/vg00-vol_db
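
To check, something like this shows whether sdb carries partitions (lsblk is part of util-linux, so it should already be installed):

 lsblk -o NAME,SIZE,TYPE,FSTYPE /dev/sdb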

Spot check data is there to your satisfaction.
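
For instance, a rough check, assuming the filesystem is not already mounted through fstab:

 mount /dev/mapper/vg00-vol_db /mnt
 ls /mnt
 df -h /mnt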


In some form of after action review, examine how data protection could be improved.

Check whether backups met your recovery point objective. Say you have a policy of being able to restore to last night's state: if a nightly backup was not there, or was not documented, fix that.

Consider creating future LVM PVs on entire disks. This avoids the partitioning complication and skips the fdisk steps; Linux LVM is perfectly fine with a PV directly on /dev/sdb.
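
For example, with a hypothetical new disk /dev/sdc:

 pvcreate /dev/sdc
 vgextend vg00 /dev/sdc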
