
I used a whole device as an LVM physical volume, like so:

sudo pvcreate /dev/xvdg

Unfortunately, while this was in use, I then accidentally overwrote some data (I think) by writing a new partition table:

sudo fdisk /dev/xvdg: add a new partition, write the partition table, delete the partition, write the empty partition table

This is where I am currently at. Everything still looks to be working, but I am afraid to restart, unmount, etc...

  • Is it broken?
  • If yes, what is the best way to fix it?

Thanks!

Cookie
  • I would like you to take a look at the 2nd answer, where I discuss the Parted Magic distro for solving this problem (aka a linux live distribution), and tell me what in it deserves a downvote? – ArrowInTree Dec 26 '12 at 00:59
  • @ArrowInTree: I didn't downvote it, but in my position where I have a live system that might not be able to recover from a reboot but is still working booting into something else is probably the last thing I would want to do - a backup would be more appropriate first, but in that case the pvcreate and vgreduce would be more effective and much less messy. Secondly if nothing is wrong, no action would need to be taken. In any case it seems a strange solution - if you wanted `Parted Magic`, why not `apt-get install` it? – Cookie Dec 26 '12 at 14:26
  • I suggested the live cdrom, which can inspect a broken disk. Particularly, "Ultimate boot cdrom", which includes Parted Magic... which you boot from a menu... It would have been helpful if you had said whether or not this was a data or os partition. apt-get is the *LAST* thing I would have suggested. – ArrowInTree Dec 26 '12 at 23:45
  • I just remembered something..the rest of you forgot: *debugfs* http://linux.die.net/man/8/debugfs :_debugfs [ -Vwci ] [ -b blocksize ] [ -s superblock ] [ -f cmd_file ] [ -R request ] [ -d data_source_device ] [ device ] _-w is for rw opens. -c for catastrophic mode. This is what I sort of had in mind with *dd* before people started dv'ing for fun: http://serverfault.com/questions/219234/lvm-dd-lvm – ArrowInTree Dec 27 '12 at 02:21

2 Answers


Assuming you were using the whole disk as the LVM PV, rather than an individual partition within it, it should generally be just fine: the LVM header is not in the first sector, where the partition table is, at least when using 512-byte sectors.

The partition table is in the first sector. See for example here: "Hard disks can be divided into one or more logical disks called partitions. This division is recorded in the partition table, found in sector 0 of the disk."

The LVM header is by default in the second sector. See for example here: "By default, the LVM label is placed in the second 512-byte sector. You can overwrite this default by placing the label on any of the first 4 sectors. This allows LVM volumes to co-exist with other users of these sectors, if necessary."

Beware: I am unsure what happens if the sector size fdisk uses is larger, say 1024 bytes - the LVM label might still be in the second 512-byte sector, and fdisk might overwrite the whole first 1024-byte sector?
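The two offsets can be demonstrated on a throwaway file rather than a real disk. This is only a sketch: `LABELONE` is the magic string LVM writes at the start of its label sector, and the `/dev/xvdg` check in the comment is an assumption about how you would inspect a real PV.

```shell
# Simulate a PV label on a scratch file: 4 zeroed 512-byte sectors,
# with LVM's "LABELONE" magic written at byte offset 512 (sector 1).
img=$(mktemp)
dd if=/dev/zero of="$img" bs=512 count=4 2>/dev/null
printf 'LABELONE' | dd of="$img" bs=512 seek=1 conv=notrunc 2>/dev/null

# Sector 0 -- the only sector fdisk rewrites -- holds no label:
dd if="$img" bs=512 count=1 2>/dev/null | grep -aq LABELONE || echo "sector 0: no label"

# Sector 1 does. On a real PV the equivalent check would be something like:
#   sudo dd if=/dev/xvdg bs=512 skip=1 count=1 | strings | grep LABELONE
dd if="$img" bs=512 skip=1 count=1 2>/dev/null | grep -aq LABELONE && echo "sector 1: label found"
rm -f "$img"
```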

As an aside: if you are unsure and have access to additional space (e.g. on Amazon EC2), you could always create a volume of identical size, run pvcreate on it, add it to the volume group, use pvmove to move the data to the new volume, and then vgreduce to remove the affected volume.
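That migration could look like the following sketch. `/dev/xvdh` and `vg0` are assumed names for the new volume and the volume group; the commands are only echoed here, since they modify the VG — drop the wrapper to run them for real.

```shell
# Dry-run sketch of moving data off the suspect PV (/dev/xvdg).
# /dev/xvdh and vg0 are placeholder names -- substitute your own.
run() { echo "would run: $*"; }   # replace the body with "$@" to actually execute

run pvcreate /dev/xvdh           # initialise the new, identically sized volume as a PV
run vgextend vg0 /dev/xvdh       # add it to the volume group
run pvmove /dev/xvdg /dev/xvdh   # move all allocated extents off the suspect PV
run vgreduce vg0 /dev/xvdg       # remove the now-empty suspect PV from the VG
```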

psusi
  • Retaliatory dv without reason is bad M'kay? – psusi Dec 26 '12 at 03:23
  • what part of *FDISK* is unclear? dd from _partition magic_ would at least allow the poster to get some of the data back... – ArrowInTree Dec 26 '12 at 03:27
  • @ArrowInTree, what part is unclear to you? `fdisk` writes to the first sector of the drive only. Since that is not where the LVM header is, there is no harm. – psusi Dec 26 '12 at 03:37
  • He did this while it was mounted. So, under a flag of caution, why not *INSPECT* the disk via a live session like Partition Magic... why not give the original poster some idea what to look for...? – ArrowInTree Dec 26 '12 at 03:47
  • @ArrowInTree, there's no need to inspect anything if nothing is wrong. Either the system comes back up, or then you can start to try to recover. The question was firstly, is there anything wrong. The answer is no. – psusi Dec 26 '12 at 03:52
  • Damn the consequences... who/what is "M'kay" ? – ArrowInTree Dec 26 '12 at 03:54
  • It is how Mr. Mackey on Southpark says Okay. – psusi Dec 26 '12 at 03:56
  • @Cookie, good point... it appears LVM does (incorrectly) still put its label at an offset of 512b even though you are using a larger sector size. – psusi Dec 26 '12 at 14:57

Yeah, in 99.99% of cases it is broken. The reason is that you have overwritten the partition table. The LVM metadata resides in the second 512-byte sector of the PV, so if the new partition creation touched those sectors, your metadata has been wiped out. Essentially, a restart or unmount will screw things up.

There are two possible (though perhaps not feasible) hacks.

1) If you know the exact partition table of the last known good filesystem, you can run fdisk and try to recreate it exactly as it was. You have to know at which sectors the old filesystem used to start and end. Create the partition as before and it might work out.
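As a sketch of hack 1: sfdisk (a non-interactive sibling of fdisk) can dump and replay a partition table exactly, which makes "recreate it as it was" less error-prone. The commands are echoed rather than executed here, and the backup filename is only an example.

```shell
run() { echo "would run: $*"; }   # dry run; replace the body with eval "$@" to execute

# If a dump of the old table exists (or can be written by hand from the known
# start/end sectors), sfdisk can replay it exactly:
run "sfdisk --dump /dev/xvdg > xvdg-table.backup"   # taken before the mishap
run "sfdisk /dev/xvdg < xvdg-table.backup"          # replay it afterwards
```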

2) If things don't work out this way, there is another workaround using pvcreate. Your last known LVM backup will be stored in a /etc/lvm/archive/volume_group_name_XXXX.vg file. You need to get the UUID of the PV from there. Then, if things are in your favour, you can do this:

pvcreate --uuid <put_uuid_here> --restorefile /etc/lvm/archive/volume_group_name_XXXX.vg <physical-volume-name>

But if you can, please back up your data first. pvcreate doesn't touch user data, it only deals with metadata, but if fsck finds any inconsistency at boot time, it can throw you out with filesystem errors and a potentially unrecoverable disk.
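Put together, the recovery could look like this sketch. The archive path and volume group name are placeholders taken from the answer, and the commands are echoed rather than executed, since they rewrite metadata.

```shell
# Dry-run sketch of restoring PV metadata from an LVM archive file.
archive=/etc/lvm/archive/volume_group_name_XXXX.vg   # placeholder path
run() { echo "would run: $*"; }   # replace the body with "$@" to actually execute

# The PV's UUID is recorded in the archive: look for the id = "..." line
# near the matching device = "/dev/xvdg" entry.
run pvcreate --uuid "<uuid-from-archive>" --restorefile "$archive" /dev/xvdg
run vgcfgrestore -f "$archive" volume_group_name   # restore the VG metadata
run vgchange -ay volume_group_name                 # reactivate the logical volumes
```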

Soham Chakraborty