
So, I have gotten myself into a particular predicament. I have seen pieces of possible solutions to my problem, but I am missing some parts in between.

The basis of my problem is that I have an old ESXi-managed server with a few critical Debian VMs, and a new ESXi server on which I want to host these VMs. The servers are in separate data centers, and while the actual used size of the VMs is only a few GB each, they are set up as encrypted LVMs, so ESXi sees each of them as a fully filled 3 TB drive. Ideally I would like to create a copy of the non-critical parts of these VMs and then, at some point, announce downtime, freeze them, and transfer the critical parts. If the disks were not encrypted I could just shrink the drives, but my understanding is that shrinking them would require shutting down the servers, which is less than ideal. As such, these are the paths I see I could take:

  1. Manually transfer each 3 TB VMDK file (extremely slow)
  2. Take downtime and resize to make the transfer nicer (downtime is not ideal)
  3. Use some combination of dd, sfdisk, LVM tools, and dump to transfer the contents over to new VMs

I would love to use option 3, but I am honestly unsure exactly how I would do this, or what the best way of doing it is that would preserve the LVM and encryption setup.

Whinis

3 Answers


Because of the encryption, you cannot migrate only the "useful" part of the disk with tools that look at the VM from the "outside". This includes vMotion, Veeam B&R, and the like.

The only thing that comes to my mind is a migration performed with the free VMware Converter: it allows you to perform a "P2V" live migration by looking at the VM from the "inside".

Install it on a Windows VM that can reach both the source VM and the destination ESXi host, select the option to migrate a "powered on" Linux machine, and supply the root credentials of both the VM and the ESXi host. It will log in to the machine and perform the migration from the "inside", see that the disks are only a few GB full, and transfer only that data. I suspect that if you select "infrastructure" instead, the Converter will try to take advantage of the fact that the VM is already in the infrastructure, and that's bad in your specific case.

I have never tried this with an encrypted disk, at home or in production, but I have performed a P2V live migration of a 1 TB disk from a physical host to an ESXi host, and the migration over 1 GbE took only around 40 minutes, while the estimated raw time to transfer a full 1 TB of data over a gigabit link is around 3 hours. So it performed something like a file-by-file copy.

xmas79
  • So, I managed to use the standalone Converter on a Windows VM in the new location to do the transfer. Sadly, it does not appear to have transferred properly: it does have disk encryption enabled and appears to boot, but incompletely, as no network services come online. – Whinis Dec 04 '17 at 12:19
  • Sorry, it does not have encryption enabled, but it did copy the LVM and does try to load the encryption module at boot. – Whinis Dec 04 '17 at 12:33

So, this is what worked best after the P2V failed.

  1. Make a copy of the VM on the destination with working LVM encryption.
  2. Make a second, helper VM and mount the copy's encrypted LVM on it at /mnt (see the sketch after this list).

    Important: this is so that the copied server itself is not running.

  3. Copy SSH keys between the servers' root users to prevent access issues.
  4. Run the following command:

    rsync -aHxvK --numeric-ids --progress \
        --exclude=/etc/fstab --exclude=/etc/crypttab \
        --exclude=/etc/initramfs-tools/conf.d/* \
        --exclude=/etc/network/* --exclude=/mnt/* \
        --exclude=/dev/* --exclude=/proc/* --exclude=/sys/* \
        --exclude=/tmp/* --exclude=/boot/* --exclude=/root/* \
        root@1.2.3.4:/* /mnt/
    
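For steps 2 and 3, a minimal sketch of what I ran on the helper VM, assuming the copy's disk shows up as /dev/sdb1 and the volume group and logical volume are named vg0 and root (the device, VG, LV names and the 1.2.3.4 source address are all illustrative):

    # Open the LUKS container (prompts for the passphrase)
    cryptsetup luksOpen /dev/sdb1 crypt_root

    # Activate the volume group inside it and mount the root LV at /mnt
    vgchange -ay vg0
    mount /dev/vg0/root /mnt

    # Put the helper VM's root key on the source so rsync can pull without a password
    ssh-copy-id root@1.2.3.4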

This will copy most of the non-changing files and give you a functioning "copy" of the server. Most of this rsync invocation is shown in various guides online, but I found that excluding /etc/crypttab is needed for encrypted volumes or the copy doesn't boot, and excluding the initramfs config is needed or you get console spam on boot.

Once this is done, you schedule a short downtime, shut down major services like database and web servers, do a final transfer of those directories, then bring the new server up and point your endpoints at it.
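That final pass might look something like this (the service names and data directories are only examples; use whatever your VMs actually run):

    # On the source VM: stop everything that writes to disk
    systemctl stop mysql apache2

    # From the helper VM: final delta sync of the hot directories into the copy
    rsync -aHxvK --numeric-ids --progress root@1.2.3.4:/var/lib/mysql/ /mnt/var/lib/mysql/
    rsync -aHxvK --numeric-ids --progress root@1.2.3.4:/var/www/ /mnt/var/www/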

Whinis

One possible solution:

Build a new, empty VM on the target host.

Build or reuse a simple helper VM.

Connect the disks of the target VM to the helper VM, build partitions and filesystems on them, and mount them. If an old (0.xx) GRUB version is used, use -I 128 when making your filesystems!
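As a sketch of that preparation, assuming the target VM's disk appears as /dev/sdb on the helper VM and a single root partition is enough:

    # One bootable Linux partition spanning the disk
    sfdisk /dev/sdb <<'EOF'
    ,,L,*
    EOF

    # GRUB 0.xx cannot read ext filesystems with 256-byte inodes,
    # hence -I 128; omit it if the copy will use a modern GRUB
    mkfs.ext4 -I 128 /dev/sdb1
    mount /dev/sdb1 /mnt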

From the helper VM, copy as much as you can of the running system (exclude /proc and /sys!) via rsync into the target VM's filesystems. You might need --numeric-ids, -H, and --sparse or --inplace. Use --delete thoughtfully.
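A sketch of such a copy, run on the helper VM with the target filesystem mounted at /mnt (the source address is illustrative):

    # Pull the running system into the target; /proc and /sys must be excluded,
    # and /dev gets rebuilt later in the chroot
    rsync -aHxv --numeric-ids --sparse \
        --exclude=/proc/* --exclude=/sys/* --exclude=/dev/* \
        root@1.2.3.4:/ /mnt/
    # On repeat runs, --delete (used thoughtfully) drops files removed at the source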

Whenever you can afford it, quiesce the source system for a short downtime (shut down as many services as you can, especially database servers!) and do a final rsync.

Chroot into the copy. Fix the /proc and /sys mountpoints, /dev (MAKEDEV generic will usually give you a sensible, udev-independent set), and the bootloader (easiest is a simple grub-install - mind where your device files are pointing! - then use manual options on first boot and fix it properly from within the running system).
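A sketch of that fix-up stage, assuming the copy is mounted at /mnt and the new VM's boot disk is /dev/sda (adjust both to your setup):

    # Populate the copy's static /dev first (udev-independent node set)
    chroot /mnt sh -c 'cd /dev && MAKEDEV generic'

    # Bind-mount the live pseudo-filesystems, then enter the chroot
    mount --bind /dev /mnt/dev
    mount --bind /proc /mnt/proc
    mount --bind /sys /mnt/sys
    chroot /mnt /bin/bash

    # Inside the chroot: reinstall the bootloader on the target disk
    grub-install /dev/sda
    update-grub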

Exit the chroot, unmount and disconnect the disks. Boot the copy (network disconnected at first; you will likely have to tweak the bootloader options manually).

rackandboneman