I have the following problem: We have a dedicated (bare metal) hardware server (Debian 10) to which we have no direct physical access. Now I want to transfer all data and applications on this server to a VM and run it on a KVM host.

Why not install the application directly in the VM? The installation of this application (Perl stuff with an Apache web server, running on the same server for about 10 years) is so complex that an attempt to redo it would more likely break something, so nobody dares to try. But now we have to move, and for that reason we need some kind of smart workaround.

I thought about switching off all Perl and Apache services and transferring the hard disk via dd over the network - but the problem is that the target KVM host has less space than the size of sda on the bare metal server (in the end the data uses less space than is available; sda is just oversized).

The second option would be to install the same packages on the KVM with exactly the same version numbers (according to dpkg --list), disable all services on the bare metal server (to keep the data consistent), put /etc, /var, /usr and everything else that is important from the bare metal server into a tarball, and simply unpack it on the KVM. Of course I could also do this via rsync, but the principle is more or less the same.

What do you think about the last idea?

Do you have any other ideas?

How would you proceed with such a task?

manifestor
    I'd go using `rsync` through `ssh`. Another alternative would be to do a file-system backup-restore, if you have a backup application. – Krackout Dec 02 '20 at 14:02
  • @Krackout - thanks, can you recommend some file-system backup-restore tool? – manifestor Dec 02 '20 at 14:03
  • I've used full blown backup solutions in enterprises like Symantec/Veritas NetBackup, BackupExec, IBM TSM/Spectrum, Dell/EMC NetWorker, HP Data Protector. They can accomplish this task easily, but I suppose you need such an app to be already there; it's an overkill to buy and install such an app for just one machine. – Krackout Dec 02 '20 at 14:14
  • Why not use virt-p2v? – Michael Hampton Dec 02 '20 at 15:30
  • @anx - as you asked me to do it, I answered my own question :) – manifestor Dec 16 '20 at 13:27

2 Answers


(This answer discusses the block-device-level alternative; it is most appropriate if you want to keep the partitioning and boot-manager configuration intact while migrating to a virtual machine.)

The problem you describe does not necessarily occur in practice. I accidentally avoided it recently:

> but the problem is that the target KVM host has less space than the sda from the bare metal server

It is perfectly valid to create, loop-mount and edit partitions in a sparse file that is (even substantially) larger than the host file system, as long as you do not actually write data to the skipped areas of the file.
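A quick way to see this sparse-file behavior in action (the file name is arbitrary and the 100G size is just an example):

```shell
# Create a 100G sparse file; this works even on a much smaller
# filesystem, because no data blocks are allocated yet:
truncate -s 100G disk.img

# Apparent size vs. actually allocated space:
du -h --apparent-size disk.img   # ~100G
du -h disk.img                   # ~0

rm disk.img
```

This is also why `conv=sparse` matters in the dd pipeline: it turns runs of zeros in the input into unallocated holes in the output file, keeping the copied image small on the KVM host.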

What I did was roughly `fstrim / && systemctl stop appserver && mount -o remount,ro / && sync && dd bs=64k if=/dev/nvme0n1 | ssh vmhost dd bs=64k conv=sparse of=...` Then, on the VM host, I used `losetup` to make the image available to `fdisk` and `resize2fs` in order to change its (virtual) size. Since the image already contained mostly zeros toward its end, my operations on it did not increase its actual size by much.

The worst-case space requirement for this simplest approach is 3 times the data content of the old server: once to copy the sparse image, once to resize it (i.e. to move all of its contents to the beginning without punching new holes at its end), and once more to convert the raw disk image to the format used by the virtual machine management (or to move the then file-based image to its own logical volume).

Some things to note:

  • the manner in which you do this operation should depend on the software / setup of the planned virtualization (e.g. if your current system boots via EFI, but your virtualization is not preferring that, what is the point of doing a disk copy when you are going to have to redo boot loader related stuff anyway?)
  • fstrim (especially when combined with bad SSDs or RAID) can be dangerous to the point of data loss. If you are not confident it is a safe operation on your setup, use the alternative of writing out a huge file containing only null bytes; we only care about unused areas being detectable (i.e. zeroed out), not about whether the disk is aware of them.
  • Just remounting the root read-only gets the job done, but the result is as if the server had crashed before being restarted on the virtual machine. If you copy the image after stopping your applications, this might be good enough: after all, you already expect your server to be able to recover from a crash.
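The effect of the zero-fill alternative can be demonstrated on a scratch file (on the real server you would instead write a zero-filled file into the filesystem being imaged, `sync`, and delete it before copying; the paths below are examples):

```shell
# On the real server the idea is roughly (illustrative only, fills the disk):
#   dd if=/dev/zero of=/zerofill bs=1M; sync; rm /zerofill

# A file full of zeros, written normally, allocates real blocks:
dd if=/dev/zero of=zeros.raw bs=1M count=16 status=none

# Copying it with conv=sparse turns the zero runs into holes:
dd if=zeros.raw of=zeros.sparse bs=1M conv=sparse status=none

du -B1 zeros.raw      # ~16M allocated
du -B1 zeros.sparse   # ~0 allocated

rm zeros.raw zeros.sparse
```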
anx

As anx asked me to, I'll answer my own question :) My solution was not as sophisticated as anx's, but I'd like to share it with you anyway.

What I actually did first was check the inode and block sizes on both filesystems to make sure they are identical (using tune2fs).
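A minimal version of that check, assuming ext4 on both sides (the device names are examples; use whatever holds your data):

```shell
# On the physical machine:
tune2fs -l /dev/sda1 | grep -E 'Block size|Inode size'
# On the VM, e.g.:
tune2fs -l /dev/vda1 | grep -E 'Block size|Inode size'
```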

Then I turned off all services except the essential ones like sshd.

After that I decided to use apt-clone rather than copying the packages and binaries from one machine to the other:

# on the physical machine:
apt-get install apt-clone
apt-clone clone packages   # creates packages.apt-clone.tar.gz
# on the virtual machine:
apt-get install apt-clone
apt-clone restore packages.apt-clone.tar.gz
# check on the VM:
vimdiff <(dpkg --list) physical_machine_packages.txt

Next, I synced the data using rsync. The directories I synced:

  • /root
  • /etc (I excluded files like hostname, fstab, /default/grub, /network/interfaces and a bunch of other directories related to kernel/initram/lvm)
  • /usr (not everything, depends on the software you use)
  • /var (not everything, depends on the software you use)
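A sketch of such an rsync invocation, assuming ssh access from the VM to the old server (the host name `oldserver` and the exclude list are examples, not the exact ones I used):

```shell
# -a: archive mode; -H: hard links; -A: ACLs; -X: xattrs;
# --numeric-ids: keep numeric uid/gid instead of remapping by name
rsync -aHAX --numeric-ids \
    --exclude=hostname \
    --exclude=fstab \
    --exclude=default/grub \
    --exclude=network/interfaces \
    oldserver:/etc/ /etc/
```

Repeat per directory (/root, /usr, /var) with the appropriate excludes for your setup.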

The final step was to check whether the old physical machine's hostname or IP address appears in any configuration files:

find . ! \( -path "*proc*" -o -path "*sys*" -o -path "*var/mail*" -o -path "*var/spool/mqueue*" -o -path "*var/log*" \) -type f -exec grep -iH -- "x.x.x.x" {} \;

That's all. Everything works in the VM now. I hope this helps someone :)

manifestor