
I had a server running Windows 2003 that resided on a RAID5 array formed by a Promise FastTrak SX4100, which I thought was hardware-based because of its dedicated processor and buffer memory and its ability to boot DOS and Windows. As it turns out, however, that controller is only hardware-assisted, with the higher-level logic carried out by x86 firmware running on the host processor. Now that the aged motherboard (or processor?) has died and I have tried to migrate to more modern UEFI-oriented hardware, the RAID controller cannot initialize even in legacy (BIOS) mode: it hangs while scanning disks, i.e. its firmware receives control from UEFI/BIOS and prints its welcome messages on screen, but makes no progress in identifying the connected SATA drives.

Therefore I thought I might have better luck running Windows inside a virtual machine with the PCI card passed through to it, especially since NT 5.2 is unlikely to be compatible with modern chipsets, while Qemu provides a perfectly comfortable emulated environment in classic BIOS mode. The problem, however, is that SeaBIOS does not list the RAID controller as a bootable device, despite being able to communicate with it successfully.
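For reference, the relevant part of my xl domain configuration looks roughly like the sketch below; the PCI address `03:00.0` is just a placeholder, the real BDF comes from `lspci` on the host:

```
# HVM guest booting through SeaBIOS, with the RAID controller passed through.
# "03:00.0" is a placeholder BDF; find the real one with lspci on the host
# and make it assignable first: xl pci-assignable-add 03:00.0
builder = "hvm"
name    = "win2003"
memory  = 2048
vcpus   = 2
pci     = [ "03:00.0" ]
vnc     = 1
```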

More specifically:

  • The controller firmware does receive control and is able to initialize the RAID array before SeaBIOS displays its boot menu; however, that menu lacks any mention of the array disk.
  • The array configuration utility that can be invoked during the POST process clearly shows that the array is healthy.
  • When the Windows installer is run with the RAID drivers loaded, it also clearly displays the disk contents, proving the array's availability.

In other words, the array seems fully operational inside the VM environment, but for some reason it is not recognized by SeaBIOS as a bootable device, although the latter does support boot ROMs on PCI devices, as is evident from the iPXE network boot ROM built into SeaBIOS itself.

I also had an idea that GRUB might be of help here, i.e. booting from SeaBIOS into GRUB (on a small separate disk) and then chain-loading into Windows. However, I was not very successful at configuring it: Linux environments do not see the array due to the lack of drivers and thus cannot assist with menu creation, yet GRUB itself is neither friendly nor verbose. I could not even work out whether it actually sees the array as a disk drive, or needs some drivers to be loaded beforehand, or has any other prerequisites. Rescue kits like RescaTux or PartedMagic are not helpful either, since they are focused on repairing existing GRUB installations, not setting up new ones.
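For the record, the kind of menu entry I was trying to hand-craft looked roughly like this; the `(hd1)` device name is pure guesswork, since I never managed to confirm how (or whether) GRUB enumerates the array:

```
# grub.cfg on the small helper disk, which GRUB sees as (hd0).
# Guesswork: assumes the BIOS (i.e. the controller's option ROM)
# exposes the RAID array as the second drive, (hd1).
menuentry "Windows Server 2003 (chainload from RAID array)" {
    insmod part_msdos
    insmod ntfs
    set root=(hd1,msdos1)
    chainloader +1
}
```

In theory, running `ls` at the GRUB command line lists every drive GRUB can reach through BIOS disk services, so its output should at least show whether the array is visible at all.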

For reference, I experimented with Xen 4.7.2 using upstream Qemu 2.6.2 with SeaBIOS 1.9.1, on top of openSUSE 42.2 with Linux 4.4.62. Forums and mailing lists indicate that booting from PCI RAID has been possible in much older versions, for over a decade, so I assume my particular setup is to blame. But I cannot tell: is SeaBIOS actually capable of booting from my RAID controller?

The ultimate goal is to get the server back by any means available, including acquiring other compatible old hardware. But I have become curious about this specific technology, as virtual machines seemed a more versatile and future-proof way of prolonging the life of legacy systems.

Anton Samsonov
  • What is the model of the new motherboard? – Mikhail Khirgiy Jun 04 '17 at 14:36
  • @MikhailKhirgiy It is a Gigabyte H110-D3 (identified as H110-D3-CF in DMI) with firmware version F1 dated 2015-11-10; a couple of updates have been released since then, but none of them mentions any PCI-related issues. What makes you think the motherboard may be the one causing trouble? – Anton Samsonov Jun 04 '17 at 15:04
  • Try: 1. Load the default BIOS settings. 2. Set `Other PCI Device ROM Priority` to `Legacy only`. 3. Try to enter the RAID BIOS menu and check the disk array status. – Mikhail Khirgiy Jun 04 '17 at 15:31
  • @MikhailKhirgiy That is exactly what I have already tried. *Legacy only* is the only mode that allows real-mode ROMs to run; in the default mode, the controller firmware does not receive control at all. However, as I said, the controller hangs during the disk scan, which in turn blocks all further activity: it is impossible to enter the RAID configuration utility, impossible to enter UEFI/BIOS setup, and impossible to proceed with the system boot. Simply put, either the system starts in *UEFI only* mode and the controller is ignored, or in *Legacy only* mode and it hangs during initialization. – Anton Samsonov Jun 04 '17 at 15:40

2 Answers


Then you have only one way:

  1. You must find an old motherboard with a PCI 2.2 expansion slot and try to boot from the RAID controller.
  2. Then install the special drivers for all of KVM's virtual hardware (see below).
  3. Make a backup. Then boot from a Linux live CD (for example, SystemRescueCD) and use the GParted program to shrink the partitions without changing the start position of the boot and root partition (usually Windows's disk C:). You must end up with more than 8 GB plus the RAM size of free unpartitioned space on the logical RAID drive. Be sure that you can still boot after this.
  4. Duplicate the logical disk with the dd command to a file on a backup drive. Then connect the disks to the new motherboard and install Linux on a software RAID1.

For example: you have 4 × 120 GB disks in RAID5 and one logical drive, /dev/sda. You have only one partition, /dev/sda1, which is Windows disk C:; it is 300 GB after shrinking with GParted. You mount another backup drive with the command `mount /dev/sdb1 /mnt`, then copy the first 301 GB of the RAID disk to the backup drive with `dd if=/dev/sda of=/mnt/disk-c.img bs=4M count=77056`. When it has been copied, do `umount /mnt`.
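Putting the backup step together as one command sequence (device names are taken from the example above; verify yours with `lsblk` first):

```
# /dev/sda = the logical RAID5 drive, /dev/sdb1 = the backup drive (examples).
mount /dev/sdb1 /mnt
# 301 GiB in 4 MiB blocks: 301 * 1024 / 4 = 77056
dd if=/dev/sda of=/mnt/disk-c.img bs=4M count=77056
umount /mnt
```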

  5. Create a software RAID5 on the free space. Create an LVM group on it, and an LVM volume with a size larger than the image file.
  6. Copy the data from the image file to the LVM volume. Attach this volume as a raw disk to the virtual machine.

For example: create the logical volume with the command `lvcreate -L 302G -n win_disk vg0`. Mount the backup drive and copy the data to the volume with `dd of=/dev/vg0/win_disk if=/mnt/disk-c.img bs=4M count=77056`.
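The restore side as one sequence, again with the example names; the final line shows one possible way to hand the volume to the guest as a raw disk (plain Qemu syntax, adjust for libvirt as needed):

```
# Create the target LVM volume (slightly larger than the image) and restore it.
lvcreate -L 302G -n win_disk vg0
mount /dev/sdb1 /mnt
dd of=/dev/vg0/win_disk if=/mnt/disk-c.img bs=4M count=77056
umount /mnt

# Example of attaching the volume to the guest as a raw IDE disk:
# qemu-system-x86_64 ... -drive file=/dev/vg0/win_disk,format=raw,if=ide
```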

Then drop your RAID controller into the recycle bin.

PS:

When I created a Windows 2000 virtual machine, I assigned the following virtual hardware:

  • CPU - Hypervisor default
  • Disk - IDE raw
  • NIC - Device model rtl8139
  • Mouse and Keyboard - PS/2
  • Video - Cirrus

Drivers:

  • Realtek RTL8139C
  • Windows guest drivers for KVM libvirt
  • Old Intel Chipset Support
  • Win2000 Device Manager

Mikhail Khirgiy
  • 2,073
  • 11
  • 7
  • The whole point of booting from the RAID5 is to avoid reinstalling Windows after changing its boot device. Your solution misses that point, as installing new disk controller drivers in advance does not tell Windows where its root partition will be moved to. – Anton Samsonov Jun 05 '17 at 05:35
  • I wrote about that. I moved a physical Windows 2000 server to a virtual machine without any problem. You need to install the drivers for all the virtual devices before moving the server to the virtual machine. I also wrote not to change the start sectors of the boot and root Windows partitions. – Mikhail Khirgiy Jun 05 '17 at 05:44
  • Well, my previous experience was the opposite: the ability to boot Windows is not just about having drivers for the disk controller, but also about specifying which drive and partition it resides on. It is just like specifying the root= parameter for the Linux kernel, only not as easy as editing a GRUB config file. But I allow for the possibility that I may be wrong. – Anton Samsonov Jun 05 '17 at 05:51
  • @AntonSamsonov Windows 2000 works well in a virtual machine, but graphics are a little slow. – Mikhail Khirgiy Jun 05 '17 at 11:38
  • By the way, regarding steps 3 and 4: how would it be possible to access the RAID volume from Linux live environments when Linux and BSD do not in fact support that controller? They do not see it even as a bunch of raw drives, let alone as a single volume. – Anton Samsonov Jun 06 '17 at 05:18
  • Oops, that is a bad situation. Then use a WinPE CD to boot. I have seen that the dd utility has a Windows version. But I think that installing Windows on the VM and rescuing the data from a backup is a shorter way. – Mikhail Khirgiy Jun 06 '17 at 05:40
  • I found it at http://unxutils.sourceforge.net. For example `dd if=\\.\PhysicalDrive0 of=D:\win_disk.img bs=4M`. – Mikhail Khirgiy Jun 06 '17 at 07:47

Yes, SeaBIOS supports loading and running PCI option ROMs, and that apparently works here, as you can see the RAID controller's boot messages. The PCI ROM then has to register any bootable disks, which is not happening in your case. It could be a configuration issue: check the array configuration utility to see whether you can configure the boot volume there. It could also be some bug or incompatibility ...
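The SeaBIOS debug log shows what happens after the option ROM runs. With plain Qemu you can capture it via the isa-debugcon device (SeaBIOS writes its log to I/O port 0x402); note that the drive registration messages may require a SeaBIOS build with a higher debug level:

```
# Capture the SeaBIOS debug log; then look for the option ROM execution
# and whether any boot entry gets registered for the controller.
qemu-system-x86_64 \
    -chardev file,id=seabios,path=/tmp/seabios.log \
    -device isa-debugcon,iobase=0x402,chardev=seabios \
    ...   # the rest of your usual options (PCI passthrough etc.)
```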

If that doesn't work out, you can try something completely different: connect the disks to some Linux-supported SATA controller, then check whether dmraid is able to decode the RAID volume. If that works, you can attach it as a simple disk to your Win2k3 virtual machine.
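Roughly like this; dmraid uses the `pdc_*` prefix for Promise metadata, assuming it understands this particular FastTrak format at all:

```
# With the disks moved to a plain SATA controller:
dmraid -r        # list RAID metadata found on the raw disks
dmraid -s        # show discovered RAID sets and their status
dmraid -ay       # activate them; volumes appear under /dev/mapper/
ls /dev/mapper/  # expect something like pdc_* for a Promise set
```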

Gerd Hoffmann
  • As stated multiple times, that RAID controller is fully compatible with a regular BIOS and is thus visible as a boot device there. So it may be SeaBIOS that is non-compliant in the first place. As for *nix compatibility, in theory Linux should support that controller in JBOD mode and FreeBSD should support it fully, but in practice they do not at all. So I am not very optimistic about dmraid. Moreover, my question is not about getting the data back, but about bringing the entire server back without having to reinstall Windows. – Anton Samsonov Jun 05 '17 at 05:44
  • In theory it is totally compatible. In practice it obviously is not, otherwise it would have worked with the new mainboard. It is not clear who is non-compliant here: it could be SeaBIOS, but it could be the RAID option ROM too. – Gerd Hoffmann Jun 05 '17 at 06:44