We currently test a very large number of devices, with one PC per 10 devices or so (we use two USB hubs per PC to break out to the devices). We have a huge number of PCs and it's turning into a management nightmare. I've been thinking about how we might virtualize our infrastructure, but I'm running into the problem of how to get lots of USB connectivity from a VM to a USB hub in another room where the devices are connected.

I was thinking that we could set up a server room to host the VM cluster, replacing the PCs, but I need a way to make 2 USB connections per VM available in the other rooms where all the devices are located. I was hoping people would have some suggestions (technologies/communication protocols/etc.) on how I could do that, because the only option I've found is network-attached USB, which seems doable, but for the one vendor I found (digi.com), it only supports USB2. Plus, Digi is the only vendor offering that technology, which makes me really nervous when we're building out at the scale we're considering. I was hoping to find alternate means of connectivity (PCIe over Ethernet?) that would let me do this more effectively, or at least with broader vendor support. I'd also like to be able to support USB3.

Ideally, I'd have virtualized USB ports on the VMs presented as physical USB ports in the other room - that's what I really want. I also thought about having a large chassis with a bunch of USB cards, going from the cards to USB-over-fiber adapters, but that introduces a lot of connections. Another thought is moving an external PCIe chassis, loaded with USB cards, into the room with the devices (PCIe over Ethernet?) and breaking it out from there. I'm open to suggestions!

I'm also open to putting a VM cluster in the room where the devices are housed if that's really the only way to get the I/O from the server rack to the devices, but I'd really prefer that the cluster live in a server room with the I/O branching out from there.

Just to clarify, I'd like to have physical USB ports in one room and virtual USB ports show up on the VMs. How I get from one place to the other is completely open to suggestion.

Mike
  • This is off-topic because it's a shopping question, i.e. what can I buy to do what I need... But it's worth remembering that Digi has been around for a long, long, long time, so if they're the only vendor you've found, at least they've got a reliable history. – Ward - Trying Codidact Nov 15 '18 at 06:24
  • Ward, thanks for the input! I also updated my question so hopefully it's more appropriately worded. – Mike Nov 15 '18 at 16:12
  • 1
    You can use USB/RJ45 product, you can google to find some model – yagmoth555 Nov 15 '18 at 16:14
  • I've looked at those - so I guess at the server frame I could install a bunch of USB cards and then use extenders. That's a lot of interconnects, but it's worth testing. – Mike Nov 15 '18 at 16:26
  • 1
    Why do you want to attach USB devices to remote VM's? This sounds like an XY problem. http://xyproblem.info/ – longneck Nov 15 '18 at 16:32
  • @longneck I provided some more clarification about the problem I'm trying to solve. Please let me know if you need any more information. Thanks! – Mike Nov 15 '18 at 17:59
  • I think you'll find that buying smaller PCs will be cheaper than finding a USB extension product/device. We use HP EliteDesk Minis, and they are about the size of a paperback novel. Depending on the performance requirements of your testing, you might even be able to use a Raspberry Pi or similar instead. – longneck Nov 16 '18 at 13:34
  • We currently use NUCs (450+) and it's becoming a management nightmare. We really want to virtualize our infrastructure (for lots of reasons), but we need to find a way to get USB IO from a server rack to the labs where the devices are located. – Mike Nov 16 '18 at 17:14
  • I'm still of the opinion that you're just trading one management nightmare for another. What you are looking for is not common, and therefore probably not well supported. I would concentrate on reducing the management overhead of your NUCs. What kind of challenges are you facing there? – longneck Nov 19 '18 at 21:43
  • Assembling and wiring up all these NUCs is time-consuming and a pain, deploying them is time-consuming, migration is a pain on failed drives, hardware/drive failure is an issue, there is no redundancy, power backup is more challenging, processor utilization is wasted as many NUCs sit idle, bandwidth is limited per NUC where it could be more efficiently handled at a cluster, hardware will have to be migrated out after a number of years which is going to involve a huge amount of overhead, and the number of NUCs we're using is only growing. – Mike Nov 20 '18 at 01:22
  • I actually think you're being a bit naive here; there's a specification limit of 127 devices per USB host controller, but the reality seems to be (from the little googling I've done) that most USB controllers are really only capable of pushing about 40 or 50 devices max. Also, I've got to admit to thinking this doesn't feel like a problem you should be virtualizing (I fail to see the benefit). – Matthew Ife Nov 23 '18 at 18:21
  • We are only running ~10 devices per NUC, so we're looking to have ~10 devices per VM. – Mike Nov 23 '18 at 21:02

2 Answers

You could use small form-factor Linux-based computers, USB hubs, and a software solution to provide access to the USB devices.

Let me describe a possible solution, and you can tailor it to your needs:

Mounting a small form-factor computer (like a Raspberry Pi) is discussed in many forums on the internet. You can choose your preferred way of mounting, for example: mounting them on an off-the-shelf 19" rack-mount shelf with holes drilled for spacers, or making a stack of them with spacers. This is a reasonable way to set up and scale to a lot of hosts.

Power for small form-factor computers is easier to manage. You would only need one shared power source for the hosts, possibly also supplying extra power to your USB hubs. An industrial power supply and some custom wiring would give you a stable power source and the option to add fail-over power. Many solutions are possible here.

A small form-factor Linux-based computer could run a very simplified (single-purpose) distribution with only the USB sharing software installed. Additionally, a management agent could be installed to monitor the health of the host. To minimize the chance of corruption, buy a quality memory card, make the host's filesystem read-only, and don't configure any swap space.
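
As an illustration, here's a minimal sketch of hardening a Raspberry Pi OS host this way; the service name, device paths, and mount options below are assumptions and will vary by distribution:

```sh
# Disable swap (dphys-swapfile manages swap on Raspberry Pi OS; assumption)
sudo systemctl disable --now dphys-swapfile

# Make the root and boot filesystems read-only by adding 'ro' in /etc/fstab:
#   /dev/mmcblk0p2  /      ext4  defaults,ro,noatime  0  1
#   /dev/mmcblk0p1  /boot  vfat  defaults,ro          0  2
# Keep a few paths writable in RAM so logging and DHCP leases still work:
#   tmpfs  /var/log  tmpfs  nosuid,nodev  0  0
#   tmpfs  /tmp      tmpfs  nosuid,nodev  0  0

# Apply read-only root immediately for testing (fstab makes it permanent)
sudo mount -o remount,ro /
```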

Software solutions are available to share the USB devices. There are vendors offering proprietary software with support, or you could go for an open-source solution like the USB/IP project.
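
For example, here is a minimal USB/IP sketch using the usbip tools that ship with the Linux kernel; the bus ID 1-1.2 and the hostname device-host.lab are placeholders for your own values:

```sh
# On the device host (the box the USB devices are physically plugged into):
sudo modprobe usbip_host          # kernel module that exports devices
sudo usbipd -D                    # start the USB/IP daemon (TCP port 3240)
usbip list -l                     # find the bus ID of the device to share
sudo usbip bind -b 1-1.2          # export the device at bus ID 1-1.2

# On the client (the VM, or the VM host) that should see the device:
sudo modprobe vhci-hcd            # virtual host controller that imports devices
usbip list -r device-host.lab     # list devices exported by the remote host
sudo usbip attach -r device-host.lab -b 1-1.2   # appears as a local USB device
```

Once attached, the device shows up in lsusb on the client just like locally plugged-in hardware, so the test software shouldn't need to change.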

Advantages compared to your current solutions:

  • Small computers are easier to swap out when they fail (replace the hardware, keep the memory card).
  • All software on all devices will be identical and easy to replicate with a simple card reader/writer.
  • Power is easier to manage.

The possible drawbacks are:

  • Creating a custom setup and wiring-tree for a 19" shelf of hosts and their hubs/devices.
  • The network speed could be a bottleneck.

While this is a home-made assembly of well-supported parts (industrial power supplies, off-the-shelf, non-vendor-locked, replaceable products), it is still a custom setup and doesn't match your search for an 'enterprise' solution. I highly doubt that an enterprise solution exists that scales well, is vendor-independent, and is cost-effective.

Joffrey
  • I'm not thrilled at having many computers in the labs, but this is closest to what I'm looking for and doesn't have too many individual technologies at play. I like the thought of read-only SFF PCs as they're closer to being appliances - that, or one larger host that handles multiple USB connections utilizing USB-sharing software, something that we're looking into now. The real challenge will be whether they can handle USB 3, but like you said that's a limitation of the network. This is closest in line with what I was looking for, so I'll mark it as the answer. Thanks! – Mike Nov 25 '18 at 18:39

The de facto virtualization stack on Linux (KVM/QEMU + libvirt) supports USB 1, 2, and 3 redirection over the network. So:

  • in the room with all your USB devices plugged into Linux boxes, you would run the redirection server on each box;
  • in the room with the Linux boxes running the VMs, you would map (over the network) those remote devices to your local VMs.

This can all be done programmatically (libvirt) or with a GUI (virt-manager).
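
For illustration, a minimal sketch of this flow using usbredirserver on the device host and a libvirt redirdev on the VM host; the hostname device-host.lab, port 4000, VM name my-vm, and the 0951:1666 vendor:product ID are all placeholders:

```sh
# On the box the USB device is plugged into: export it over plain TCP
usbredirserver -p 4000 0951:1666

# On the VM host: attach the remote device to a running VM through libvirt
cat > redir.xml <<'EOF'
<redirdev bus='usb' type='tcp'>
  <source mode='connect' host='device-host.lab' service='4000'/>
</redirdev>
EOF
virsh attach-device my-vm redir.xml --live
```

Note that the guest needs a USB controller (qemu adds one by default), and that SPICE-based redirection (type='spicevmc') is the more common variant when a SPICE client sits in the loop.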

  • KVM is the kernel-side component that makes virtualization possible and fast
  • qemu is the userspace program that emulates various hardware
  • SPICE is the code and protocol for transferring keyboard, mouse, display, and other interconnections in and out of a VM, possibly over the network
  • libvirt is the set of tools built on top of the above that glues everything together and abstracts it so that you don't have to handle the details of, and interactions between, the components. It provides a nice interface (command-line or an actual programming API) to create VMs, define networks, start/stop machines, etc.
  • virt-manager is the GUI tool built on libvirt.

Here is a nice guide to set this up with 2 machines.

More documentation about qemu USB redirection.

knarf