
I understand that we can install Hyper-V on one server and run a number of virtual machines on it, up to the limit of that server's resources. I want to know whether it is possible to install Hyper-V across two or more servers so that the virtual machines can use the pooled resources of both servers - and whether the same is possible for an "n" number of servers, instead of just 2.

user9517
user67905

3 Answers


It sounds like you want this:

  • Hyper-V Host A: 4 CPU sockets, 64GB RAM running Guest 1
  • Hyper-V Host B: 4 CPU sockets, 64GB RAM running Guest 2
  • Some Windows application sees 8 CPU sockets, 128GB RAM

That's not doable out of the box, but there are some applications that can communicate with each other across the network and break up work across the various nodes. For example, Memcached and Windows AppFabric (Velocity) are caching tools that can scale out as you add more nodes by communicating with each other.
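The scale-out pattern those caching tools rely on - every client hashing a key to pick the same node - can be sketched in a few lines of Python. This is a hypothetical illustration, not memcached's actual client code; real clients use consistent hashing so that adding a node remaps only a fraction of the keys, rather than the naive modulo shown here.

```python
import hashlib

def pick_node(key, nodes):
    """Deterministically map a cache key to one node: any client
    holding the same node list picks the same node for a given key."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

nodes = ["cache-a:11211", "cache-b:11211", "cache-c:11211"]

# The same key always lands on the same node, from any client.
print(pick_node("user:42", nodes) == pick_node("user:42", nodes))  # True

# With many keys the load spreads across all nodes, so total cache
# capacity grows roughly linearly as you add machines.
used = {pick_node("user:%d" % i, nodes) for i in range(1000)}
print(len(used))  # 3
```

This is exactly why it is an application-level trick: each node still only serves requests out of its own local RAM; the "pooling" happens in the clients' routing logic, not in the hypervisor.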

This is an application problem, though, not a Hyper-V issue. The problem is the same whether you're using physical servers or virtual ones. Tell us the business problem you're looking to solve, and we can talk about applications that do this kind of scaling.

Brent Ozar

Ah. No. Hyper-V is like a pizza - you can make slices (VMs) from one pizza, but you cannot make ONE pizza from slices of different pizzas.

You can CLUSTER up to 16 servers (shared SAN storage needed) and move virtual machines between them - which is nice for maintenance etc. - but every VM has to run on one machine and draw all its resources from that one machine.
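The single-host constraint is worth making concrete: pooled cluster capacity doesn't help a guest that won't fit on any one node. A toy placement check in Python, with hypothetical RAM figures:

```python
# Free RAM on each cluster node, in GB (hypothetical figures).
hosts = {"host-a": 64, "host-b": 64}

def can_place(vm_ram_gb, hosts):
    """A VM runs on exactly one node, so it must fit on ONE node;
    the cluster-wide total (here 128 GB) is irrelevant to placement."""
    return any(vm_ram_gb <= free for free in hosts.values())

print(can_place(48, hosts))  # True: fits on either 64 GB host
print(can_place(96, hosts))  # False: the pooled 128 GB doesn't help
```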

That said, you CAN do something like that - but not with Hyper-V. You don't want to, though, as it will cost you a LOT. There is ONE vendor with a technology for what you ask, forming a single VM that spans multiple servers. Not that you could pay for it (it has VERY high-end hardware requirements).

TomTom
    Tom, maybe you could just tell the name of this provider instead of deciding for the asker that he can't afford it and doesn't want it? – Sven Jan 23 '11 at 16:12
  • @TomTom - what are you referring to in your last paragraph? I'm not aware of a technology for that for general purposes - as opposed to specific workloads, like distributed computing for parallel problems, etc. – mfinni Jan 23 '11 at 16:15
  • @TomTom: Why would it require high-end hardware? I will be getting about 7 blades with 2 x X5650 and 48GB RAM, connected to a 10TB SAN. – user67905 Jan 23 '11 at 16:22
  • Now, the thing is that I do not want ONE VM to span multiple servers. Rather I want multiple VMs to span multiple servers, without specifying that the particular VM should run on a specific host. – user67905 Jan 23 '11 at 16:23
  • It'd be nice to see some info on the solution that costs a "LOT". – ErnieTheGeek Jan 23 '11 at 18:16
  • Google is your friend. Just as a note - "48GB blades with 1TB SAN" would be too slow. Make that "multiple 128GB servers maxed out, with at least a 4x InfiniBand fabric between them for syncing the memory - better make that two or three channels" and you start talking. I have to see whether I can find them again - their name popped up once on a similar question (which comes up regularly) and I was surprised myself. – TomTom Jan 23 '11 at 22:55
  • @TomTom - IBM makes an Intel server chassis series (iDataPlex) that can be scaled out as a single system image by just buying more of them with the proper interlink, is that what you're referring to? – mfinni Jan 24 '11 at 14:15

You said (in comments) "Now, the thing is that I do not want ONE VM to span multiple servers. Rather I want multiple VMs to span multiple servers, without specifying that the particular VM should run on a specific host. – user67905"

That's fine - then you are really just talking about a standard Hyper-V cluster, which is no problem at all. To run a Hyper-V cluster you need either the free Hyper-V Server (despite what some think, it does have clustering ability) or Windows Server 2008 (or R2) in Enterprise or Datacenter edition. Since you are running VMs you will want at least the Enterprise edition anyway, since it additionally licenses 4 VMs.

When you install, just activate the 'Failover Clustering' feature on each of the hosts. Then open the Failover Cluster Manager console in MMC and run the cluster validation wizard. This is a really useful diagnostic that will tell you whether you've got your kit set up correctly; the area that usually causes an error is storage. Once you've run the validation tool after each tweak and got a clean bill of health, enjoy your cluster. You'll be able to move VMs from one host to another, pool storage, and basically do what you need. The maximum number of hosts per cluster is 16.

By the way, as others have opined, there is probably no point trying to pool memory between hosts - it's likely to be an expensive pain, and it's really an application thing. And unless you have, say, more than 30 VMs running on more than 4 hosts, I wouldn't bother with SCVMM or anything like that. You'll be able to spread the VMs out manually, and with the new Dynamic Memory feature that rolled out with 2008 R2 SP1, you probably won't run into memory constraints unless you expect all of your machines to be hammered all of the time.
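"Spreading the VMs out manually" is essentially a first-fit bin-packing exercise. A minimal sketch in Python, with hypothetical host and VM sizes, shows the bookkeeping an admin (or SCVMM) does when deciding which host gets which guest:

```python
def place_vms(vm_ram, host_ram):
    """Greedy first-fit-decreasing placement: assign each VM (largest
    first) to the first host with enough free RAM. Returns {vm: host};
    raises if some VM cannot fit on any single host."""
    free = dict(host_ram)
    placement = {}
    for vm, need in sorted(vm_ram.items(), key=lambda kv: -kv[1]):
        for host, avail in free.items():
            if need <= avail:
                placement[vm] = host
                free[host] -= need
                break
        else:
            raise RuntimeError("no host can fit %s (%d GB)" % (vm, need))
    return placement

# Hypothetical blades and guests, sizes in GB.
hosts = {"node1": 48, "node2": 48}
vms = {"web": 16, "db": 32, "build": 24}
print(place_vms(vms, hosts))
```

Note that, as above, each VM lands on exactly one host - the algorithm never splits a guest across nodes, which is precisely the constraint the other answers describe.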

Mark Lawrence