
After a few comments, answers, and some more thinking, I hope I can now add a

TL;DR: If I want full performance(a) and (simple) HW-failure redundancy, does it make any sense to go for a virtualization solution with more than one guest per hardware box?

(a) -> parallel C++ builds with expected (very) high CPU and disk utilisation


Let's start by saying that I'm a total noob wrt. server virtualization. That is, I use VMs often during development, but for me they're simple desktop-machine things.

Now to my problem: we have two (physical) build servers, one master and one slave, running Jenkins to do daily tasks and to build our release packages (Visual C++ builds) for our software. As such these machines are critical to our company, because we do lots of releases, and without a controlled environment to create them we can't ship fixes. (Currently there's no proper backup of these machines in place, because they don't hold any data as such; it would just be a major pain to set them up again should they go bust. But setting up a backup that I'd know would work in case of HW failure would be even more pain, so we have skipped that until now.)

Therefore (and for scaling purposes) we would like to go virtual with these machines.

Outsourcing to the cloud is not an option, not at all, so we'll have to use on-premises hardware and VM hosts.

Each build server (master or slave) is a fully configured Windows Server box (installs, licenses, shares in the case of the master, ...). I would now ideally like to just convert the (two) existing physical nodes to VM images and run them, and later add more VM slave instances as clones of the existing ones.

And here begin my questions:

  • Should I go for one VM per hardware box, or for a setup where a single piece of hardware runs multiple VMs?
    • The latter would mean a single point of failure hardware-wise, which doesn't seem like a good idea ... or does it?
    • Since we're doing C++ compilation with Visual Studio, I assume that during a build the hardware (processor cores + disk) will be fully utilized, so running more than one build node per physical box doesn't seem to make much sense?
  • Wrt. hardware options, does it make any difference which VM software we use (VMware, MS, VirtualBox, ...)? (We're using Windows exclusively for our builds.)

As stated above, I'm starting to think it doesn't make sense, performance-wise, to go for more than one guest per hardware box. Are there other considerations?
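Rather than relying on that assumption, a quick measurement on one of the existing physical servers could settle it. Below is a minimal sketch of what I have in mind (it assumes Python with the third-party psutil package on the build box, and the msbuild command line is only a placeholder for the real Jenkins build step); it samples CPU and disk activity once per second while a build runs, so you can see whether a single build node already saturates the machine:

```python
# Minimal utilization sampler: start the build, then print CPU and disk
# activity once per second until it finishes. Requires the third-party
# 'psutil' package; the msbuild command line is a placeholder.
import subprocess
import psutil

build = subprocess.Popen(["msbuild", "Our.sln", "/m"])  # hypothetical build command

prev = psutil.disk_io_counters()
while build.poll() is None:                  # loop until the build exits
    cpu = psutil.cpu_percent(interval=1.0)   # average % over all cores for 1 s
    cur = psutil.disk_io_counters()
    read_mb = (cur.read_bytes - prev.read_bytes) / 2**20
    write_mb = (cur.write_bytes - prev.write_bytes) / 2**20
    prev = cur
    print("cpu=%5.1f%%  read=%7.1f MB/s  write=%7.1f MB/s" % (cpu, read_mb, write_mb))
```

If the cores (or the disk) stay pinned near their limit for most of the build, a second guest on the same box would have little headroom, which is exactly the doubt above.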


Regarding budget: we have a normal small-company (20 developers) budget for this. ;-) That is, if it's going to cost a few k$, it's going to cost. If it's free, all the better. I strongly prefer solutions that don't come with multi-k$ maintenance costs per year.

Martin

2 Answers


I can only answer the last part:

If you are going to use only Windows for your virtualized guests, then Hyper-V will be the best VM option available for you, due to its high-performance virtualization of the Windows OS.

The same applies to Xen for Linux virtualization.

PayamB
  • Adding: DISKS are critical. Build servers notoriously use a lot of IO. SSD is the ONLY sensible option. Not for all VMs; just make sure the build agents run against SSD-based storage. – TomTom Mar 20 '12 at 16:09
  • TomTom - two points. 1) Please start spell-checking your comments. 2) IO is not always the bottleneck for all build processes; without benchmarking, you could just be throwing money away. http://www.joelonsoftware.com/items/2009/03/27.html – mfinni Mar 20 '12 at 16:46
  • @mfinni - Joel wrote (2009) in that post that their "compiler is single threaded", and hence he suspected they were CPU bound. That isn't even exactly true for VC++8 (VS 2005): while the compiler *is* single threaded, VS can and will parallelize project builds, so I expect that *for us* we'll rather be I/O bound to a large extent. But, yes, testing is always a good idea before throwing out money! :-) (A rough timing sketch follows this comment thread.) – Martin Mar 21 '12 at 05:52
  • @mfinni - get real. First, I spell-check when I want. Want me to spell-check? That's 150 USD per started hour. When do you pay? Second, simply said, of the roughly 200 build agents I know of across about a dozen companies, all of them are IO bound and overload a single disk. Even if you run a single-threaded compiler, you will run multiple agents in parallel pretty soon, so you hit random load on the disk, which makes the disk slow. Game over, by definition. You can ignore reality; neither reality nor I care. – TomTom Mar 21 '12 at 06:28
  • While SSDs are *nice* to have, it's all down to a matter of cost/benefit. – tombull89 Mar 21 '12 at 09:17
  • TomTom - as I've said before, improving your spelling/grammar will help sell your good ideas. I like reading your answers, but the spelling mistakes make me wince. No, I'm not going to start paying you to improve your presentation :-) – mfinni Mar 21 '12 at 14:05
  • Sadly, build-server-side SSD is a cheap thing. I've started replacing all OS disks with SSDs these days as we replace them for new machines. Build servers get REALLY heavy benefits, sometimes a cut of 50%, especially when 4-6 agents hit the same disk. – TomTom Mar 21 '12 at 14:37
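To put a number on the SSD cost/benefit question before buying anything, a crude A/B timing of the identical build from an HDD-backed and an SSD-backed working copy would do. The sketch below is only an illustration under assumptions (the workspace paths and the build.cmd script are placeholders, not taken from the thread):

```python
# Crude A/B comparison: run the same build once from an HDD-backed and once
# from an SSD-backed checkout and compare wall-clock times. The paths and
# build.cmd are placeholders for the real workspace layout and build script.
import subprocess
import time

workspaces = {
    "hdd": r"D:\jenkins\workspace",   # hypothetical checkout on the spinning disk
    "ssd": r"E:\jenkins\workspace",   # hypothetical identical checkout on the SSD
}

for name, path in workspaces.items():
    start = time.time()
    subprocess.call("build.cmd", cwd=path, shell=True)  # run the build script in place
    print("%s build took %.0f s" % (name, time.time() - start))
```

If the SSD run isn't clearly faster, especially with several agents building in parallel, the money is probably better spent elsewhere.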

I'm not familiar with Hyper-V, but it looks like it has licensing costs. The next version of Proxmox is going to have High Availability, so if you converted your two existing hosts into Proxmox hosts and invested in a small NAS/SAN (e.g. Synology), you could have a pretty decent setup for a modest hardware cost (Proxmox is open source). (But note that this setup doesn't include backup either.)

Note that you'll want to use the virtio-win drivers for your Windows guests.

Hope this helps.

HTTP500
  • "Hope this helps." - well, **no**, it doesn't answer the question, specifically it does not appear to address my hardware doubts. – Martin Mar 20 '12 at 18:01
  • @Martin, What are the specs of your existing servers? I have two servers that are Proxmox hosts with the following specs: – HTTP500 Mar 20 '12 at 18:05
  • @Martin I have two servers that are Proxmox hosts with the following specs: Intel S3420GPLC MB, Intel Xeon X3430 Quad Core CPU (VT-x & VT-d), 12 GB RAM, 4 X 500 GB WD Caviar Blue 7200 RPM SATA drives (RAID 1+0), Adaptec 5405Z RAID Controller (supports write-back cache without battery via Zero Maintenance Module), Adaptec ACK-I-mSASx4-4SATAx1-SB-0.5m R cable (mini-SAS to SATA). Each host supports ~ 10 guests with an average allotment of 1GB RAM each. Most hosts are Linux, you'd want to allocate more RAM for Windows. The guests are developer sandboxes, etc. – HTTP500 Mar 20 '12 at 18:11
  • "What are the specs of your existing servers?" - I do not have the details ready, but remember that the existing two servers are physical machines atm., so they have RAM and HD accordingly. Both have 4GB RAM and a Xeon CPU (I think the one has 2 and the other has 4 cores). I do think it wouldn't make any sense to run more than one build guest on one of these, but the question was rather what to do about the next hardware purchase ... I guess I'll have to rephrase that more cleary somehow ... – Martin Mar 21 '12 at 05:20
  • My parents taught me: if you have no clue what you're talking about, shut up. "I am not familiar with Hyper-V but blablabla" - bad news: Hyper-V Server is free. No licensing costs. Running Windows in a VM has licensing costs, but you have to have a valid Windows license EVEN IF YOU HOST ON OPEN SOURCE, so that is a non-issue. – TomTom Mar 21 '12 at 06:29
  • @TomTom I deserved the calling out on the licensing, but did you need to be a prick about it? – HTTP500 Mar 21 '12 at 14:35