3

I am putting together a dual-Xeon quad-core (i.e., 8 cores total), 12GB RAM Linux server to replace several older, smaller servers. I would like to use virtualization both to learn about it and because the individuals who were using the old servers need to be kept separated.

I will have two 120GB SSD drives in a RAID mirror and two 2TB SATA II drives in a RAID mirror.

I believe I will use Ubuntu 10.04 LTS with KVM as the host system and Ubuntu 10.04 for the primary, resource-intensive guest VM. The three additional guest VMs will probably run Debian Lenny; they are low-usage and low-priority.

Does the following resource allocation plan make sense or do more experienced users see pitfalls?

  1. Host system: use 24GB of the SSD, i.e. 12GB for files + 12GB as swap
  2. Primary guest VM: use 96GB SSD + 1,900GB SATA (allocate 4 CPUs + 8GB RAM)
  3. VM DNS server: use 8GB SATA (allocate 1 CPU + 1GB RAM)
  4. VM web server: use 8GB SATA (allocate 1 CPU + 1GB RAM)
  5. VM mail server: use 8GB SATA (allocate 1 CPU + 1GB RAM)
  6. Reserved for future use: 76GB SATA

In particular, will 12GB be enough space for the host system's files?

Will 12GB swap be adequate? Is it a bad idea to use the SSD for the swap space?

The primary guest VM is the most heavily used server: it needs fast disk I/O, frequently rebuilds a roughly 30GB MySQL database, needs a lot of file storage space, and runs Apache and a mail server. This entire hardware purchase is wasted if this server doesn't perform well.

How should I partition the disks so that I can most easily tell the host system where to put the various guest VMs? That is, I want the primary VM to use the faster SSD drives for its core/OS files and the SATA drives for its bulk storage, while the less important VMs use a portion of the SATA drives and stay off the SSDs.

Can I allocate more RAM or CPUs to the guest VMs (overcommit) without causing problems or is that just not worth it?

Thanks for any suggestions.

brianwc

7 Answers

3

My setup is somewhat similar and works well. virt-manager makes it really easy (it even works well over SSH X forwarding). Some random thoughts:

I would use LVM + virtio in this scenario (except perhaps for the very large volumes; there appears to be a "1TB problem" with virtio). You can put the I/O-intensive VM's volume on the fastest part of the SATA RAID.
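For example, a rough sketch of that layout (the volume group names, sizes, and guest name are placeholders, not anything from the question):

    # Carve logical volumes for the primary guest out of each mirror:
    lvcreate -L 90G  -n primary-root vg_ssd    # OS + MySQL on the fast SSD mirror
    lvcreate -L 500G -n primary-data vg_sata   # bulk storage on the SATA mirror
                                               # (mind the 1TB virtio caveat above)

    # Install the guest with both volumes attached as virtio block devices:
    virt-install --name primary --ram 8192 --vcpus 4 \
      --disk path=/dev/vg_ssd/primary-root,bus=virtio,cache=none \
      --disk path=/dev/vg_sata/primary-data,bus=virtio,cache=none \
      --cdrom /srv/iso/ubuntu-10.04-server-amd64.iso \
      --network bridge=br0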

Swap: unless you know exactly why you need it, you probably don't need 12GB at all.

On the small systems I would recommend splitting the data volume off from the system volume. You'll probably be using ~4 of the 8GB for system files, leaving only 4GB for those "oops" moments. Systems behave a lot better when their root volume isn't full.

What kind of RAID are you using? DM software RAID or some battery-backed hardware controller?

Putting the system files on an SSD will give you nice boot-up times but not much after that. Putting data files (especially seek-intensive stuff) on the SSD will give you intense joy for a very long time.

AFAIK there is still some gain to be had if you do not fill up your SSDs all the way; leaving 20% unused (never written to) is easy with LVM: just make a volume for it.
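A minimal sketch of that reservation, assuming a volume group called vg_ssd on the SSD mirror:

    # Keep ~20% of the SSD volume group permanently unwritten:
    lvcreate -l 20%VG -n ssd-spare vg_ssd
    # As long as nothing ever writes to /dev/vg_ssd/ssd-spare, those blocks
    # stay untouched and the drive keeps spare area for wear levelling.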

As with any hardware rebuild, I urge you to use ECC memory.

Joris
  • +1 for pointing out that swap won't be needed (at least not 12GB's worth) if planned correctly. – Coops Jun 30 '11 at 20:06
3

"It looks like you plan to concentrate a number of separate, working server machines into one massive single-point-of-failure server." I think You are wrong. KVM is very good choice for fail-over solution. Just bring XML definition file to another server and use shared storage and/or identical network card config for all severs in the cluster. Tested, worked. Remember about LACP and link aggregation - also works :)

TooMeeK
2

12 GB should be adequate for your system.

12GB should be more than adequate for swap. I wouldn't worry too much about swap access speed, as swap is typically not used much; with your available memory you shouldn't see any significant swapping. If you want a large temp space, you may want to use a larger swap size and use tmpfs for /tmp.
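For example, a single /etc/fstab line is enough for a tmpfs /tmp (the 4G cap is just an illustrative value):

    tmpfs   /tmp   tmpfs   defaults,size=4G,mode=1777   0  0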

You can manually place the virtual systems' file systems, either as files or as partitions. They will be wherever you place them.

You have way more RAM and CPU than appear needed. Watch the memory use on the servers and increase as needed.

I would install the munin server process on the host, and munin clients on the host and all virtual servers. This will allow you to quickly determine whether you have any bottlenecks that need tending to.
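As a sketch (Debian/Ubuntu package names; the hostname and IP below are made up):

    apt-get install munin         # on the KVM host (the munin master)
    apt-get install munin-node    # on the host and inside every guest

    # then list each machine in /etc/munin/munin.conf on the master, e.g.:
    #   [primary.example.com]
    #       address 192.168.122.10
    #       use_node_name yes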

I wouldn't overcommit RAM, but depending on load you should be able to overcommit CPUs. Given your increased capacity, this shouldn't be necessary. KVM allows you to specify maximum values for both which are higher than what is used at startup. I haven't tried dynamically changing these values.
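For example (values illustrative, and support depends on your libvirt version), the domain XML can carry a ceiling plus a smaller startup value, and virsh can balloon a running guest up towards that ceiling:

    # In the guest's XML, <memory> is the ceiling and <currentMemory> the boot value:
    #   <memory>2097152</memory>                  (2GB, in KB)
    #   <currentMemory>1048576</currentMemory>    (1GB at boot)

    # Later, grow the running guest up to the ceiling:
    virsh setmem dnsvm 2097152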

BillThor
1

It all sounds like a test server we already have :)

Well, avoid OCZ Vertex SSDs even if they can do 550MB/s read and 530MB/s write; they are probably just too faulty, as you can read on the 'Net. But I haven't tested them myself.

For me the best option is still SAS or FC drives in a RAID 10 array. Even if SSDs will do more I/O, their lifetime is limited (get that data from SMART!). When a disk fails you just replace it, but what's going to happen when all your SSDs are from the same series and they all fail at once?
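If you have smartmontools installed, something like this pulls the wear data (the attribute names vary by SSD vendor, so treat it as a starting point only):

    smartctl -a /dev/sda | grep -i -e wear -e lifetime -e reallocated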

Yes, I can confirm that storage for VMs is very I/O intensive. One day I turned on the screen and Ubuntu Server was reporting something like the I/O queue being too big, and it hung for a long time.

I allocate as much CPU/RAM as I need for the VMs; for example, if a new VM is to be deployed, I reduce the RAM for the rest during a maintenance window, not by too much, but enough for the new VM.

Now I'm testing bonding together with bridging, exactly for KVM VMs. I successfully set up bonding in LACP and round-robin modes (the test shows one packet lost when a cable is unplugged). Now I'm wondering whether it is possible to reach 2Gbit over the network to a KVM VM...
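A quick command-line sketch of the bond + bridge (interface names and the LACP mode are examples; you would normally persist this in /etc/network/interfaces, and 802.3ad needs matching switch configuration):

    modprobe bonding mode=802.3ad miimon=100
    ip link set bond0 up
    ifenslave bond0 eth0 eth1     # from the ifenslave package
    brctl addbr br0               # from bridge-utils
    brctl addif br0 bond0
    # then point the KVM guests' NICs at br0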

The next thing is to set up a cluster.

TooMeeK
0

As a system admin responsible for keeping things running, this plan would give me an uneasy feeling. It looks like you plan to concentrate a number of separate, working server machines into one massive single-point-of-failure server. Any kind of downtime for this super server will mean downtime for ALL of your services, not just a subset of them.

I would recommend using your budget to provision, say, two or three smaller servers instead. You can still use virtualization to partition your services into separate containers, both for security, and for ease of backup and migration.

Steven Monday
  • The decision to consolidate has already been made due to 1) really old hardware on the old servers that is at large risk of failure; 2) the other servers use so much power that it noticeably affects the electric bill. Also, the 3 small VMs proposed, while internet-facing, are used in such a limited way that some occasional downtime on those is expected/not a big deal. – brianwc Oct 24 '10 at 22:01
  • I would recommend a combination of the following, in case you hadn't already had this in mind: 1. use DRBD to mirror your VMs to another machine dedicated to backups, or you could even build a fairly similar machine to start up the VMs if the first server goes down; 2. back up the VM images to somewhere else. I had a server where lots of disks all went bad when the (non-redundant) power supply blew. At the time I was only backing up data, but backing up the entire VM images would have made restoring so much easier. – senorsmile Dec 24 '10 at 07:38
0

I don't like your disks. It looks like totally the wrong focus.

I will have two 120GB SSD drives in a RAID mirror and two 2TB SATA II drives in a RAID mirror.

Assuming you use the 120GB for the operating system and the 2TB for the virtual machines: welcome to sucking I/O. OK, granted, your server is small.

Anyhow, here some of my servers:

  • 10GB AMD-based Hyper-V (it fits 16GB, but we had BIOS problems). The OS + disks are on a RAID 10, Adaptec, 4x 320GB Scorpio Black. I/O load is BAD; I can feel it being overloaded. It is getting an upgrade to 16GB now, but the number of VMs will be reduced: too much I/O load during patching etc.

  • 64GB, 8-core AMD Opterons. I had 4x 300GB VelociRaptors in a RAID 10 on it. It was getting full and I WAS FEELING THE LOAD. Really feeling it. I just upgraded to 6 Raptors in a RAID 10 and may go higher. This server has a number of database servers on it, but they pretty much all have separate disks for the DB work. The RAID controller is an Adaptec 5805 on a SAS infrastructure.

As you can see, your I/O subsystem is really weak. Memory overcommit will just make it a LOT worse. SSDs can work nicely, but they are still way too pricey. If you put the VMs on the 2TB drives, your I/O will just suck. They likely do around 250 IOPS or so each, compared to the 450 I measured on my Raptors, and as I said, I use a lot of them AND they are on a high-end RAID controller.

I got a nice SuperMicro cage with 24 disk slots for the larger server ;)

TomTom
  • I currently have capacity for six SATA drives, and this plan uses 4, so I could add two more 120GB SSDs to create a RAID 10 if you think that would dramatically increase I/O performance, but as you say, that's expensive, and here it would increase the hardware cost by over 15%. – brianwc Oct 24 '10 at 22:04
  • SSDs have a LOT more IOPS capacity than normal disks; depending on what you pay and buy, you get about 400 IOPS from a VelociRaptor, and I know of SSDs able to do about 40,000 (!) IOPS. They are expensive, though, which is why I go with VelociRaptors, not even SAS drives: best bang for the buck. 2TB drives are just slow; lots of space, little IOPS capacity (and that is all that matters for speed in virtualization). You have to plan around what you need and want; a LOT depends on the servers you will run. – TomTom Oct 25 '10 at 05:51
  • 2
    Most important thing: do avoid swap by all costs. Give the machines good RAM and do not overcommit RAM. Swap kils your IOPS budget faster than you can say "shit". Really. Do not turn off swapping (it makes sense to swap out unussed stuff) but make sure the machines are not starving for memory. Memory is CHEAP. – TomTom Oct 25 '10 at 05:52
0

Here are your available resources:

  • 8 cores
  • 12GB RAM
  • 120GB SSD storage
  • 2TB SATA storage

A few thoughts come to mind with your plan:

  • First off, 12GB of RAM?... Spend more money and get more RAM!
  • I would consider running the "Host System" from a separate small SSD (say, 32GB) or from the SATA drives, assuming all the host does is run KVM. My main reason is that I would want to pass the entire 120GB SSD directly to your workhorse VM.
  • I also use "CPU pinning" for my main VM (pin an entire CPU, since you have two!); see the sketch after this list.
  • I also use RAM "hugepages", which basically reserves the RAM only for that VM, like CPU pinning but for RAM; that is also shown in the sketch below.
  • I would give each of the small VMs at least 1 core with 2 threads and 4GB RAM.
  • Swap shouldn't be needed; if you're using swap a lot, your system is underpowered. It's there as a last resort, and RAM is cheap enough now that it shouldn't be needed.
  • I would be fine giving the "Host System" a small amount of storage as long as it has access to the 2TB SATA drives for doing backups etc.
  • Overcommitting is the way to go IMO, as the hypervisor can then allocate as needed, but if you're bottlenecking often you may want to consider tightening up your resources so that priority processes run more smoothly.
  • Finally, I realize you and your workplace are likely more familiar with Debian-based OSes, but it's not that hard to jump to another Linux distro if you understand Linux. You just swap 'apt-get' for 'yum' or 'dnf' and a few files are located in different places, but Google will help you. My main reason for saying this is that I will always want to use a RHEL-based distro to run KVM, since Red Hat develops KVM. I personally use Fedora. KVM is new, IMO, and I have found fixes/improvements that only Fedora had while other OSes were still working on importing them.
  • 8GB of storage for a Linux OS is really small; I would want 32GB. SATA storage is cheap. The best price point at the moment seems to be 8TB drives, but that might be overkill; regardless, 8GB is small.
  • Find some sort of monitoring solution that can alert you to issues like RAM/CPU/storage bottlenecks. I like Xymon, but I've looked into Zabbix as well.
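As a rough sketch of the pinning and hugepages points above (values and the pinned core range are illustrative; you would apply the XML fragments with virsh edit):

    # Reserve 8GB of 2MB hugepages on the host:
    echo 4096 > /proc/sys/vm/nr_hugepages

    # In the guest's domain XML, back its memory with hugepages and pin its
    # vCPUs to the second physical CPU (cores 4-7):
    #   <memoryBacking>
    #     <hugepages/>
    #   </memoryBacking>
    #   <vcpu cpuset='4-7'>4</vcpu>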

Enjoy KVM! Make sure to back up your domain XMLs and a copy of each VM, preferably offsite.

FreeSoftwareServers