Yes, it will take the load. In the world of virtualisation - in my experience at least - CPU is very rarely the bottleneck. I'd rate (prioritise) the physical aspects in this order:
- Quantity of RAM (there's a quick sizing sketch after this list)
- Speed of RAM
- Available disk I/O
- CPU core count (note, no mention of speed!)
- Network I/O
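To put a rough number on the first item, here's a minimal sketch of the kind of arithmetic I mean: total up the vRAM you plan to allocate and compare it with what the host can actually give you. The VM names, sizes and overhead figures below are invented placeholders, and how much (if any) overcommit you allow depends on how far you trust ballooning and page sharing.

```python
# Rough RAM sizing sketch -- all figures are made-up examples;
# substitute your own VM list and host spec.

planned_vms_gb = {           # vRAM you intend to allocate per VM
    "oracle-db-01": 64,
    "app-server-01": 16,
    "app-server-02": 16,
    "batch-runner": 32,
}

host_ram_gb = 128            # physical RAM in the host
hypervisor_overhead_gb = 8   # rough allowance for the hypervisor itself
overcommit_ratio = 1.0       # 1.0 = no overcommit; raise only if you trust ballooning/TPS

usable_gb = (host_ram_gb - hypervisor_overhead_gb) * overcommit_ratio
allocated_gb = sum(planned_vms_gb.values())

print(f"Allocated {allocated_gb} GB of {usable_gb:.0f} GB usable "
      f"({allocated_gb / usable_gb:.0%})")
if allocated_gb > usable_gb:
    print("Over-allocated: expect ballooning/swapping under load.")
```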
We have some pretty meaty virtualised implementations (e.g. large Oracle DBs, real-time systems, etc.), and I can't think of a time when CPU load has been an issue.
Once you've got your RAM sorted (and let's face it, RAM is cheap compared to only a few years ago), you'll probably see disk I/O as your next bottleneck. We use a mix of HP EVA and 3PAR SANs, and we've definitely had times when the EVAs began to creak. This is where things like multipathing and LUN balancing come into play. Of course, eventually you hit the ceiling and nothing more can be done (hence the mention of 3PAR in my case).
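Along the same lines, here's a crude disk I/O headroom check, assuming you've measured (or can estimate) each VM's peak IOPS and roughly what each LUN delivers at acceptable latency. Every number below is a placeholder; pull the real figures from your monitoring or the array's own stats.

```python
# Back-of-envelope IOPS headroom check -- workload figures and per-LUN
# capability are placeholders; measure your own (iostat, esxtop, the
# array's stats) rather than trusting these.

vm_peak_iops = {
    "oracle-db-01": 4000,
    "app-server-01": 600,
    "app-server-02": 600,
    "batch-runner": 2500,
}

array_iops_per_lun = 3500    # what one LUN sustains at acceptable latency
lun_count = 4                # LUNs presented to the cluster

demand = sum(vm_peak_iops.values())      # pessimistic: assumes peaks coincide
capacity = array_iops_per_lun * lun_count

print(f"Peak demand ~{demand} IOPS vs ~{capacity} IOPS across {lun_count} LUNs")
if demand > 0.8 * capacity:  # keep headroom; latency degrades well before 100%
    print("Little headroom left -- look at multipathing, LUN balancing, or a bigger array.")
```

Adding up peaks is pessimistic (they rarely all coincide), but it tells you quickly whether you're in "fine", "rebalance the LUNs" or "buy a bigger array" territory.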
As for hard and fast rules - this is difficult. If your servers consistently used a given amount of CPU "bandwidth", then yes, you could probably formalise this into some kind of equation. However, this is rarely the case; servers typically call on CPU in a much burstier, less predictable fashion. If you have two VMs with complementary CPU profiles, there won't be any issues at all, and the same goes for 'n' VMs. However, if you have a batch run that floors your existing physical servers between 8pm and midnight every night, it is going to nail the same number of cores in the virtual world. In that scenario, you just need to ensure that you have sufficient cores available for the window. Unlikely scenario, though.
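If you want to sanity-check the "complementary vs coincident" point, something as simple as the sketch below works: take each VM's per-hour core usage from your monitoring, sum them hour by hour, and see whether the combined peak still fits on the host. The 24-hour profiles here are made up purely for illustration.

```python
# Do the VMs' CPU peaks coincide, or are the workloads complementary?
# Profiles are invented; feed in real utilisation history instead.

host_cores = 16

# cores consumed per hour (index 0 = midnight .. 23 = 11pm)
profiles = {
    "web-vm":   [2] * 8 + [6] * 12 + [2] * 4,   # busy during office hours
    "batch-vm": [1] * 20 + [10] * 4,            # hammers CPU 8pm-midnight
    "db-vm":    [3] * 24,                       # steady all day
}

combined = [sum(vm[hour] for vm in profiles.values()) for hour in range(24)]
peak = max(combined)
peak_hour = combined.index(peak)

print(f"Combined peak: {peak} cores at {peak_hour:02d}:00 (host has {host_cores})")
if peak > host_cores:
    print("Peaks coincide -- you need more cores, or to reschedule the batch run.")
else:
    print("Workloads are complementary enough to share the host.")
```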
Good luck!