I'm interested in modelling various server/network configurations for a web application. I'd like to know in advance which parts of the system are going to be bottlenecks, and whether those bottlenecks are CPU-, memory- or network-bound.
One thing I've been considering is taking a single test server and setting up each 'real' server as a virtual machine on it, configured as it would be in the wild. I'm going to try this, but wanted to ask the Server Fault community whether anyone has tried this approach before. Is it viable?
I'm not expecting benchmarks or anything like that, of course, but I'm thinking it might be useful for modelling relative performance, highlighting bottlenecks, and providing a sanity check on the architecture.
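
To give an idea of what I mean by 'highlighting bottlenecks': while driving load at the virtual stack, I'd sample per-VM resource usage on the test host and see which tier saturates first. Here's a rough sketch of that idea (it assumes KVM/QEMU guests whose process command lines contain the VM names, and uses Python's psutil; the VM names are just placeholders for whatever my tiers end up being called):

```python
# Rough sketch: while an external load test runs, sample CPU and memory for
# each VM's hypervisor process on the test host, so the tier that saturates
# first stands out. Assumes the guests run as QEMU/KVM processes whose
# command lines contain the (placeholder) names below.
import time
import psutil

# Hypothetical VM names matching the guest names passed to the hypervisor.
VM_NAMES = ["web01", "app01", "db01"]

def find_vm_process(name):
    """Return the hypervisor process backing the named VM, if any."""
    for proc in psutil.process_iter(["pid", "name", "cmdline"]):
        cmdline = " ".join(proc.info["cmdline"] or [])
        if "qemu" in (proc.info["name"] or "") and name in cmdline:
            return proc
    return None

def sample(interval=5, rounds=12):
    procs = {name: find_vm_process(name) for name in VM_NAMES}
    # Prime per-process CPU counters; the first cpu_percent() call returns 0.0.
    for proc in procs.values():
        if proc is not None:
            proc.cpu_percent()
    for _ in range(rounds):
        time.sleep(interval)
        for name, proc in procs.items():
            if proc is None:
                print(f"{name:8s}  (VM process not found)")
                continue
            cpu = proc.cpu_percent()               # % of one core since last call
            rss = proc.memory_info().rss / 2**20   # resident memory in MiB
            print(f"{name:8s}  cpu={cpu:6.1f}%  rss={rss:7.1f} MiB")
        print("-" * 40)

if __name__ == "__main__":
    sample()
```

The absolute numbers obviously wouldn't mean much on shared hardware, but I'm hoping the relative differences between tiers would.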