i'm running a mix of esxi and vmware server hosts under debian linux. i don't have a san, just raided local drives. my biggest concern is 'backupability'; i don't need very high performance, nor do i suffer from any slowdowns [ at least not so far ]. everything runs on a mix of dell poweredge 1950 and poweredge 2950 boxes.
for hosts running under esxi i run partial backups from within the virtual machines - no rocket science here, it all works well and stably.
for vmware server i keep the virtual machines on an lvm partition and use the lvm snapshot feature for 'hot backups' [ that is, without interrupting the virtual machines ]. i DO KNOW this is an unsupported mechanism that can lead to unrecoverable backups, but with the additional backups taken from within the guests i feel comfortable with that. so far i've recovered complete vms dozens of times without any problems.
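for the curious, the hot-backup cycle boils down to a few commands. a sketch only - volume group "vg0", logical volume "vms" and the backup path are assumptions, and it's left as a dry run [ RUN="echo" ] so it just prints the commands; clear RUN to really execute [ needs root and free extents in the vg ]:

```shell
#!/bin/sh
# Dry-run sketch of the lvm-snapshot 'hot backup' described above.
# Assumed names: vg "vg0", lv "vms" holding the vmware datastore.
RUN="echo"

hot_backup() {
    # take a copy-on-write snapshot while the vms keep running
    $RUN lvcreate --snapshot --size 5G --name vms_snap /dev/vg0/vms
    # mount it read-only and copy the vm directories off
    $RUN mount -o ro /dev/vg0/vms_snap /mnt/vms_snap
    $RUN rsync -a /mnt/vms_snap/ /backup/vms/
    # drop the snapshot before it fills up with copy-on-write data
    $RUN umount /mnt/vms_snap
    $RUN lvremove -f /dev/vg0/vms_snap
}

hot_backup
```

size the snapshot [ here 5G ] to cover however much the guests write during the copy - if the snapshot fills up, lvm drops it and the backup is junk.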
for vmware server i've done some host and guest tuning:
- vmware keeps its temporary files on /dev/shm
- i've turned off memory trimming / over-committing for guests
- i keep vmware tools installed under guests
- for linux guests i've set the i/o elevator to noop and use a tickless kernel [ no_hz ]
- i've done some common-sense optimization for windows guests, including turning off unnecessary services and disabling screensavers and hibernation
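the /dev/shm and memory-trimming tweaks above map to a handful of config lines. a sketch with the option names as i know them from vmware server 1.x/2.x - double-check against your version before relying on them:

```
# /etc/vmware/config - keep temporary/mapped memory files on tmpfs
tmpDirectory = "/dev/shm"
mainMem.useNamedFile = "FALSE"

# per-vm .vmx - disable memory trimming and page sharing for the guest
MemTrimRate = "0"
sched.mem.pshare.enable = "FALSE"

# linux guest: append to the kernel line in grub for the noop elevator
# elevator=noop
```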
i'm quite happy with the setup. i've been using vmware server since v1.0 without many hiccups.
i've noticed that the host-side resource usage of a vm running windows 2003 grows steadily over time. rebooting the vm does not solve the problem, so every ~2 months i shut such a vm down completely and boot it up again. the symptoms suggest it's a problem with vmware rather than with windows itself.
but it all depends on your workload; in my case the guests have quite low disk i/o requirements - disk i/o is the performance bottleneck i expect to hit sooner or later.
hardware advice? it all depends on workload; take:
- one or two quad-core cpus
- plenty of memory [ 16-32 GB is cheap nowadays ]
- if you need i/o - take 4-8 disks in raid 10
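if you go the local-raid route with software raid, building that raid 10 with mdadm looks roughly like this. device names and mount point are assumptions, and it's a dry run [ RUN="echo" ] since mdadm --create wipes the member disks:

```shell
#!/bin/sh
# Dry-run sketch of assembling a 4-disk raid 10 for the vm datastore.
# /dev/sd[b-e] and the mount point are assumptions - adjust to taste.
RUN="echo"

make_raid10() {
    $RUN mdadm --create /dev/md0 --level=10 --raid-devices=4 \
        /dev/sdb /dev/sdc /dev/sdd /dev/sde
    $RUN mkfs.ext3 /dev/md0
    $RUN mount /dev/md0 /var/lib/vmware
}

make_raid10
```

on the poweredge 1950/2950 you'd more likely use the perc hardware raid instead, but the layout advice [ raid 10, 4-8 spindles ] is the same either way.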