Unfortunately I'm still dealing with the deployment of VMware Server 2.0.2 hosted on Ubuntu Linux 10.04 LTS. Internal testing has shown serious problems when running on a 64-bit host, while everything works fine on a 32-bit host, so a 32-bit host is what I have to use.
While I'm used to deploying Xen guests on bare block devices (usually LVM2 logical volumes), VMware Server uses files on the host as its storage backend. I am using a dedicated logical volume mounted at /var/lib/vmware.
I seem to remember reading an article about getting better performance from "simpler" filesystems in situations like this, and its reasoning made sense to me: protection from corruption is left to VMware, which syncs every block ("optimize for safety" in the virtual disk configuration), instead of relying on journaling filesystems and the like.
That could suggest ext2 actually makes sense for regular use and might provide the best performance - I have not tested this and am just guessing. The problem with ext2 lies in fsck - it would take ages compared to a journaled filesystem.
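A possible middle ground I've been wondering about (untested, just a sketch): ext4 created without a journal, which keeps extents and a reasonably fast fsck while dropping the journaling overhead. The device name /dev/vg0/vmware below is hypothetical:

    # Create ext4 without a journal (ext2-like write path, ext4 on-disk features)
    mkfs.ext4 -O ^has_journal -L vmware /dev/vg0/vmware

    # Or remove the journal from an existing, unmounted ext4 filesystem
    # (needs a recent enough e2fsprogs)
    tune2fs -O ^has_journal /dev/vg0/vmware
    e2fsck -f /dev/vg0/vmware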
That brings us back to our beloved ext3 and/or the newer ext4 - but which of the two, and with what options?
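For reference, this is the kind of /etc/fstab entry I'm considering (again a sketch with an assumed device name; data=writeback and barrier=0 trade some crash safety for speed, which may be acceptable given the battery-backed cache):

    # /etc/fstab - candidate entry for the VM storage volume
    /dev/vg0/vmware  /var/lib/vmware  ext4  noatime,data=writeback,barrier=0  0  2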
Has anyone done, or found somewhere, some testing of filesystems used for /var/lib/vmware? Do you have any recommendations?
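If nothing published exists I could run a quick comparison myself, something along these lines with fio (assuming fio is available; the job parameters are just a starting point meant to roughly mimic random I/O inside a .vmdk):

    # Rough mixed random-I/O test, run once per candidate filesystem
    # against a scratch directory under the mounted volume
    fio --name=vmdk-sim --directory=/var/lib/vmware/fstest \
        --rw=randrw --rwmixread=70 --bs=4k --size=2G \
        --ioengine=libaio --direct=1 --numjobs=2 \
        --runtime=120 --time_based --group_reporting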
EDIT: this specific system uses 4x 7200 rpm disks on a hardware RAID 5 controller with battery-backed writeback cache, if that matters.
2nd EDIT: I cannot change the host hardware, including the RAID setup :(
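Since the RAID layout is fixed, one thing I can still tune at mkfs time is stripe alignment. A sketch assuming a 64 KB chunk size and 3 data disks (both assumptions - the real chunk size would have to be read from the controller; the device name is hypothetical as above):

    # stride = chunk size / block size = 64K / 4K = 16
    # stripe-width = stride * number of data disks = 16 * 3 = 48
    mkfs.ext4 -E stride=16,stripe-width=48 /dev/vg0/vmware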