
We have a Xen server where the dom0 acts as both vm host and storage host. Nothing special about the server, the storage is an internal RAID array. No failover partner or anything.

Each of the domUs has its own LVM volumes for local storage, and then shares partitions like /home over NFSv4.

Given that the storage is all physically local to the machine, it would seem inefficient to push this over NFS just to enable shared access to a volume.

Can anyone recommend or suggest an alternative way of sharing a volume that is local to the host and VMs (effectively)?

Edit: and if NFSv4 is the best approach, again given the VMs are co-located and the virtual nature of the network connectivity between dom0 and domU, is there a well-known ideal set of mount options to maximise performance (in a general usage scenario with a variety of transaction types) or is it still a case of wheeling out bonnie and running tests?

Paul
  • Thanks for the answers, I was half expecting a different response, so I have added the extension to make this question useful for anyone else with the same scenario. – Paul Jul 26 '11 at 08:58

2 Answers


NFSv4 is as good as it gets. There was a project called XenFS which looked promising, but it never reached a stable release. It has been dormant for a couple of years now. I'm not sure you can find the source anymore.
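For reference, a minimal NFSv4 configuration for this scenario might look like the following. The subnet, paths, and tuning values here are illustrative placeholders, not tested recommendations; the rsize/wsize and sync/async choices in particular should be benchmarked for your workload.

```
# /etc/exports on the dom0 (storage host).
# fsid=0 marks the NFSv4 pseudo-root; 192.168.122.0/24 stands in
# for whatever subnet the virtual bridge between dom0 and domUs uses.
/export         192.168.122.0/24(rw,fsid=0,no_subtree_check,sync)
/export/home    192.168.122.0/24(rw,no_subtree_check,sync)

# /etc/fstab entry on each domU.
# Large rsize/wsize and noatime are common starting points on a fast
# virtual network, but there is no universal "ideal" set of options.
dom0:/home  /home  nfs4  rw,hard,noatime,rsize=1048576,wsize=1048576  0 0
```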

h0tw1r3
  • XenFS is/was a research project by Mark Williams. The page is still up at the [old-wiki](http://wiki.xen.org/old-wiki/xenwiki/XenFS.html). The last news from Mark about it was in [March 2009](https://blog.xenproject.org/2009/03/26/status-of-xenfs/). – adam Nov 26 '15 at 16:53

As h0tw1r3 says, NFS is as good as you're going to get. You also don't want to tie yourself to anything that's going to require all the VMs to be co-located, because that seriously inhibits your ability to scale those VMs. If they start to get big, and you decide that it'd be nice to spread them out over a few physical hosts (either for capacity or redundancy), a dom0-local filesystem is going to put a severe crimp in that plan.

Inefficiency doesn't matter (except when it does), and worrying about it before you need to is just premature optimisation.

womble
  • Well I have one VM that does a lot of compressing and decompressing archives. Doing this across nfs introduces a lot of overhead and led to the question. I am considering moving the archives to local, decompressing then moving them back after, which seems defeatist. – Paul Jul 26 '11 at 09:00
  • It's not defeatist, it's correct engineering. Localised data access is always a win. I'd possibly look at whether the work can be done permanently localised, to avoid having to copy it around, but I'm a bit down on network filesystems and prefer more intelligent solutions (see http://serverfault.com/questions/286910/setting-up-a-rails-2-3-x-app-on-ec2-for-easy-scalability/286972#286972 for an example) – womble Jul 26 '11 at 09:41
  • It is only correct engineering if your starting assumptions match the use case. I don't think there is enough detail here to warrant that assumption. Then again, I am a bit down on localised solutions and prefer more intelligent centralised networked solutions. – Paul Jul 27 '11 at 04:17
  • Then you are yet to achieve enlightenment, young grasshopper. Localising as much as possible is *great* for performance and reliability, whether it's avoiding cache line misses or avoiding having to stream data across a network in preference to reading it off fast local disk. – womble Jul 27 '11 at 08:47
  • I agree with you, I *am* ahead of my time. In a few years the idea of local storage will seem quaint and absurd. However, I will make do until everyone catches up. – Paul Jul 27 '11 at 14:35
  • I'd recommend getting your head out of the cloud(s). – womble Jul 27 '11 at 23:22