I have a server running Ubuntu 12.10 with 12 disks attached. I am sharing all of these disks over my 10-gigabit network using NFSv4, but the performance I get over NFS is generally poor compared to what I can get locally. The usual suggestion I have found in my research is to use the async option instead of sync in the server's exports file. However, that is simply not an option for my purposes. I understand that sync introduces a performance hit, but I would not expect it to be of the magnitude I am seeing.
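For reference, the export entries look roughly like this (the paths and subnet below are placeholders, not my exact configuration):

```
# /etc/exports -- illustrative entries; one line per exported disk
/export/disk01  10.0.0.0/24(rw,sync,no_subtree_check)
/export/disk02  10.0.0.0/24(rw,sync,no_subtree_check)
# ... and so on for each of the 12 disks, all with sync rather than async
```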
I find that the more disks I actively use from the NFS client, the worse my per-disk throughput gets. For example, if I write to only 1 disk, I can write at 60 MB/s; if I write to all 12 disks at once, I get only 12 MB/s per disk. Equivalent local tests yield 200 MB/s per disk with no problem. Are there tweaks that can be made to optimize NFS performance across multiple disks? Neither the CPU nor memory appears to be heavily utilized while the server is under this load.
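For anyone wanting to reproduce a comparable measurement, a sequential write test along these lines is what I mean by "writing" above (the directory is a placeholder; on the real setup it would point at an NFS-mounted disk, and one instance would run per disk under test):

```shell
# Sequential write test for a single disk.
# TESTDIR is a placeholder; point it at an NFS mount, e.g. /mnt/disk01.
TESTDIR="${TESTDIR:-$(mktemp -d)}"

# Write 64 MiB of zeros; conv=fdatasync forces the data to stable
# storage before dd exits, so the sync export semantics are honored
# and dd's reported rate reflects real write throughput.
dd if=/dev/zero of="$TESTDIR/nfs-test.bin" bs=1M count=64 conv=fdatasync

# Clean up the test file.
rm -f "$TESTDIR/nfs-test.bin"
```

dd prints the measured throughput on completion; running one instance per mount in parallel shows how per-disk throughput scales with the number of active disks.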