
NFSv4 over a fast network, backed by a disk with average IOPS. The server load rises sharply during large file transfers, and the bottleneck seems to be disk IOPS.

The test case:

/etc/exports
server# /mnt/exports    192.168.6.0/24(rw,sync,no_subtree_check,no_root_squash,fsid=0)
server# /mnt/exports/nfs        192.168.6.0/24(rw,sync,no_subtree_check,no_root_squash)
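
When /etc/exports changes on a running server, re-exporting and checking the result looks roughly like this (standard exportfs usage, shown here just for completeness):

server# exportfs -ra
server# exportfs -v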

client# mount -t nfs 192.168.6.131:/nfs /mnt/nfstest  -vvv
(or client# mount -t nfs 192.168.6.131:/nfs /mnt/nfstest -o nfsvers=4,tcp,port=2049,async -vvv)
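
To double-check which options the mount actually negotiated (vers, proto, rsize, wsize), something like this on the client should show them:

client# nfsstat -m
(or client# grep nfs /proc/mounts)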

It works well with the 'async' flag, but with 'sync' the transfer rate drops from 50 MB/s to 500 kB/s.

According to http://ubuntuforums.org/archive/index.php/t-1478413.html the problem can apparently be solved by reducing the write size to wsize=300. That gives a small improvement here, but it is not the solution.
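
For reference, the reduced-wsize variant from that thread would look roughly like this (the client may round the value up to its supported minimum):

client# mount -t nfs 192.168.6.131:/nfs /mnt/nfstest -o nfsvers=4,tcp,port=2049,wsize=300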

Simple test with dd:

client# dd if=/dev/zero bs=1M count=6000 |pv | dd of=/mnt/nfstest/delete_me
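
As a rough baseline for what the disk itself can do with synchronous writes (bypassing NFS; the file name is just an example):

server# dd if=/dev/zero of=/mnt/exports/nfs/local_test bs=1M count=1000 oflag=dsync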




server# iotop
  TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND                                                                                                    
 1863 be/4 root        0.00 B/s   14.17 M/s  0.00 % 21.14 % [nfsd]
 1864 be/4 root        0.00 B/s    7.42 M/s  0.00 % 17.39 % [nfsd]
 1858 be/4 root        0.00 B/s    6.32 M/s  0.00 % 13.09 % [nfsd]
 1861 be/4 root        0.00 B/s   13.26 M/s  0.00 % 12.03 % [nfsd]

server# dstat -r --top-io-adv --top-io --top-bio --aio -l -n -m
--io/total- -------most-expensive-i/o-process------- ----most-expensive---- ----most-expensive---- async ---load-avg--- -NET/total- ------memory-usage-----
 read  writ|process              pid  read write cpu|     i/o process      |  block i/o process   | #aio| 1m   5m  15m | recv  send| used  buff  cach  free
10.9  81.4 |init [2]              1    5526B  20k0.0%|init [2]   5526B   20k|nfsd         10B  407k|   0 |2.92 1.01 0.54|   0     0 |29.3M 78.9M  212M 4184k
1.00  1196 |sshd: root@pts/0      1943 1227B1264B  0%|sshd: root@1227B 1264B|nfsd          0    15M|   0 |2.92 1.01 0.54|  44M  319k|29.1M 78.9M  212M 4444k
   0  1365 |sshd: root@pts/0      1943  485B 528B  0%|sshd: root@ 485B  528B|nfsd          0    16M|   0 |2.92 1.01 0.54|  51M  318k|29.5M 78.9M  212M 4708k

Do you know any way of limiting the load without big changes in the configuration?

I am considering limiting the network speed with wondershaper or iptables, but that is not ideal, since other traffic would be throttled as well.
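
For the record, the wondershaper variant would be something like this (assuming the NFS traffic arrives on eth0; values are in kbit/s, so 160000 is roughly 20 MB/s), but it caps everything on that interface:

server# wondershaper eth0 160000 160000
server# wondershaper clear eth0    (remove the limit again)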

Someone suggested cgroups. That may be worth trying, but it is still not my 'feng shui': I would prefer to find a solution in the NFS configuration, since that is where the problem lies, and it would be nice to have everything in one place.

If it were possible to raise the 'sync' speed to 10-20 MB/s, that would be enough for me.


1 Answer


I think I nailed it:

On the server, change the disk scheduler:

for i in  /sys/block/sd*/queue/scheduler  ; do echo deadline > $i ; done
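
That setting does not survive a reboot. One way to make it persistent (assuming GRUB and a kernel that still offers the legacy 'deadline' elevator) is the elevator= boot parameter:

/etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet elevator=deadline"

then run update-grub.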

Additionally (a small improvement; find the best value for you):

/etc/default/nfs-kernel-server
# Number of servers to start up
-RPCNFSDCOUNT=8
+RPCNFSDCOUNT=2

Restart the services:

/etc/init.d/rpcbind restart
/etc/init.d/nfs-kernel-server restart
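
To confirm that the lower thread count took effect, the running number of nfsd threads can be read back (it should report 2 after the change):

server# cat /proc/fs/nfsd/threads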

PS: my current configs:

server:

/etc/exports
/mnt/exports    192.168.6.0/24(rw,no_subtree_check,no_root_squash,fsid=0)
/mnt/exports/nfs        192.168.6.0/24(rw,no_subtree_check,no_root_squash)

client:

/etc/fstab
192.168.6.131:/nfs /mnt/nfstest nfs rsize=32768,wsize=32768,tcp,port=2049 0 0