Our NFS clients show excessively high 'access' operation rates, which makes NFS slow. Could you suggest what we should investigate and how to resolve this? I'm attaching the nfsstat output from an NFS client, the client's /etc/fstab, and the server's /etc/exports.
root@g0:~# nfsstat
Client rpc stats:
calls retrans authrefrsh
6498703 0 6499048
Client nfs v4:
null read write commit open
5 0% 163950 2% 911 0% 1 0% 4803 0%
open_conf open_noat open_dgrd close setattr
0 0% 2590 0% 0 0% 3919 0% 500 0%
fsinfo renew setclntid confirm lock
15 0% 0 0% 0 0% 0 0% 0 0%
lockt locku access getattr lookup
0 0% 0 0% 6304625 97% 7592 0% 5179 0%
lookup_root remove rename link symlink
5 0% 976 0% 397 0% 23 0% 0 0%
create pathconf statfs readlink readdir
125 0% 10 0% 27 0% 118 0% 399 0%
server_caps delegreturn getacl setacl fs_locations
25 0% 2345 0% 0 0% 0 0% 0 0%
rel_lkowner secinfo fsid_present exchange_id create_session
0 0% 0 0% 0 0% 11 0% 16 0%
destroy_session sequence get_lease_time reclaim_comp layoutget
7 0% 128 0% 7 0% 9 0% 0 0%
getdevinfo layoutcommit layoutreturn secinfo_no test_stateid
0 0% 0 0% 0 0% 5 0% 0 0%
free_stateid getdevicelist bind_conn_to_ses destroy_clientid seek
0 0% 0 0% 0 0% 0 0% 0 0%
allocate deallocate layoutstats clone
0 0% 0 0% 0 0% 1 0%
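For reference, this is how I extract the 'access' counter when watching its growth. This is a minimal sketch: it parses a captured sample line so it is self-contained, and it assumes the one-counter-per-line format of `nfsstat -c -l` (field positions may differ across nfs-utils versions).

```shell
# Extract the NFSv4 client 'access' counter from `nfsstat -c -l` style
# output. The list format ("nfs v4 client access: N") is easier to parse
# than the default columnar output shown above. A captured sample line is
# used here; live, replace the printf with `nfsstat -c -l`.
sample='nfs v4 client        access:      6304625'
count=$(printf '%s\n' "$sample" | awk '$4 == "access:" {print $5}')
echo "access calls so far: $count"
# Live rate estimate: sample twice, 10 s apart, and divide the delta:
#   before=$(nfsstat -c -l | awk '$4 == "access:" {print $5}')
#   sleep 10
#   after=$(nfsstat -c -l | awk '$4 == "access:" {print $5}')
#   echo "ACCESS ops/sec: $(( (after - before) / 10 ))"
```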
# /etc/exports: the access control list for filesystems which may be exported
# to NFS clients. See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4 gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes gss/krb5i(rw,sync,no_subtree_check)
#
/home 10.10.10.0/24(rw,sync,no_all_squash,no_root_squash,no_subtree_check)
/root 10.10.10.0/24(rw,sync,no_all_squash,no_root_squash,no_subtree_check)
/proj 10.10.10.0/24(rw,sync,no_all_squash,no_root_squash,no_subtree_check)
/workspace 10.10.10.0/24(rw,sync,no_all_squash,no_root_squash,no_subtree_check)
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/ubuntu-vg/ubuntu-lv during curtin installation
/dev/disk/by-id/dm-uuid-LVM-q5DcFjbshxTDlCuacLnsd3IFvJQOfS4SODLBIwBPX8cZdUgBMwL90aXZJUMgf1iz / ext4 defaults 0 1
# /boot was on /dev/nvme0n1p2 during curtin installation
/dev/disk/by-uuid/2a559d14-d24f-4e83-9467-032a1bd81887 /boot ext4 defaults 0 1
/swap.img none swap sw 0 0
10.10.10.1:/home /home nfs defaults 0 0
10.10.10.1:/root /root nfs defaults 0 0
10.10.10.1:/proj /proj nfs defaults 0 0
10.10.10.1:/workspace /workspace nfs lookupcache=none 0 0
10.10.10.2:/cabinet /cabinet nfs defaults 0 0
Two peculiar behaviors were observed:
- root gets noticeably better NFS performance than regular users (which might be expected, since 'access' counts permission checks)
- For regular users, NFS performance is fast on the first login after a reboot, but drops from the second login onward.
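To make the second observation concrete, I snapshot the 'access' counter immediately before and after each login session and compare the growth. A sketch of that bookkeeping (the counter values below are illustrative placeholders, not measurements; live, each snapshot would come from `nfsstat -c -l`):

```shell
# Compare ACCESS counter growth across two identical login sessions.
# The four snapshot values are placeholders for illustration; live, each
# would be taken with: nfsstat -c -l | awk '$4 == "access:" {print $5}'
login1_before=6304625; login1_after=6310901
login2_before=6310901; login2_after=6401337
echo "1st login generated $(( login1_after - login1_before )) ACCESS calls"
echo "2nd login generated $(( login2_after - login2_before )) ACCESS calls"
```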