
I have a storage server running Rocky 8 and multiple clients running Rocky 8, CentOS 7, and Debian 10/11, which connect via NFS with different exports for different file systems. One of the file systems is ZFS; the other is not. This works fine with NFSv3, where I can simply list the directories to export in /etc/exports. Here is the /etc/exports for that setup:

/hpc/projects 10.0.20.100/32(rw,crossmnt,async,no_root_squash,no_subtree_check) 10.33.4.0/22(rw,crossmnt,nohide,async,no_root_squash,no_subtree_check) 10.34.4.0/22(rw,crossmnt,nohide,async,no_root_squash,no_subtree_check) 
/hpc/projects/subdir1 10.0.20.101/32(rw,crossmnt,async,no_root_squash,no_subtree_check)
/hpc/projects/subdir2 10.0.20.102/32(rw,crossmnt,async,no_root_squash,no_subtree_check)
/hpc/projects/subdir3 10.0.20.103/32(rw,crossmnt,async,no_root_squash,no_subtree_check)

/scratch 10.33.4.0/22(rw,async,no_root_squash,no_subtree_check) 10.34.4.0/22(rw,async,no_root_squash,no_subtree_check)

However, with NFSv4 the CentOS 7 clients cannot connect. So far I have tried setting both the ZFS mountpoint, /hpc, and a separate directory, /export, as the NFSv4 root filesystem. Both fail when mounted from a CentOS 7 client (IP 10.34.4.50) as NFSv4 but work as NFSv3. Here is what /etc/exports looks like for NFSv4 with the ZFS mount as the NFS root:

/hpc *(fsid=0,ro,insecure)
/hpc/projects 10.0.20.100/32(rw,crossmnt,async,no_root_squash,no_subtree_check,insecure) 10.33.4.0/22(rw,crossmnt,nohide,async,no_root_squash,no_subtree_check,insecure) 10.34.4.0/22(rw,crossmnt,nohide,async,no_root_squash,no_subtree_check,insecure) 
/hpc/projects/subdir1 10.0.20.101/32(rw,crossmnt,async,no_root_squash,no_subtree_check,insecure)
/hpc/projects/subdir2 10.0.20.102/32(rw,crossmnt,async,no_root_squash,no_subtree_check,insecure)
/hpc/projects/subdir3 10.0.20.103/32(rw,crossmnt,async,no_root_squash,no_subtree_check,insecure)

/scratch 10.33.4.0/22(rw,async,no_root_squash,no_subtree_check) 10.34.4.0/22(rw,async,no_root_squash,no_subtree_check)

When mounting via NFSv4 with mount -vvv -t nfs4 10.34.4.100:/ mnt, I get:

mount.nfs4: access denied by server while mounting 10.34.4.100:/

However, when I try mount -vvv -t nfs4 10.34.4.100:/hpc mnt the mount completes, just not with NFSv4. Running mount shows this:

10.34.4.100:/hpc on /mnt type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.34.4.100,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=10.34.4.108)

Seeing vers=3 and mountvers=3 leads me to believe this is using NFSv3 and not NFSv4 as I told it to. I did check the server's /var/log/messages and found these lines when I attempt to mount with NFSv4 (the IP listed is one of the client IPs):

Mar 30 10:56:12 servername rpc.mountd[14813]: refused mount request from 10.34.4.1 for / (/): not exported
Mar 30 10:56:15 servername rpc.mountd[14813]: refused mount request from 10.34.4.1 for /hpc (/): not exported
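
(For completeness: the protocol versions the server is actually willing to serve can be double-checked on the server itself. On a stock Linux kernel NFS server, something along these lines should list them; the example output in the comments is only illustrative.)

cat /proc/fs/nfsd/versions    # e.g. "+3 +4 +4.1 +4.2" when NFSv4 is enabled
rpcinfo -p | grep nfs         # shows which NFS versions are registered with rpcbind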

Note that the /scratch export mounts just fine with the NFSv4 directives in /etc/exports, and all other clients (Rocky 8 and Debian) mount via NFSv4 without issue.

I'm hoping that someone can give me an idea as to what is going on. I will point out that SELinux is disabled on the server and the CentOS clients, and no firewall is running on the server for the purposes of this test.

Chris Woelkers

1 Answer


Don't specify fsid=0 for /hpc. Doing this turns the directory into the "export root" for NFSv4, meaning that e.g. your /hpc/projects directory is exported as server:/projects instead of retaining its original path.
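
For example, with the fsid=0 export above, the NFSv4 mount commands from a client would look roughly like this (hypothetical commands, assuming a client host that the exports actually permit):

# With fsid=0 on /hpc, NFSv4 paths are relative to that pseudo-root:
mount -t nfs4 10.34.4.100:/ /mnt             # the pseudo-root, i.e. /hpc on the server
mount -t nfs4 10.34.4.100:/projects /mnt     # /hpc/projects on the server
# ...while NFSv3 still uses the real server-side path:
mount -t nfs -o vers=3 10.34.4.100:/hpc/projects /mnt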

While NFSv4 allows you to create a virtual export root, there's no requirement to actually do so – the Linux kernel NFS server will automatically generate a virtual fsid=0 root at / if you don't define one. So as long as you do not include any fsid=0 entry, the shares will be mountable via both NFSv3 and NFSv4 using the exact same path.
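
As a minimal sketch (reusing the networks and options from the question, abridged to two entries, with the fsid=0 line simply dropped), /etc/exports could look like:

/hpc/projects 10.33.4.0/22(rw,crossmnt,nohide,async,no_root_squash,no_subtree_check) 10.34.4.0/22(rw,crossmnt,nohide,async,no_root_squash,no_subtree_check)
/scratch 10.33.4.0/22(rw,async,no_root_squash,no_subtree_check) 10.34.4.0/22(rw,async,no_root_squash,no_subtree_check)

After re-exporting, the same path should then mount over either protocol version:

exportfs -ra
mount -t nfs4 10.34.4.100:/hpc/projects /mnt
mount -t nfs -o vers=3 10.34.4.100:/hpc/projects /mnt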

user1686