
I have an NFS server that was working correctly with a share on /exports/something. However, when I mount /dev/mapper/something (an LVM volume with more space) on /exports/something, NFS stops working.

The NFS server runs CentOS 7.

The NFS client runs Debian 8.

/etc/fstab:

 <file system> <mount point>   <type>  <options>       <dump>  <pass>
/dev/mapper/centos-something  /exports/something   xfs   defaults 0 0

The NFS client could mount /exports/something with no problem before /exports/something became a mount point on the LVM volume. If I unmount the LVM volume, the NFS share starts working again; if I remount the LVM volume on /exports/something, NFS stops working again (but the LVM volume itself works fine).

How can I get the NFS client to mount the LVM-backed directory on the NFS server?

When it doesn't work, the server has all daemons running, but the client sees only its own local files. There are no NFS-related log entries.
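To narrow down where the mount is going wrong, it can help to compare what the server thinks it is exporting with what the client sees. A minimal diagnostic sketch (the hostname `nfsserver` is a placeholder for your server):

```shell
# On the client: list what the server currently exports
# ("nfsserver" is a placeholder for your server's hostname)
showmount -e nfsserver

# On the server: show the active export table with its options
exportfs -v

# On the server: confirm the LVM volume is actually mounted
# on the export path
findmnt /exports/something
```

If `findmnt` shows the LVM volume mounted but clients still see stale or empty content, the export was most likely set up before the mount happened.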

I want to switch the NFS export so that it is backed by the LVM volume.

NFS version on the Debian 8 client:

nfs-common                        1:1.2.8-9 

NFS version on the CentOS 7 server:

libnfsidmap.x86_64   0.25-11.el7                 @base      
nfs-utils.x86_64     1:1.3.0-0.8.el7             @base 
  • What does "does not work" mean? Error messages? Log file entries? Anything? – Sven May 30 '15 at 14:43
  • There is no log entry for the nfs. The server have all daemons running but the client have only the files on his side. – jul May 30 '15 at 15:21
  • 1
    it's unclear from the question: do you unmount the old share before mounting the lvm (are you effective switching the source of the mount or are you trying to have both mounted at the same time - which might be the source of the problem)? - may seem silly, but I just checked that such parallel mounts are allowed... – Dan Cornilescu May 30 '15 at 17:02
  • which version of NFS on both client and server? – aif May 31 '15 at 10:52
  • You should mount locally, then export; otherwise the mount will hide the directory's content. – Archemar Jun 08 '15 at 14:46

2 Answers


This is probably because you did not restart the NFS server when mounting a filesystem. The NFS server will take a handle to the filesystem on which the exported directory exists; if you change that by adding a mountpoint, the NFS server will not notice and needs a kick in the butt. This is required for the protocol, BTW, because for some operations, NFS encodes inodes into the network protocol. Note that this means that if you have clients with open files or open locks (or similar) when you try to do this, Bad Things(TM) will happen. So don't do that :-)
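On a CentOS 7 server, the sequence this answer describes might look roughly like the following (a sketch only; the device and path are taken from the question, and the service name assumes the stock nfs-utils setup):

```shell
# Mount the LVM volume on the export path first...
mount /dev/mapper/centos-something /exports/something

# ...then re-export everything in /etc/exports and restart the
# NFS server so it takes a fresh handle to the new filesystem
exportfs -ra
systemctl restart nfs-server
```

As the answer warns, restarting the NFS server while clients hold open files or locks can cause problems, so do this during a quiet window.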

A similar problem will also show up if you export a filesystem which has a subdirectory that is a mount point; e.g., if you export /srv/nfs through NFS and have a filesystem mounted on /srv/nfs/stuff, then unless you explicitly add /srv/nfs/stuff to /etc/exports, this won't show up either. The reason is, again, inodes showing up in the protocol. You can work around that by using the nohide export option, but there are a few gotchas with that method. Rather than trying to reproduce the documentation, I'd suggest you go read the man page (man 5 exports) and search for nohide there.
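As a sketch, an /etc/exports entry using nohide might look like this (the paths and client network are placeholders, not taken from the question; see man 5 exports for the caveats before relying on it):

```
/srv/nfs        192.168.1.0/24(rw,sync)
/srv/nfs/stuff  192.168.1.0/24(rw,sync,nohide)
```

After editing /etc/exports, run exportfs -ra on the server to apply the change.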

– Wouter Verhelst

Maybe it is an SELinux-related problem. Try issuing setenforce 0 and restarting the NFS server, then re-mount your share on the client side.

– shodanshok
  • Selinux is already disabled on the centos server. – jul May 30 '15 at 21:17
  • Jul, if you are having problems commenting on or editing your post because you've registered twice, you should [contact the admin staff](http://serverfault.com/contact), and ask for your accounts to be merged. – MadHatter May 31 '15 at 06:50