
Similar to Concurrent FTP access.

How is concurrent file access handled for NFS? Say one client is updating/overwriting a file on an NFS server, and a process on the server is reading that same file directly from the file system at the same time. Is there some sort of atomic handling of file reads/writes in NFS/Linux, or do I have to work with tmp files to ensure data consistency?

I'm worried that the process reading the file will get corrupt data.

Kristian

3 Answers


Specific daemons (rpc.statd and rpc.lockd) help with OS-level locking, but in general you don't want to rely on it; as Josip writes, many Unix applications implement their own application-level locking.

If you're going to have write contention on files, standard practice is not to serve such files over NFS in the first place.
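
As a minimal sketch of what such application-level locking can look like, assuming all readers and writers cooperate and the file already exists (the path handling and function name are illustrative, not part of the answer): POSIX byte-range locks taken with fcntl are the kind the NFS client hands to rpc.lockd, so they are visible to other NFS clients and, in general, to processes on the server as well.

    import fcntl

    def update_locked(path, data):
        # Exclusive advisory lock: this only protects against processes
        # that also take fcntl locks; it does not stop an uncooperative
        # writer from scribbling on the file anyway.
        with open(path, "r+b") as f:        # assumes the file exists
            fcntl.lockf(f, fcntl.LOCK_EX)   # blocks until the lock is granted
            try:
                f.seek(0)
                f.write(data)
                f.truncate()                # drop any leftover old tail
            finally:
                fcntl.lockf(f, fcntl.LOCK_UN)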

reinierpost
  • So FTP is better? – Kristian Sep 01 '09 at 12:16
  • Who said anything about FTP? Files that are potentially modified by multiple people at similar times are best controlled by a revision/document control system. NFS isn't going to be any better/worse than CIFS/FTP/SFTP; it's usually up to the app to "lock" files or otherwise notify users that someone else has their dirty little fingers in a file. – prestomation Sep 01 '09 at 12:50
  • Ok, I was out of context there... The question isn't really about files being modified by multiple users; the scenario is rather one client process creating/updating files and a server process reading the same file. One solution I was offered was that the client writes to a tmp file and then renames it when done (in the context of an FTP transfer). How would that work in the NFS case? Should I be worried about the client cache, etc.? – Kristian Sep 01 '09 at 13:17
  • I think it would work fine! – reinierpost Sep 14 '09 at 08:03
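
To make the tmp-file-and-rename idea from the comments above concrete, here is a minimal sketch (function and variable names are mine, not from the thread). rename() is atomic within one filesystem, and NFS executes the rename as a single operation on the server, so a reader on the server sees either the complete old file or the complete new one, never a half-written mix:

    import os
    import tempfile

    def replace_atomically(path, data):
        directory = os.path.dirname(path) or "."
        # The temp file must live on the same filesystem as the target,
        # otherwise rename() degrades into a non-atomic copy.
        fd, tmp = tempfile.mkstemp(dir=directory)
        try:
            with os.fdopen(fd, "wb") as f:
                f.write(data)
                f.flush()
                os.fsync(f.fileno())   # make sure the data has been written out
            os.rename(tmp, path)       # atomic replacement of the target
        except BaseException:
            os.unlink(tmp)             # clean up the temp file on failure
            raise

Note that a reader which already has the old file open keeps reading the old data until it reopens the file; the pattern guarantees no torn reads, not an instant switchover.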

These conflicts are usually resolved through locks, and it is up to the application to ensure proper locking. That said, it should be noted that most applications do tend to lock files, especially during writes.

Josip Medved
  • Not really. First of all, from a design perspective, locking really is a last-resort measure. Second, locking isn't much of a tradition on Unix-like systems, and many applications don't do it; I've never seen Windows-style global mandatory locking of files. Third, applications that do lock files may use different mechanisms, so there is no single way to verify whether a file is locked, and nothing stops you from writing to it while another application believes it's locked. – reinierpost Aug 15 '16 at 20:43
  • On Windows, files are generally locked by default: when a file is open you cannot delete it or write to it. I think it is up to the developer to choose what kind of locking they want. – Archimedes Trajano Mar 22 '18 at 18:35
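
To illustrate the "no single way to verify" point from the comments above, here is a minimal sketch (my own illustration, not from the thread) of probing for a conflicting fcntl lock with a non-blocking attempt; it detects only locks taken through the same fcntl mechanism:

    import errno
    import fcntl

    def fcntl_locked_by_someone_else(path):
        with open(path, "rb") as f:
            try:
                # Non-blocking shared lock: fails if another process holds
                # a conflicting (exclusive) fcntl lock on this range.
                fcntl.lockf(f, fcntl.LOCK_SH | fcntl.LOCK_NB)
            except OSError as e:
                if e.errno in (errno.EACCES, errno.EAGAIN):
                    return True
                raise
            fcntl.lockf(f, fcntl.LOCK_UN)
            return False

An application that uses flock(), its own lock files, or no locking at all would go completely unnoticed by this check, which is exactly the problem the comment describes.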

NFS implements something called close-to-open consistency, which is a weak cache coherency model. See section 9.3.1 in the NFS 4 RFC.

In other words, when the client that has been modifying the file closes it, the client flushes the written data to the server. If some other client opens the file after that, it will see the new content. Or, if the other "client" is a local process on the server, it will see the new content immediately; there is no need to reopen.

If you need more fine-grained control over caching than that, you need to use byte-range locks. Again, see section 9.3.2 in the NFS 4 RFC. In that case, an NFS client will flush data when releasing a write lock and revalidate its cache when acquiring a lock.
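
As a minimal sketch of that lock-driven cache behaviour, assuming both sides use POSIX fcntl locks (which Linux NFS clients map onto the protocol's byte-range locks; the function name is illustrative), a reader could look like this; acquiring the lock revalidates the client's cache, so the read sees whatever the writer flushed when it dropped its write lock:

    import fcntl

    def read_consistent(path):
        with open(path, "rb") as f:
            # Acquiring the lock makes the NFS client revalidate its
            # cached pages against the server.
            fcntl.lockf(f, fcntl.LOCK_SH)
            try:
                return f.read()
            finally:
                # On the writer's side, releasing its write lock is what
                # flushes the dirty data back to the server.
                fcntl.lockf(f, fcntl.LOCK_UN)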

janneb