
I am a newbie to systems programming, so please bear with me if my question is vague.

I read that user-space buffers are used so that a whole block of data can be fetched from the kernel with a single system call (which carries a lot of overhead), and small reads and writes are then served cheaply from that buffer in user space. I understand why this method is efficient, but what I don't understand is this: these user-space buffers belong to each individual process that opens the file.

How will one process see the small changes made to a file when two processes are accessing it simultaneously?

Does this not create a problem, since a process may read stale data rather than the data that another process has changed but that is still sitting in that other process's user-space buffer?
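To make the buffering I am talking about concrete, here is a minimal sketch of what I understand happens with C stdio (my own illustration; the file name `data.txt` is just an example):

```c
#include <stdio.h>

int main(void)
{
    /* fopen() gives this process its own user-space buffer for the file. */
    FILE *fp = fopen("data.txt", "r");
    if (fp == NULL) {
        perror("fopen");
        return 1;
    }

    int c;
    /* Each fgetc() is served from that buffer; the kernel is only asked
     * for another block (via read()) when the buffer runs empty, so
     * another process's recent writes may not be visible yet. */
    while ((c = fgetc(fp)) != EOF)
        putchar(c);

    fclose(fp);
    return 0;
}
```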

Please excuse any mistakes.

jack wilson
  • Even if the buffers are a single `char`, you still have a race. If multi-process file manipulation is needed, then [other mechanisms are available](http://linux.die.net/man/2/flock). Absolutely this is an issue, but it is not to be solved at the kernel level, as that would make the normal case very inefficient. – artless noise Dec 16 '14 at 18:13
  • Yeah... but how do processes manage to solve such issues? If two processes access the same file on our system and are constantly updating it, won't this be a huge problem? – jack wilson Dec 17 '14 at 02:59
  • This is exactly what the [`flock`](http://linux.die.net/man/2/flock) linked above is for. Both processes must use it to get exclusive access. Another mechanism is to create a temporary file, copy the data, update it, and then do an atomic `rename` over the original. It really depends on the use case, and you must think about it carefully. You cannot have processes reading and writing arbitrary files at random and expect the data to be consistent. Buffers within the process can contain stale data, no matter what the kernel does. – artless noise Dec 17 '14 at 15:32
  • OK, I just thought of that as a flaw but didn't look into the practical solutions for how they do it. Thanks for the info, btw. @artless noise – jack wilson Dec 18 '14 at 16:31
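Following up on the `flock` suggestion in the comments above, here is a minimal sketch of cooperative advisory locking (assuming Linux; `shared.log` is just a placeholder name). Note that every process touching the file must take the lock for it to mean anything:

```c
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/file.h>

int main(void)
{
    int fd = open("shared.log", O_WRONLY | O_APPEND | O_CREAT, 0644);
    if (fd < 0) {
        perror("open");
        return EXIT_FAILURE;
    }

    /* Block until we hold an exclusive advisory lock. */
    if (flock(fd, LOCK_EX) < 0) {
        perror("flock");
        return EXIT_FAILURE;
    }

    /* Critical section: write while no other cooperating process can. */
    if (write(fd, "one record\n", 11) < 0)
        perror("write");

    flock(fd, LOCK_UN);   /* release the lock */
    close(fd);
    return EXIT_SUCCESS;
}
```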

1 Answer


Yes, this happens. Inter-process race conditions are present and can create weird behavior when I/O is buffered. Generally, though, input isn't buffered that much (usually only a line at a time), so it isn't that bad. There are also ways to get around it, for example through a function called `mmap`.
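For illustration, here is a rough sketch of the `mmap` approach (assuming Linux/POSIX; the file name `shared.dat` is just a placeholder and error handling is minimal). With `MAP_SHARED`, reads go through the page cache rather than a per-process stdio buffer:

```c
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(void)
{
    int fd = open("shared.dat", O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    struct stat sb;
    if (fstat(fd, &sb) < 0) {
        perror("fstat");
        return 1;
    }

    /* MAP_SHARED maps the file's pages directly; changes made by another
     * process through a MAP_SHARED mapping of the same file become
     * visible here without a private user-space buffer in between. */
    char *p = mmap(NULL, sb.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    fwrite(p, 1, sb.st_size, stdout);   /* dump the current contents */

    munmap(p, sb.st_size);
    close(fd);
    return 0;
}
```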

In some cases you want multiple processes to communicate, so you use pipes, sockets, or some other form of IPC (inter-process communication).
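As a rough illustration (not from the original answer), a minimal parent/child pipe sketch might look like this:

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int pfd[2];
    if (pipe(pfd) < 0) {
        perror("pipe");
        return 1;
    }

    pid_t pid = fork();
    if (pid == 0) {                         /* child: read end */
        close(pfd[1]);
        char buf[64];
        ssize_t n = read(pfd[0], buf, sizeof buf - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child got: %s\n", buf);
        }
        close(pfd[0]);
        return 0;
    }

    /* parent: write end */
    close(pfd[0]);
    const char *msg = "hello from parent";
    write(pfd[1], msg, strlen(msg));
    close(pfd[1]);
    wait(NULL);
    return 0;
}
```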

To answer what seems to be your main concern, though: no, this doesn't really create a big problem. Why would two processes be operating on the same file if they aren't meant to be aware of one another while they do it? It just isn't that common, and when it does happen, something weird occurs, the user instinctively runs one program at a time, and it's fixed.

randomusername
  • What I meant was: a common file accessed by both processes. One writes to it, and at the same instant the other reads it, but the second process is unable to read the updated data. Is that not a problem? – jack wilson Dec 15 '14 at 05:07
  • @jackwilson That depends, but generally for a program like `cat`, no. In fact, I can't think of a single coreutils program that would be negatively affected by this. – randomusername Dec 15 '14 at 05:14