
Can using UNIX pipes for process synchronization lead to starvation? For example:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

struct pipesem { int rfd; int wfd; };   /* assumed layout: read and write ends */

/* Wait: block until one "token" byte can be consumed from the pipe. */
void pipesem_wait(struct pipesem *sem)
{
    char onebyte = 'A';
    if (read(sem->rfd, &onebyte, sizeof(onebyte)) != sizeof(onebyte)) {
        perror("read from pipe");
        exit(1);
    }
}

This is how we read from the pipe. When multiple processes want to read from it, is it guaranteed that all requests will be handled in some particular (e.g. FIFO) order, or is there still a possibility of starvation, even if it might never happen in practice?
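
For reference, the matching operations would presumably look something like this (a sketch; only the wait side is shown above, so the init and signal below are assumptions about the rest of the API):

/* Signal: release one "token" by writing a byte into the pipe. */
void pipesem_signal(struct pipesem *sem)
{
    char onebyte = 'A';
    if (write(sem->wfd, &onebyte, sizeof(onebyte)) != sizeof(onebyte)) {
        perror("write to pipe");
        exit(1);
    }
}

/* Init: create the pipe and seed it with the initial count. */
void pipesem_init(struct pipesem *sem, int initval)
{
    int fds[2];
    if (pipe(fds) == -1) {
        perror("pipe");
        exit(1);
    }
    sem->rfd = fds[0];
    sem->wfd = fds[1];
    while (initval-- > 0)
        pipesem_signal(sem);
}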

1 Answer

You can't guarantee which of many reading processes will read the data, but you can guarantee that exactly one of them will read each byte. A pipe is really just a shared buffer inside the kernel, with file descriptors to read and write it; if multiple processes share these descriptors, it's up to the scheduler to decide which one gets the data.
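
As an illustration (my own sketch, not part of the original question), this toy program forks two readers on one pipe; the kernel hands each byte to exactly one child, but which child wins each time is up to the scheduler:

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) == -1) {
        perror("pipe");
        exit(1);
    }
    for (int i = 0; i < 2; i++) {
        if (fork() == 0) {              /* child i: read bytes until EOF */
            char c;
            close(fds[1]);              /* close unused write end so read() can see EOF */
            while (read(fds[0], &c, 1) == 1)
                printf("child %d got '%c'\n", i, c);
            exit(0);
        }
    }
    close(fds[0]);
    write(fds[1], "ABCDEF", 6);         /* six bytes split between the two readers */
    close(fds[1]);                      /* EOF lets both children exit */
    while (wait(NULL) > 0)
        ;
    return 0;
}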

So if you mean starvation as in one or more processes never reading any data, that's quite possible: if the data is written slowly enough that one process can consume it as fast as it arrives, the other processes may never see any. On the other hand, it may go round-robin across all the processes; it just depends on how the scheduling occurs. You cannot rely on either case, and the behaviour may depend on the Unix flavour, even on the version of that flavour and the hardware it's running on.

However, you can rely on all the data being consumed with none lost, and you can rely on it being read out of the pipe in FIFO order. That said, imagine a write of "ABC" is done and process 1 reads it, then a write of "DEF" is done and process 2 reads that. There's no guarantee that the processes will be scheduled such that process 1 finishes handling its input before process 2. So although the order of reading from the pipe is FIFO, what happens after that is again up to how the processes are scheduled.

As the first commenter below points out, it's also worth mentioning that write() calls to a pipe are atomic as long as you're writing at most PIPE_BUF bytes of data (POSIX requires PIPE_BUF to be at least 512; on Linux it's 4096, defined in limits.h). This guarantees that the block of data won't be interleaved with data from any other process writing to the same pipe. Above that limit, the standards don't specify whether the write() will be atomic. Also remember that for large blocks of data you may get a partial write, which you should always handle. See the SO question and answer that the commenter linked for more information.
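
For the partial-write case, the usual pattern is a retry loop along these lines (a generic sketch; the name write_all is my own invention):

#include <errno.h>
#include <unistd.h>

/* Keep calling write() until all len bytes are out, retrying on EINTR. */
ssize_t write_all(int fd, const void *buf, size_t len)
{
    const char *p = buf;
    size_t left = len;
    while (left > 0) {
        ssize_t n = write(fd, p, left);
        if (n < 0) {
            if (errno == EINTR)
                continue;               /* interrupted by a signal: retry */
            return -1;                  /* real error: caller checks errno */
        }
        p += n;                         /* skip past the bytes already written */
        left -= n;
    }
    return (ssize_t)len;
}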

However, you appear to be reading data one byte at a time, so I'd surmise that you're also writing it one byte at a time and using this as a process synchronisation mechanism. You might like to consider instead using shared memory with pthreads condition variables, which can be a more elegant way to achieve the same goal; I had some old demo code for this which I've put online here.
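
As a rough illustration of that approach (my own sketch, not the demo code mentioned above; it assumes a system that supports anonymous shared mappings and process-shared pthread objects, and processes related by fork()):

#include <pthread.h>
#include <sys/mman.h>

struct shsem {
    pthread_mutex_t mtx;
    pthread_cond_t  cond;
    int             count;
};

/* Map a shared anonymous page and build a counting semaphore in it.
   Call this before fork() so both processes see the same mapping. */
struct shsem *shsem_create(int initval)
{
    pthread_mutexattr_t ma;
    pthread_condattr_t ca;
    struct shsem *s = mmap(NULL, sizeof(*s), PROT_READ | PROT_WRITE,
                           MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (s == MAP_FAILED)
        return NULL;
    pthread_mutexattr_init(&ma);
    pthread_mutexattr_setpshared(&ma, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(&s->mtx, &ma);
    pthread_condattr_init(&ca);
    pthread_condattr_setpshared(&ca, PTHREAD_PROCESS_SHARED);
    pthread_cond_init(&s->cond, &ca);
    s->count = initval;
    return s;
}

void shsem_wait(struct shsem *s)
{
    pthread_mutex_lock(&s->mtx);
    while (s->count == 0)               /* loop guards against spurious wakeups */
        pthread_cond_wait(&s->cond, &s->mtx);
    s->count--;
    pthread_mutex_unlock(&s->mtx);
}

void shsem_signal(struct shsem *s)
{
    pthread_mutex_lock(&s->mtx);
    s->count++;
    pthread_cond_signal(&s->cond);      /* or pthread_cond_broadcast() to wake all */
    pthread_mutex_unlock(&s->mtx);
}

Unlike the single shared pipe, this gives you a real counter and, via the condition variable, control over whether you wake one waiter or all of them.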

Note: if portability is important to you then you may wish to stick to pipes; I suspect they're the most likely to work across the widest variety of platforms. The pthreads approach should be fairly portable, but process-shared memory is probably a little less so.

In short, if you're using this to wake up worker processes from a pool and you don't care which process wakes, pipes work fine as an IPC mechanism. If you want to wake up a specific process, however, you'll need one pipe per worker, or some other mechanism. For example, with pthreads condition variables you can wake every waiting process by calling pthread_cond_broadcast() instead of pthread_cond_signal().
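
Here's what the one-pipe-per-worker arrangement might look like (again my own sketch, with the names and worker count invented for illustration):

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

#define NWORKERS 4

int main(void)
{
    int wake[NWORKERS][2];              /* wake[i] is worker i's private pipe */

    for (int i = 0; i < NWORKERS; i++) {
        if (pipe(wake[i]) == -1) {
            perror("pipe");
            exit(1);
        }
        if (fork() == 0) {              /* worker i */
            char c;
            for (int j = 0; j <= i; j++)
                close(wake[j][1]);      /* close every inherited write end */
            while (read(wake[i][0], &c, 1) == 1)
                printf("worker %d woken\n", i);
            exit(0);
        }
        close(wake[i][0]);              /* parent keeps only the write ends */
    }
    write(wake[2][1], "x", 1);          /* wake worker 2 and nobody else */
    for (int i = 0; i < NWORKERS; i++)
        close(wake[i][1]);              /* EOF lets every worker exit */
    while (wait(NULL) > 0)
        ;
    return 0;
}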

Does that answer your question?

Cartroo
  • Good answer. Also see [Atomic write on an unix socket?](http://stackoverflow.com/questions/4669710/atomic-write-on-an-unix-socket) – thuovila Jan 30 '13 at 20:18