
Is there a way to create a non-blocking/asynchronous named pipe, or something similar, in the shell? The idea is that programs could place lines in it, those lines would stay in RAM, and a program could read some lines from the pipe while leaving whatever it did not read in the FIFO. It is also very likely that programs will be writing to and reading from this FIFO at the same time. At first I thought this could be done with plain files, but after searching the web for a bit it seems nothing good can come from a file being read and written at the same time.

Named pipes would almost work, but there are two problems: first, reads/writes block if there is no one at the other end; second, even if I accept that writes block and set two processes to write to the pipe while no one is reading (one line from each process), then try `head -n 1 <fifo>`, I get just the one line I need, but both writing processes terminate and the second line is lost. Any suggestions?
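Here is roughly the experiment I describe above (paths are just for illustration), showing how the second line gets lost:

```sh
mkfifo /tmp/myfifo

# two writers; each open() blocks until a reader shows up
echo "line from writer 1" > /tmp/myfifo &
echo "line from writer 2" > /tmp/myfifo &

# take one line; head closes the FIFO after that, so whatever the
# other writer put into the pipe buffer is simply discarded
head -n 1 /tmp/myfifo
```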

Edit: maybe some intermediate program could be used to help with this, acting as a mediator between the writers and the readers?

morphles
  • You could do something like `mkfs /dev/ram1 1048576` (or a bigger number if you want) and then mount `/dev/ram1` anywhere. That is probably as close to "nonblocking" as you can get. It will, of course, not be nonblocking at all by default, just very fast (but a named pipe won't be nonblocking by default either). Nonblocking operation is something that a program needs to set on the file descriptor. – Damon Jun 27 '11 at 08:04
  • I thought about this option (creating a file in tmpfs or similar), but then the problem of writing and reading at the same time persists. More like writing and writing, since it's a file: one program writes to the end of the file, another has read some info from the start and now needs to delete the first lines, so writing at the end and deleting at the front happen at the same time. I could not find a solution to this, otherwise I would have used it. – morphles Jun 27 '11 at 08:13
  • It's hard (if not impossible) to do such a thing without writing a program. Something that might _almost_ work: renaming is atomic, so each producer could write each separate task to an individual tempfile, close the file, and rename it according to some "well-known" pattern. Each consumer could rename the next file to something random (so another consumer won't pick it up), read the contents, and delete the file. Though that would only work on a per-task level (whatever the producers write as one unit), not on a per-line level; see the sketch after these comments. – Damon Jun 27 '11 at 09:23
  • Writing a small program that connects to two pipes (one reading and one writing) and does some buffering on its own is really out of the question? Basically something like `tee` with a copy to a private mapping. This would probably do what you want. – Damon Jun 27 '11 at 09:26
  • Yeah, I thought about using other programs to add buffering, but maybe there are programs that could already be used for this. – morphles Jun 27 '11 at 09:50
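A minimal sketch of the rename-based hand-off Damon describes above; every path, pattern, and function name here is invented for illustration:

```sh
#!/bin/sh
SPOOL=/tmp/spool            # hypothetical spool directory
mkdir -p "$SPOOL"

# Producer: write one task to a private temp file, then publish it
# atomically under a well-known pattern via rename.
produce() {
    tmp=$(mktemp "$SPOOL/.tmp.XXXXXX")
    printf '%s\n' "$1" > "$tmp"
    mv "$tmp" "$SPOOL/task.$(date +%s%N).$$"
}

# Consumer: claim one task by renaming it to a private name (so no
# other consumer can grab the same file), print it, then delete it.
consume() {
    for f in "$SPOOL"/task.*; do
        [ -e "$f" ] || return 1          # nothing queued
        claim="$SPOOL/.claimed.$$"
        if mv "$f" "$claim" 2>/dev/null; then
            cat "$claim"
            rm -f "$claim"
            return 0
        fi
    done
    return 1
}
```

As Damon notes, this works per task (one file per unit the producer writes), not per line.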

1 Answer


You can use a special program for this purpose: buffer. buffer is designed to try to keep the writing side continuously busy so that it can stream when writing to tape drives, but you can use it for other purposes as well. Internally, buffer is a pair of processes communicating via a large circular queue held in shared memory, so your processes will work asynchronously. buffer's reader process blocks when the queue is full, and its writer process blocks when the queue is empty. Example:

bzcat archive.bz2 | buffer -m 16000000 -b 100000 | processing_script | bzip2 > archive_processed.bz2

http://linux.die.net/man/1/buffer
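If the writers and readers are independent processes rather than one pipeline, a rough sketch of putting buffer between two named pipes might look like this (paths and sizes are made up; note that buffer still blocks until both FIFOs have a peer, and it flushes and exits once the last writer closes the input FIFO, so this bridges one writing session rather than acting as a persistent queue):

```sh
mkfifo /tmp/in.fifo /tmp/out.fifo

# run the mediator in the background
buffer -m 1048576 < /tmp/in.fifo > /tmp/out.fifo &

# a writer publishes lines
echo "some line" > /tmp/in.fifo

# a reader picks them up when it is ready
cat /tmp/out.fifo
```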

  • Thank you for pointing out a program that I didn't know about; it seems interesting and might be useful in some cases. Though it's not exactly what I wanted, and since I won't be able to do what I want with it, I have decided that most likely I'll implement my own program/daemon for cases like mine. – morphles Jul 14 '11 at 10:38
  • Another solution would be to use a queue daemon, something like gearman. Your "lines" could persist in memcache as jobs. – Ramunas Dronga Jul 19 '11 at 06:16