
I want to output some data to a pipe and have the other process do something to the data line by line. Here is a toy example:

mkfifo pipe
cat pipe&
cat >pipe

Now I can enter whatever I want, and after pressing Enter I immediately see the same line. But if I substitute echo for the second cat:

mkfifo pipe
cat pipe&
echo "some data" >pipe

The pipe closes after echo finishes, and `cat pipe &` terminates as well, so I cannot pass any more data through the pipe. Is there a way to keep the pipe and the receiving process open, so that I can pass many lines of data through the pipe from a bash script and have them processed as they arrive?

limovala
user1084871

6 Answers


Put all the statements you want to output to the fifo in the same subshell:

# Create pipe and start reader.
mkfifo pipe
cat pipe &
# Write to pipe.
(
  echo one
  echo two
) >pipe

If you have some more complexity, you can open the pipe for writing:

# Create pipe and start reader.
mkfifo pipe
cat pipe &
# Open pipe for writing.
exec 3>pipe
echo one >&3
echo two >&3
# Close pipe.
exec 3>&-
Mark Edgar

  • Excellent answer, and actually succeeds in keeping the pipe open across an arbitrarily complex sequence of commands. – voetsjoeba Oct 08 '13 at 12:03
  • Could you briefly explain how this works? Especially that last line: `exec 3>&-` – Nick Chammas Jul 29 '14 at 15:33
  • @NickChammas: in the second example, the `exec 3>pipe` opens file descriptor 3 for writing to `pipe`; the two `echo` commands write to the pipe by virtue of the output redirection; the last `exec 3>&-` is how you close an open file descriptor, descriptor 3 in this case. At that point, the `cat` running in the background gets an EOF and terminates. – Jonathan Leffler Mar 20 '15 at 15:53

When a FIFO is opened for reading, the call normally blocks until some process has the FIFO open for writing. Conversely, opening a FIFO for writing blocks until there is a reader; once both sides are open, any blocked processes are unblocked. When the last writer closes the FIFO, the reading process gets EOF (a read of 0 bytes), and there is nothing further that can be done except close the FIFO and reopen it. Thus, you need to use a loop:

mkfifo pipe
(while cat pipe; do : Nothing; done &)
echo "some data" > pipe
echo "more data" > pipe

An alternative is to keep some process with the FIFO open.

mkfifo pipe
sleep 10000 > pipe &
cat pipe &
echo "some data" > pipe
echo "more data" > pipe
Jonathan Leffler

  • The second version does an excellent job! The first one doesn't work for me because I don't want to restart the process that receives data. – user1084871 Dec 07 '11 at 16:10
  • You might be able to cheat and have the `cat` hold the pipe open for writing by using: `cat pipe 3>pipe`. The `cat` command won't use file descriptor 3, but will have the FIFO called pipe open for writing (though it will be reading it on another file descriptor, probably number 4). – Jonathan Leffler Dec 07 '11 at 16:14
  • Does `exec >6 pipe` not achieve the same thing? Basically assigns `pipe` to file descriptor 6 and holds it open for writing. Instead of writing to `pipe` directly you'd probably want to write to that descriptor using `>&6`, but otherwise it should hold it open, IIRC. – Haravikk Feb 17 '14 at 19:51
  • @Haravikk: No, using `exec >6 pipe` wouldn't work, even when the syntax is corrected to `exec 6> pipe`. The trouble is that the process would hang waiting for some other process to open the pipe for reading, and the only process that was planning to do that is the one that's blocked. – Jonathan Leffler Jul 21 '14 at 06:48
  • Protip: use `tail -f` instead of the version with `sleep`. – danr Dec 29 '16 at 01:58
  • @danr kudos for the simpler version that just does what it is supposed to do! – mitsos1os Dec 18 '20 at 10:31
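A minimal sketch of the `tail -f` variant from the comments (paths here are hypothetical): unlike plain `cat`, `tail -f` keeps the FIFO open and keeps polling after a writer closes it, so no extra process is needed to hold the write side open.

```shell
# Sketch of the `tail -f` reader (hypothetical paths); unlike `cat`,
# `tail -f` keeps reading across writer open/close cycles.
dir=$(mktemp -d)
mkfifo "$dir/pipe"

tail -f "$dir/pipe" > "$dir/out" &   # reader that survives writer EOF
reader=$!

echo "first"  > "$dir/pipe"          # each echo opens and closes the FIFO
echo "second" > "$dir/pipe"

sleep 2                              # give tail's poll interval time to fire
kill "$reader"
out=$(cat "$dir/out")
echo "$out"
rm -rf "$dir"
```

Both lines arrive at the same reader even though each `echo` opened and closed the FIFO separately.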

You can solve this very easily by opening the read side of the pipe in read-write mode. The reader only gets an EOF once the last writer closes. So opening it in read-write makes sure there is always at least one writer.

So change your second example to:

mkfifo pipe
cat <>pipe &
echo "some data" >pipe
phemmer

  • With this method I can't work out how to close the pipe; I can only kill the cat process, which might not behave the same way. For example, if cat was actually an awk program with an END block, the END block will not be executed when it receives a SIGTERM. – pix Jul 06 '17 at 05:08
  • @pix your use case isn't quite the same as the original question. But as mentioned in the answer, "The reader only gets an EOF once the last writer closes", so ensure there's always a writer. For example, `exec 3>pipe` to have the shell hold it open. Or `sleep inf >pipe &` to launch a separate process if you want it to persist after the shell exits. – phemmer Jun 01 '18 at 12:51
  • "The reader only gets an EOF once the last writer closes." – but since your reader is also a writer, "the last writer closes" is a condition you'll never reach, since your reader isn't exiting until it gets the EOF that only its own exit would produce. – Dev Null Oct 10 '18 at 18:08
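A sketch of the clean-shutdown approach phemmer describes in the comments (hypothetical paths; `awk` stands in for any reader with cleanup logic): keep the reader a plain reader, let the shell hold the write side on a file descriptor, and close that descriptor to deliver a real EOF so the reader's `END` block runs.

```shell
# Sketch: clean EOF delivery to a reader with an END block (hypothetical paths).
dir=$(mktemp -d)
mkfifo "$dir/pipe"

# The reader opens the FIFO read-only; its END block runs on EOF, not SIGTERM.
awk '{ print "got: " $0 } END { print "done" }' < "$dir/pipe" > "$dir/out" &
reader=$!

exec 3> "$dir/pipe"   # the shell holds the write side open
echo "one" >&3
echo "two" >&3
exec 3>&-             # last writer closes: the reader sees EOF and END runs

wait "$reader"
out=$(cat "$dir/out")
echo "$out"
rm -rf "$dir"
```

Because the shell's descriptor is the only writer, closing it is what ends the stream, so the reader exits on its own terms instead of being killed.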

Honestly, the best way I was able to get this to work was by using socat, which basically connects two bidirectional byte streams.

mkfifo foo
socat $PWD/foo /dev/tty

Now in a new terminal, you can:

echo "I am in your term!" > foo
# also (surprisingly) this works
clear > foo

The downside is that you need socat, which isn't a basic utility everyone has. The plus side is, I can't find something that doesn't work: I am able to print colors, tee to the fifo, clear the screen, etc. It is as if the whole terminal were slaved to the pipe.

Jordan

I enhanced the second version from Jonathan Leffler's answer to support closing the pipe:

dir=`mktemp -d /tmp/temp.XXX`
keep_pipe_open=$dir/keep_pipe_open
pipe=$dir/pipe

mkfifo $pipe
touch $keep_pipe_open

# Read from pipe:
cat < $pipe &

# Keep the pipe open:
while [ -f $keep_pipe_open ]; do sleep 1; done > $pipe &

# Write to pipe:
for i in {1..10}; do
  echo $i > $pipe
done

# close the pipe:
rm $keep_pipe_open
wait

rm -rf $dir
silyevsk

As an alternative to the other solutions here, you can call cat in a loop as the input to your command:

mkfifo pipe
(while true ; do cat pipe ; done) | bash

Now you can feed it commands one at a time and it won't close:

echo 'echo hi' > pipe
echo 'echo bye' > pipe

You'll have to kill the process when you want it gone, of course. I think this is the most convenient solution since it lets you specify the non-exiting behavior as you create the process.
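A runnable sketch of this answer (hypothetical paths). The loop is bounded to two iterations here only so the demo terminates on its own; the answer's `while true` version keeps the FIFO open indefinitely. Note the commands are sent unquoted so `bash` can parse them as commands.

```shell
# Bounded variant of the cat-in-a-loop reader (hypothetical paths).
dir=$(mktemp -d)
mkfifo "$dir/pipe"

# Two reads instead of `while true`, so this demo exits by itself.
( for _ in 1 2; do cat "$dir/pipe"; done ) | bash > "$dir/out" &

echo 'echo hi'  > "$dir/pipe"   # each write is one complete command
sleep 1                         # let the first cat exit and the next one start
echo 'echo bye' > "$dir/pipe"

wait
out=$(cat "$dir/out")
echo "$out"
rm -rf "$dir"
```

Each `echo` closes the FIFO and ends one `cat`, but the loop immediately starts another, so `bash` never sees EOF until the loop itself ends.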

Adam Dingle