20

Consider the following scenario:

A FIFO named test is created. In one terminal window (A) I run cat <test and in another (B) cat >test. It is now possible to write in window B and get the output in window A. It is also possible to terminate process A, relaunch it, and still use this setup as expected. However, if you terminate the process in window B, B will (as far as I know) send an EOF through the FIFO to process A and terminate it as well.

In fact, even if you run a reader process that does not terminate on EOF, you still won't be able to use the FIFO you redirected to that process afterwards. I think this is because the FIFO is considered closed.
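The behaviour described above can be reproduced in a single script (file names here are hypothetical): the reader exits as soon as the last writer closes its end of the FIFO, even though the FIFO file itself still exists.

```shell
mkfifo test_fifo
cat < test_fifo > out.txt &   # "window A": the reader
reader=$!
echo hello > test_fifo        # "window B": open, write once, close
wait "$reader"                # the reader has seen EOF and exited
cat out.txt                   # prints: hello
rm test_fifo
```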

Is there any way to work around this problem?

The reason I ran into this problem is that I'd like to send commands to my minecraft server running in a screen session, for example: echo "command" >FIFO_to_server. This is probably possible to do using screen by itself, but I'm not very comfortable with screen, and I think a solution using only pipes would be simpler and cleaner.

Shump

4 Answers

35

A is reading from a file. When it reaches the end of the file, it stops reading. This is normal behavior, even if the file happens to be a fifo. You now have four approaches.

  1. Change the code of the reader to make it keep reading after the end of the file. That's saying the input file is infinite, and reaching the end of the file is just an illusion. Not practical for you, because you'd have to change the minecraft server code.
  2. Apply unix philosophy. You have a writer and a reader who don't agree on protocol, so you interpose a tool that connects them. As it happens, there is such a tool in the unix toolbox: tail -f. tail -f keeps reading from its input file even after it sees the end of the file. Make all your clients talk to the pipe, and connect tail -f to the minecraft server:

    tail -n +1 -f client_pipe | minecraft_server &
    
  3. As mentioned by jilles, use a trick: pipes support multiple writers, and only become closed when the last writer goes away. So make sure there's a client that never goes away.

    while true; do sleep 999999999; done >client_pipe &
    
  4. The problem is that the server is fundamentally designed to handle a single client. To handle multiple clients, you should change to using a socket. Think of sockets as “meta-pipes”: connecting to a socket creates a pipe, and once the client disconnects, that particular pipe is closed, but the server can accept more connections. This is the clean approach, because it also ensures that you won't have mixed-up data if two clients happen to connect at the same time (using pipes, their commands could be interspersed). However, it requires changing the minecraft server.
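Approach 2 can be sketched end to end in a single script. This is a demonstration under assumptions: the real minecraft_server is replaced by a plain output file, and stdbuf -oL forces line-buffered output (the buffering issue is discussed in the comments below).

```shell
mkfifo client_pipe
# stdbuf -oL makes tail flush each line immediately instead of block-buffering
stdbuf -oL tail -n +1 -f client_pipe > server_input.txt &
tail_pid=$!
echo "say hello" > client_pipe   # one client writes and disconnects
echo "list" > client_pipe        # the FIFO is still usable: tail -f survived the EOF
sleep 1                          # give tail a moment to forward both lines
kill "$tail_pid"
cat server_input.txt             # prints both commands, in order
```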

Gilles 'SO- stop being evil'
  • 1
    Unfortunately `tail` waits for EOF so it will not pass the content from the pipe line by line. – pabouk - Ukraine stay strong Aug 22 '13 at 16:42
  • 2
    @pabouk Thanks for pointing out this bug: `tail -n 1 -f` would skip input that was available before it started or made available faster than it could read. I meant to write `tail -n +1 -f`, which starts outputting straight away. – Gilles 'SO- stop being evil' Aug 22 '13 at 17:03
  • 2
    Thanks. I have tried it before but `-n +1` did not work as I expected. Now as you confirmed that this is the right way I examined the problem further and realized that the problem is in block buffering (instead of the default line buffering) of the stdout of `tail`. The solution for line by line piping: `stdbuf -oL tail -n +1 -f client_pipe | command` – pabouk - Ukraine stay strong Aug 22 '13 at 17:55
  • 2
    There's a much simpler way to make sure the pipe has at least one writer: Have the reader open it in read-write mode: `minecraft_server <>client_pipe &`. – phemmer Nov 02 '16 at 22:40
8

Start a process that keeps the fifo open for writing and keeps running indefinitely. This will prevent readers from seeing an end-of-file condition.
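A minimal sketch of this idea (names are hypothetical): a long-lived dummy writer holds the FIFO open, so the reader never sees end-of-file when the real writers disconnect.

```shell
mkfifo cmd_fifo
sleep 1000 > cmd_fifo &          # holder: opens the FIFO for writing, writes nothing
holder=$!
cat < cmd_fifo > out.txt &       # the reader
reader=$!
echo first > cmd_fifo            # real writers can now come and go freely
echo second > cmd_fifo
kill "$holder"                   # only when the holder goes away does the reader see EOF
wait "$reader"
cat out.txt                      # prints: first, then second
```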

jilles
1

From this answer:

On some systems like Linux, <> on a named pipe (FIFO) opens the named pipe without blocking (without waiting for some other process to open the other end), and ensures the pipe structure is left alive.

So you could do:

cat <>up_stream >down_stream &
# the `cat` pipeline keeps running in the background
echo 1 > up_stream
echo 2 > up_stream
echo 3 > up_stream

However, I can't find documentation about this behavior, so it could be an implementation detail specific to some systems. I tried the above on macOS and it works.
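A self-contained version of the trick can be run as one script (names are hypothetical; this reflects the Linux behaviour described above, and other systems may differ):

```shell
mkfifo up_stream
cat <>up_stream > out.txt &      # cat holds a read-write fd on the FIFO
reader=$!
echo 1 > up_stream               # each echo opens, writes, and closes...
echo 2 > up_stream
echo 3 > up_stream               # ...but cat's own fd keeps the pipe alive
sleep 1
kill "$reader"                   # cat would otherwise wait forever
cat out.txt                      # prints 1, 2, 3
```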

KFL
0

You can send multiple inputs into a pipe created with 'mkfifo yourpipe' by grouping the commands you need in parentheses, separated by semicolons:

(cat file1; cat file2; ls -l;) > yourpipe
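A usage sketch (names are hypothetical, and echo stands in for the answer's cat file1, cat file2, etc., so the example is self-contained): the subshell opens the FIFO once for all its commands, so the reader sees a single stream and one EOF at the end.

```shell
mkfifo yourpipe
cat < yourpipe > combined.txt &  # the reader collects everything
reader=$!
(echo alpha; echo beta; echo gamma) > yourpipe
wait "$reader"                   # the subshell closed the pipe, so the reader saw EOF
cat combined.txt                 # prints alpha, beta, gamma in order
```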
Cleptus