I've been looking for a way to use Bash redirection to re-route all output file descriptors (1 = STDOUT, 2 = STDERR, 3, etc.) to named pipes. Here is a script I wrote to test the idea:
#!/bin/bash
pipe1="/tmp/pipe1"
pipe2="/tmp/pipe2"
pipe3="/tmp/pipe3"
mkfifo "${pipe1}"
mkfifo "${pipe2}"
mkfifo "${pipe3}"
trap "rm -rf ${pipe1} ${pipe2} ${pipe3}" EXIT
printer() {
    echo "OUT" >&1
    echo "ERR" >&2
    echo "WRN" >&3
}
# Usage: mux
mux() {
    cat "${pipe1}"
    cat "${pipe2}"
    cat "${pipe3}"
}
printer 1>"${pipe1}" 2>"${pipe2}" 3>"${pipe3}"
mux
This code looks right to me, but the terminal hangs indefinitely until the script is killed. As I understand it, named pipes are like files in that they have an inode, but rather than writing to disk, they simply write to memory.
That being the case, a named pipe should be accessible like any other file. I know the script hangs on the line that calls the printer function. I have also tested several combinations of subshells and more advanced redirections (namely, redirecting to STDOUT to handle each of the other pipes). Perhaps I am missing some terminator in the named pipe, whereby it stays locked and cannot be read by the mux function. If that is the case, how is this achieved?
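For reference, one of the subshell/background variations I tried was roughly along these lines (a sketch from memory, not my exact code); it hangs in the same way:
# Sketch of one variation: run printer in the background so the main
# shell is free to start reading the pipes.
printer 1>"${pipe1}" 2>"${pipe2}" 3>"${pipe3}" &
mux
wait
# This still hangs: the backgrounded printer blocks while opening pipe2
# for writing, because mux is still reading pipe1 and has not opened
# pipe2 for reading yet.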
EDIT: After more testing, it appears that the issue only happens when attempting to redirect to multiple pipes. For example:
#!/bin/bash
pipe1="/tmp/pipe1"
mkfifo "${pipe1}"
trap "rm -rf ${pipe1}" EXIT
(exec >"${pipe1}"; echo "Test") &
cat < "${pipe1}"
will work as expected. However, adding a second pipe for STDERR (for example) breaks this, forcing it to hang:
#!/bin/bash
pipe1="/tmp/pipe1"
pipe2="/tmp/pipe2"
mkfifo "${pipe1}"
mkfifo "${pipe2}"
trap "rm -rf ${pipe1} ${pipe2}" EXIT
(exec >"${pipe1}" 2>"${pipe2}"; echo "Test"; echo "Test2" >&2) &
cat < "${pipe1}"
cat < "${pipe2}"
More specifically, the code hangs once the exec >"${pipe1}" 2>"${pipe2}" statement executes. I imagine that adding more subshells in certain places would help, but that could become messy and unwieldy. I did learn, however, that named pipes are meant to bridge data between shells (hence the added subshell and the background operator &).
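For what it's worth, the subshell-heavy arrangement I imagine would work (building on the second example above, with the readers backgrounded as well) is roughly this sketch:
# Sketch only: background every reader too, so each FIFO gets both of
# its ends opened and no single open() is left waiting for the other end.
(exec >"${pipe1}" 2>"${pipe2}"; echo "Test"; echo "Test2" >&2) &
cat < "${pipe1}" &
cat < "${pipe2}" &
wait
# Note: the relative order of the two outputs is not guaranteed.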