
The goal was to make a simple, unobtrusive wrapper that traces stdin and stdout to stderr:

#!/bin/bash

# Mirror stdin to stderr, run script.sh, then mirror its stdout to stderr.
tee /dev/stderr | ./script.sh | tee /dev/stderr

# PIPESTATUS[1] is the exit status of the middle command, ./script.sh.
exit ${PIPESTATUS[1]}

Test script script.sh:

#!/bin/bash

echo asd
sleep 1
exit 4

But when the script exits, it doesn't terminate the wrapper. A possible solution is to kill the first tee's subshell from the second stage of the pipeline:

#!/bin/bash

# The second subshell gets the PID of the first one through the pipe.
# It can then kill the whole pipeline by killing the first subshell.

# Create a temporary named pipe (safe: mkfifo fails if the name already exists).
pipe=$(mktemp -u)
if ! mkfifo "$pipe"; then
    echo "ERROR: debug tracing pipe creation failed." >&2
    exit 1
fi

# Attach it to file descriptor 3.
exec 3<>"$pipe"

# Unlink the named pipe; file descriptor 3 keeps it open.
rm "$pipe"

(echo $BASHPID >&3; tee /dev/stderr) | (./script.sh; r=$?; kill $(head -n1 <&3); exit $r) | tee /dev/stderr

exit ${PIPESTATUS[1]}

That's a lot of code. Is there another way?

Velkan
  • It's not that much code! – scrowler Sep 20 '15 at 20:48
  • The difficulty is that the first `tee` won't terminate until it either gets EOF on its standard input or gets a SIGPIPE from trying to write to its standard output (the pipe) when there is no process waiting to read. It won't be terminated by the `script.sh` process dying. Fixing that is non-trivial. If I were to go about it, I'd use a 'wrapper' program (analogous to `nohup` or `xargs` or `sudo` — a command which takes another command as arguments and does something more or less appropriate before, during or after the time while the second command is run). It might use threads or processes. – Jonathan Leffler Sep 20 '15 at 22:56

2 Answers


I think that you're looking for the pipefail option. From the bash man page:

pipefail

If set, the return value of a pipeline is the value of the last (rightmost) command to exit with a non-zero status, or zero if all commands in the pipeline exit successfully. This option is disabled by default.

So if you start your wrapper script with

#!/bin/bash

set -e
set -o pipefail

then the wrapper will exit as soon as any command fails (set -e) and the status of the pipeline will be set the way that you want (set -o pipefail).
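
For instance, with the same script.sh as in the question, the whole wrapper reduces to this sketch (pipefail alone is enough for the status, since the pipeline is the script's last command and its status is returned either way):

#!/bin/bash

set -o pipefail

# With pipefail, the pipeline's status is script.sh's status (here, 4)
# as long as both tee commands succeed, so ${PIPESTATUS[1]} is not needed.
tee /dev/stderr | ./script.sh | tee /dev/stderr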

Ewan Mellor
  • Works in `bash` but not necessarily in other shells - you sure you're in the right shell? https://askubuntu.com/a/886540 – Markus Shepherd Jul 27 '17 at 08:39
  • Anyway, it is a dangerous flag; it can cause unexpected behaviour. – deFreitas Oct 15 '17 at 03:44
  • Use `set +o pipefail` to restore the default behaviour. – makeroo Jun 21 '18 at 10:28
  • @deFreitas, not *that* dangerous, or all that unexpected. Someone who doesn't expect `foo` to fail when `foo | head` closes its output pipeline isn't thinking very hard about how pipes work. It's far more predictable than, say, [`set -e`](http://mywiki.wooledge.org/BashFAQ/105). – Charles Duffy Jul 13 '18 at 15:42
  • In general, I'm more surprised when `build.sh | tail -100` "succeeds" even though `build.sh` failed. Well, at least the first time I realized that had happened. – Troy Daniels Jun 27 '23 at 16:13

The main issue at hand here is clearly the pipe. In bash, when executing a command of the form

command1 | command2

and command2 terminates, the pipe which receives the output (/dev/stdout) of command1 becomes broken. The broken pipe, however, does not terminate command1. That only happens when command1 next tries to write to the broken pipe, at which point it is killed by SIGPIPE. A simple demonstration of this can be seen in this question.
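
A quick way to observe this delayed termination (a throwaway sketch, not part of the original answer):

{ echo one; sleep 3; echo "writer still alive" >&2; echo two; } | head -n 1

Here head exits after reading "one", which breaks the pipe. The left-hand group nevertheless keeps running: "writer still alive" appears on stderr three seconds later, and only the attempt to write "two" triggers the fatal SIGPIPE.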

If you want to avoid this problem, make use of process substitution in combination with input redirection; that way there is no pipe to break. The above pipeline is then written as:

command2 < <(command1)

In the case of the OP, this would become:

./script.sh < <(tee /dev/stderr) | tee /dev/stderr

which, to avoid the remaining pipe on stdout as well, can also be written as:

./script.sh < <(tee /dev/stderr) > >(tee /dev/stderr)
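
Since that last form contains no pipe at all, the exit status of the command is script.sh's own status, and a minimal sketch of the complete wrapper becomes:

#!/bin/bash

# No pipeline involved: the wrapper exits with script.sh's status (4 for the
# script.sh from the question). Note that bash does not wait for the output
# process substitution to finish, so the last traced lines may arrive late.
./script.sh < <(tee /dev/stderr) > >(tee /dev/stderr)
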
kvantour
  • The only explanation I found and understood of what is happening behind the failed piped command. The alien's notation `cmd < <() > >()` does its job! In my case `-o pipefail` was not working even with `bash -eu -o pipefail -c "$CMD"`. Thanks a lot! – Paul T. Rawkeen Dec 21 '20 at 15:12