I have a backup script that is essentially:
acquire_data | gzip -9 | gpg --batch -e -r me@example.com | upload-to-cloud
The problem is that if acquire_data or gpg fails, then upload-to-cloud will see the EOF and happily upload an incomplete backup. As an example, gpg will fail if the filesystem with the user's home directory is full.
I want to stream the data through a pipe rather than store it in a temporary file, because it's a lot of data that may not fit in the local server's free space.
I might be able to do something like:
set -o pipefail
mkfifo fifo
upload-to-cloud < fifo &
UPLOADER=$!
( (acquire_data | gzip -9 | gpg […]) || kill "$UPLOADER") > fifo
wait $UPLOADER # since I need the exit status
But I think that has a race condition: it's not guaranteed that the upload-to-cloud program will receive the signal before it reads an EOF. And adding a sleep seems wrong. Really, the stdin of upload-to-cloud need never be closed. I want upload-to-cloud to die before it handles the EOF, because then it won't finalize the upload, and the partial upload will be correctly discarded.
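To illustrate what I mean by "stdin need never be closed", here's a rough sketch of the behaviour I'm after, assuming I can hold an extra write end of the fifo open on a spare file descriptor (fd 3 here) so that upload-to-cloud only ever sees EOF when I close it on purpose; I haven't verified this is free of its own races:
set -o pipefail
mkfifo fifo
upload-to-cloud < fifo &
UPLOADER=$!
exec 3> fifo            # keep an extra write end open so the reader can't see EOF yet
if acquire_data | gzip -9 | gpg --batch -e -r me@example.com > fifo; then
    exec 3>&-           # success: close the last write end, letting upload-to-cloud see EOF and finalize
else
    kill "$UPLOADER"    # failure: kill it while its stdin is still open, so it never reads EOF and never finalizes
    exec 3>&-
fi
wait "$UPLOADER"        # since I need the exit status
The idea is that the only EOF upload-to-cloud can ever observe is the one I deliberately produce by closing fd 3 after the pipeline has succeeded.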
There's this similar question, except it talks about killing an earlier part of the pipeline if a later part fails, which is safer since it doesn't have the same race condition problem.
What's the best way to do this?