
I run my script with the timeout command, which sends a SIGTERM once the time limit is reached. The short 30s timeout is just for testing, so that the signal arrives after the s3 sync command has started.

My goal is for the aws s3 sync command to gracefully finish the current upload; only then should the script exit and run the post traps (email report and cleanup).

0   0   *   *   *   root   timeout 30s /etc/archive/archive.sh

In my script I have these flags set:

set -o nounset -o errtrace -o xtrace

And the following traps:

trap 'failure ${LINENO} "$BASH_COMMAND"' ERR
trap 'terminated' SIGTERM
trap 'user_interrupt' SIGINT
trap 'email_report && s3_upload_catalog && cleanup_all' EXIT
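
For reference, the chain I keep hitting can be reproduced with a stripped-down script (names mirror my script; the /tmp path is just for the demo, no aws involved): any failing command fires the ERR trap, failure() calls exit 1, and that in turn fires the EXIT trap.

```shell
# Stripped-down demo of the trap chain: a failing command fires ERR,
# failure() exits, and the exit fires the EXIT trap.
cat > /tmp/trapdemo.sh <<'EOF'
#!/usr/bin/env bash
set -o nounset -o errtrace
report=""
status="unknown"
function failure {
  report+="Failed at $1: $2\n"
  status="failed"
  exit 1
}
trap 'failure ${LINENO} "$BASH_COMMAND"' ERR
trap 'printf "EXIT trap ran, status=%s\n" "$status"; printf "%b" "$report"' EXIT
false   # any command with a nonzero status triggers ERR here
echo "never reached"
EOF
out=$(bash /tmp/trapdemo.sh 2>&1) || true
echo "$out"
```

Note that the ERR trap fires even without set -o errexit; errtrace only makes it inherited by functions and subshells.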

Here are my functions:

# Called when script errors out
function failure {
  local lineno=$1
  local msg=$2
  report+="Failed at $lineno: $msg\n"
  status="failed"
  exit 1
}


# Called when script is sent SIGTERM, likely from timeout command
function terminated {
  report+="Got SIGTERM, see timeout command in /etc/cron.d/archive_audio\n"
  status="successful"
  timeout_triggered=true
}


# Called when script is sent SIGINT (user pressed Ctrl-C)
function user_interrupt {
  report+="Got SIGINT, user terminated script\n"
  status="failed"
  exit 1
}

My script triggers the SIGTERM trap when timeout sends the signal after 30 seconds, while the aws s3 sync command is running (correct!). But right after that it triggers the ERR trap. It is as if the SIGTERM always interrupts aws s3 sync and makes it exit with an error. What I want is for my script to catch the SIGTERM and let the s3 command finish.
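
Here is a minimal repro of what I'm seeing, with sleep standing in for aws s3 sync (the /tmp path is just for the demo). My suspicion is that GNU timeout signals the whole process group by default, so the child command receives the SIGTERM directly, regardless of the trap in the script:

```shell
# Minimal repro: timeout signals the process group, so the foreground
# child ('sleep', standing in for 'aws s3 sync') gets SIGTERM too and
# exits with an error, even though the script traps the signal.
cat > /tmp/repro.sh <<'EOF'
#!/usr/bin/env bash
trap 'echo "trap: got SIGTERM while sleep was running"' TERM
if sleep 60; then
  echo "sleep finished normally"
else
  echo "sleep was killed by the group signal"
fi
EOF
out=$(timeout 2s bash /tmp/repro.sh 2>&1) || true
echo "$out"
```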

I wish my script could just trap the SIGTERM, call a function that sets a variable recording that the signal was received, and let the current loop iteration finish:

for folder in $archive_folders; do
  ... some stuff ...
  s3_upload_archive
  if [ "$timeout_triggered" = true ]; then
    exit 0
  fi
done

But no: the aws s3 command FAILS right after the SIGTERM, and my script exits because the ERR trap fired.

So far I've put in a workaround, but it's ugly (|| [ "$timeout_triggered" = true ]):

# Upload to AWS bucket
function s3_upload_archive {
    local source="$work_dir/"
    local target="s3://$bucket/$prefix/$host/"
    aws s3 sync "$source" "$target" --storage-class "$storage_class" \
      || [ "$timeout_triggered" = true ]
}

I've tried running aws s3 sync .. & as a background process and waiting for it, but that does not work; same result:

aws s3 sync $source $target --storage-class "$storage_class" &
wait
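
The extra wrinkle with the background approach is that bash's wait returns immediately with a status greater than 128 when a trapped signal arrives, so it has to be retried on the same pid. Here is a sketch of what I was hoping for (sleep stands in for aws s3 sync; the inner trap '' TERM is only there so the stand-in survives the group signal for the demo, which the real aws process would not):

```shell
# Demo of the re-wait loop: the first 'wait' is interrupted by the
# trapped SIGTERM and returns >128; waiting again on the same pid
# blocks until the child actually finishes.
cat > /tmp/waitdemo.sh <<'EOF'
#!/usr/bin/env bash
trap 'echo "trap: TERM received"' TERM
( trap '' TERM; exec sleep 3 ) &   # stand-in for: aws s3 sync ... &
pid=$!
while ! wait "$pid"; do
  echo "wait interrupted, re-waiting for child"
done
echo "child finished cleanly"
EOF
out=$(timeout 1s bash /tmp/waitdemo.sh 2>&1) || true
echo "$out"
```

(In real code the loop would also have to distinguish an interrupted wait from a child that genuinely exited with a status above 128; this demo skips that.)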

I am out of ideas. It does not make sense to me why aws s3 sync exits with an error every time it gets a SIGTERM, even though I set a trap to handle that signal in my script.

alexfvolk