
I'm using s3cmd in a bash script that runs at startup. If it returns an error code, the script is supposed to do something. However, s3cmd seems to (sometimes) break everything when an error occurs and prints information on screen. It just exits my script.

How do I prevent a program from breaking my Bash script? If something is wrong, I just want the bash script to keep on doing the next thing in line.

EDIT: It seems this only happens with /etc/rc.local. If I run the script as something else (/home/whateverscript), it does what I want it to.


1 Answer


Maybe you can wrap your

s3cmd sync --recursive --delete-removed --config="$HEMMAPPEN/.s3cfg" "$SOURCEFOLDER" "$TARGETFOLDER/"

in the script like this:

outputText=$(s3cmd sync --recursive --delete-removed --config="$HEMMAPPEN/.s3cfg" "$SOURCEFOLDER" "$TARGETFOLDER/" 2>&1;echo ,$?)

This redirects stderr to stdout (2>&1), so the variable outputText will contain the command's combined output followed by its exit status (in the form output,exit_status), in case you need it later in the script.
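
For example, here is a minimal sketch of how you could split the combined output and the exit status apart later in the script; the s3status and s3output variable names are only illustrative:

outputText=$(s3cmd sync --recursive --delete-removed --config="$HEMMAPPEN/.s3cfg" "$SOURCEFOLDER" "$TARGETFOLDER/" 2>&1; echo ,$?)

# Strip everything up to and including the last comma to get the exit status,
# and strip from the last comma onwards to get the command output.
s3status=${outputText##*,}
s3output=${outputText%,*}

if [ "$s3status" -ne 0 ]; then
    echo "s3cmd failed (exit $s3status): $s3output"
fi
# The script continues here either way.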

If you don't want the output but only the exit status of the command, you can use the following:

status=$(s3cmd sync --recursive --delete-removed --config="$HEMMAPPEN/.s3cfg" "$SOURCEFOLDER" "$TARGETFOLDER/" > /dev/null 2>&1;echo $?)

The status variable will contain the exit status of the command that was run.
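
As a rough sketch (not part of the original command, just an assumed continuation) of how the script can then move on to the next step no matter what happened:

status=$(s3cmd sync --recursive --delete-removed --config="$HEMMAPPEN/.s3cfg" "$SOURCEFOLDER" "$TARGETFOLDER/" > /dev/null 2>&1; echo $?)

if [ "$status" -eq 0 ]; then
    echo "s3cmd sync succeeded"
else
    echo "s3cmd sync failed with exit status $status"
fi
# Because the command substitution itself always succeeds, the script
# does not abort here and simply runs the next command in line.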

I hope it makes sense. Please comment if it hasn't solved your problem.
