I want to make daily dumps of my production database and store them in S3. I have created a bash script that works fine locally, but when it runs during Heroku's postinstall hook, I get the following error:
./run-backup.sh: line 21: mysqldump: command not found
as it tries to run the following line:
mysqldump -u $CLEARDB_USER_NAME -h $CLEARDB_SERVER_IP -p$CLEARDB_PASSWORD --databases $DATABASE | gzip -c > "/tmp/db-backups/$BACKUP_FILE_NAME.gz"
So the script itself works, but the Heroku dyno can't run it. I'm not sure where to go from here.
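A quick way to confirm whether the binary is missing entirely, rather than just off `PATH`, is POSIX `command -v`, which prints the resolved path and exits non-zero when the command can't be found. For example, run this in the same environment where the backup script fails:

```shell
#!/bin/bash
# Diagnostic: is mysqldump resolvable on PATH in this environment?
if command -v mysqldump >/dev/null 2>&1; then
    echo "mysqldump found at: $(command -v mysqldump)"
else
    echo "mysqldump is not on PATH ($PATH)"
fi
```

On a Heroku dyno this can be tried via `heroku run bash`; locally it should print the path to the installed client.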
Here is the script in full:
#!/bin/bash
echo "Beginning database backup"
#install aws-cli
curl https://s3.amazonaws.com/aws-cli/awscli-bundle.zip -o awscli-bundle.zip
unzip awscli-bundle.zip
chmod +x ./awscli-bundle/install
./awscli-bundle/install -i /tmp/aws
# new backup file name
BACKUP_FILE_NAME="$(date +"%Y-%m-%d-%H-%M")-$APP-$DATABASE.sql"
# create /tmp/db-backups if it doesn't already exist (mkdir -p is a no-op when it does)
mkdir -p /tmp/db-backups
# dump the current DB into /tmp/db-backups/<new-file-name>
mysqldump -u "$CLEARDB_USER_NAME" -h "$CLEARDB_SERVER_IP" -p"$CLEARDB_PASSWORD" --databases "$DATABASE" | gzip -c > "/tmp/db-backups/$BACKUP_FILE_NAME.gz"
# using the aws cli, copy the new backup to our s3 bucket
/tmp/aws/bin/aws s3 cp "/tmp/db-backups/$BACKUP_FILE_NAME.gz" "s3://$S3_BUCKET_PATH/$DATABASE/$BACKUP_FILE_NAME.gz" --region "$AWS_DEFAULT_REGION"
echo "backup $BACKUP_FILE_NAME.gz complete"