4

I'm now using this in cron to back up all my databases into one sql.gz file:

0 0     * * *   root    mysqldump -u root -pPASSWORD --all-databases | gzip > /home/backup/db/`date +\%Y-\%m-\%d`_db.sql.gz

I'd like to have one .tar.gz file containing a separate archive for each database I have. Is this possible?

MultiformeIngegno
  • Maybe http://sourceforge.net/projects/automysqlbackup/ is something for you. You would only have to zip the whole folder structure this script generates. You can do this with a post-script. – Raffael Luthiger Jul 17 '12 at 18:02
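If you go that route, the post-script could be as simple as archiving whatever directory tree automysqlbackup produces (the path below is an assumption; the tool's backup directory is configurable):

# bundle automysqlbackup's output folder into one archive
tar -czf /home/backup/db/`date +%Y-%m-%d`_db.tar.gz -C /var/backup/db .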

3 Answers

6

Something like this may work. It is untested, but only slightly different from what I am using for backups on my systems.

#!/bin/bash

# define common vars
OPTIONS="--verbose --lock-tables --flush-logs --force --quick --single-transaction"
AUTHFILE="/etc/mysql/rootauth.cnf"
BACKUPDIR="/srv/backup/mysql/"
BACKUPDATE=`date +"%y%m%d%H"`

# create temp folder (this isn't entirely safe, but be sure only root or backup user has 
#                     write access here, you might want to use mktemp)
mkdir ${BACKUPDIR}/tmp/

# get a list of all the databases on the system
DBSQL="SELECT SCHEMA_NAME FROM information_schema.SCHEMATA where SCHEMA_NAME!='information_schema' \
       AND SCHEMA_NAME!='performance_schema' order by SCHEMA_NAME"
DBS=`/usr/bin/mysql --defaults-extra-file=${AUTHFILE} --batch \
                                  --skip-column-names --execute "$DBSQL"`
DBS=`echo $DBS | tr -d '\n' | sed -e "s/ \+/ /g"`

for DB in $DBS; do
  # perform a per-database dump
  BACKUPDIRDB="${BACKUPDIR}/tmp/${DB}"
  mkdir -p ${BACKUPDIRDB}
  /usr/bin/mysqldump --defaults-extra-file=${AUTHFILE} \
       ${OPTIONS} $DB > ${BACKUPDIRDB}/backup_${BACKUPDATE}
done

# create archive of everything
tar -czvf ${BACKUPDIR}/backup_${BACKUPDATE}.tar.gz ${BACKUPDIR}/tmp/ 
#remove temp files
rm -rf ${BACKUPDIR}/tmp/
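The script reads its credentials from the --defaults-extra-file named at the top. If /etc/mysql/rootauth.cnf doesn't exist yet, it is just a standard MySQL option file (keep it readable by root only), along these lines:

[client]
user=root
password=YOURPASSWORD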
Zoredache
  • I'm going to try this and let you know if worked! ;) – MultiformeIngegno Jul 17 '12 at 17:52
  • I created a test.sh file in /srv to test it. I run it via "bash /srv/test.sh" but I got this result: /srv/test.sh: line 16: unexpected EOF while looking for matching ``' /srv/test.sh: line 30: syntax error: unexpected end of file What am I doing wrong? – MultiformeIngegno Jul 17 '12 at 18:01
  • I think I may have lost a backtick (`) on my copy and paste into the editor here. Check line 16 and line 30 and make sure the quotes are all properly balanced and closed. – Zoredache Jul 17 '12 at 18:05
  • I think I found it, I'm testing it again. ;) – MultiformeIngegno Jul 17 '12 at 18:09
  • Could not open required defaults file: /etc/mysql/rootauth.cnf Fatal error in defaults handling. Program aborted tar: Removing leading `/' from member names /srv/backup/mysql//tmp/ I don't have the file /etc/mysql/rootauth.cnf, I have my.cnf, should I replace it with that? – MultiformeIngegno Jul 17 '12 at 18:11
  • the file rootauth.cnf is meant to be a standard mysql config file for storing credentials, which is better than passing them on the command line. See: http://serverfault.com/questions/56341/mysqldump-prompting-for-password-in-shellscript/56345#56345 – Zoredache Jul 17 '12 at 18:16
  • The script is running, seems to work properly!! Will confirm this when it finishes! ;) – MultiformeIngegno Jul 17 '12 at 18:30
  • Yeah!! It works like a charm!! Thanks! :D One last request (if possible): how can I modify the script so the databases end up in the tar file without the parent directories? For example, I now have /srv/backup/mysql//tmp and then the DBs. – MultiformeIngegno Jul 17 '12 at 18:46
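A likely fix for that last request, untested: tar's -C option changes into the given directory before archiving, so the archive contains only the per-database folders:

# archive relative to the tmp dir so no /srv/backup/mysql prefix is stored
tar -czvf ${BACKUPDIR}/backup_${BACKUPDATE}.tar.gz -C ${BACKUPDIR}/tmp/ .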
5

Create a script like this to mysqldump all databases in parallel:

DBLIST=`mysql -uroot -pPASSWORD -ANe"SELECT GROUP_CONCAT(schema_name) FROM information_schema.schemata WHERE schema_name NOT IN ('information_schema','performance_schema')" | sed 's/,/ /g'`
MYSQLDUMP_OPTIONS="-uroot -pPASSWORD --single-transaction --routines --triggers"
BACKUP_DEST=/home/backup/db/`date +%Y-%m-%d`
mkdir ${BACKUP_DEST}
for DB in `echo "${DBLIST}"`
do
    mysqldump ${MYSQLDUMP_OPTIONS} ${DB} | gzip > ${BACKUP_DEST}/${DB}.sql.gz &
done
wait

Then place this script in the crontab.
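For example (the script path and schedule here are assumptions), with the script saved as /usr/local/sbin/mysqldump_all.sh and made executable:

0 0 * * * root /usr/local/sbin/mysqldump_all.sh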

If there are way too many databases, you could dump five at a time like this:

DBLIST=`mysql -uroot -pPASSWORD -ANe"SELECT GROUP_CONCAT(schema_name) FROM information_schema.schemata WHERE schema_name NOT IN ('information_schema','performance_schema')" | sed 's/,/ /g'`
MYSQLDUMP_OPTIONS="-uroot -pPASSWORD --single-transaction --routines --triggers"
BACKUP_DEST=/home/backup/db/`date +%Y-%m-%d`
mkdir ${BACKUP_DEST}
COMMIT_COUNT=0
COMMIT_LIMIT=5
for DB in `echo "${DBLIST}"`
do
    mysqldump ${MYSQLDUMP_OPTIONS} ${DB} | gzip > ${BACKUP_DEST}/${DB}.sql.gz &
    (( COMMIT_COUNT++ ))
    if [ ${COMMIT_COUNT} -eq ${COMMIT_LIMIT} ]
    then
        COMMIT_COUNT=0
        wait
    fi
done
if [ ${COMMIT_COUNT} -gt 0 ]
then
    wait
fi

You can then add the tar commands to the script.
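For instance, appended after the final wait (a sketch; the archive name is an assumption):

# bundle the per-database dumps into one archive; -C keeps the stored paths relative
tar -czf ${BACKUP_DEST}.tar.gz -C ${BACKUP_DEST} .
# remove the uncompressed per-database dumps
rm -rf ${BACKUP_DEST}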

RolandoMySQLDBA
0

I would say you could accomplish this with a script, run from cron, that dumps each database and then, as its final operation, archives all of the files together into one .tar.gz.

So in your case, you would remove the --all-databases option and put the name of a database there, then repeat that line for each database you have. After all the dumps have been made, create a tar with all those files and compress it. Last but not least, perform any necessary cleanup. Put all of that into a script and then run the script from cron.
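A minimal sketch of that approach (placeholder database names; paths follow the cron line in the question):

#!/bin/bash
BACKUPDIR=/home/backup/db
DATE=`date +%Y-%m-%d`
cd ${BACKUPDIR} || exit 1
# dump each database to its own file -- repeat one line per database
mysqldump -u root -pPASSWORD db1 > db1_${DATE}.sql
mysqldump -u root -pPASSWORD db2 > db2_${DATE}.sql
# tar and compress all of today's dumps into one archive
tar -czf ${DATE}_db.tar.gz ./*_${DATE}.sql
# cleanup: remove the uncompressed dumps
rm -f ./*_${DATE}.sql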

Safado
  • Yeah, I thought of doing that; I was looking for a solution to "dump 'em all" without having to add a line for every database (also because I add new DBs often and I'm worried I'll forget to add one). Thanks anyway for the reply! ;) – MultiformeIngegno Jul 17 '12 at 17:47