
In order to back up Puppet on a daily basis, I wrote the following script:

#!/bin/bash
today=$(date -I)
todaytime=$(date +'%H:%M')
log="/var/log/puppet_backup.log"
echo "Backup process started.... $todaytime $today" >>$log
cd /etc && tar cvzf - puppet | split --bytes=300MB - puppet.tar.gz. 2>&1>>$log
cd /var/lib && tar zcf var_lib_puppet.tar.gz puppet 2>&1>>$log
mkdir /system_backup/puppet/$today 2>&1>>$log
mv -v /etc/puppet.ta* /system_backup/puppet/$today/ 2>&1>>$log
mv -v /var/lib/var_lib_puppet.ta* /system_backup/puppet/$today/ 2>&1>>$log
echo "Backup process finished.... $todaytime $today" >> $log

When I run this script manually, it successfully compresses Puppet files, creates about 10 compressed files and moves them properly to the target backup location.

This is how it looks after a successful (manual) run:

[root@puppet 2015-11-09]# ll
total 3377647
-rw-rw-r-- 1 root root 300000000 2015-11-09 13:15 puppet.tar.gz.aa
-rw-rw-r-- 1 root root 300000000 2015-11-09 13:16 puppet.tar.gz.ab
-rw-rw-r-- 1 root root 300000000 2015-11-09 13:16 puppet.tar.gz.ac
-rw-rw-r-- 1 root root 300000000 2015-11-09 13:16 puppet.tar.gz.ad
-rw-rw-r-- 1 root root 300000000 2015-11-09 13:16 puppet.tar.gz.ae
-rw-rw-r-- 1 root root 300000000 2015-11-09 13:16 puppet.tar.gz.af
-rw-rw-r-- 1 root root 300000000 2015-11-09 13:17 puppet.tar.gz.ag
-rw-rw-r-- 1 root root 300000000 2015-11-09 13:17 puppet.tar.gz.ah
-rw-rw-r-- 1 root root 177181611 2015-11-09 13:17 puppet.tar.gz.ai
-rw-rw-r-- 1 root root   7308901 2015-11-09 13:17 var_lib_puppet.tar.gz
[root@puppet 2015-11-09]#

But when crontab runs this script, only 3 compressed files are created and moved to the backup target directory. This is how it looks:

[root@puppet 2015-11-15]# ll
total 1031678
-rw-r--r-- 1 root root 300000000 2015-11-15 04:59 puppet.tar.gz.aa
-rw-r--r-- 1 root root 300000000 2015-11-15 04:59 puppet.tar.gz.ab
-rw-r--r-- 1 root root 182204224 2015-11-15 04:59 puppet.tar.gz.ac
-rw-r--r-- 1 root root   7271603 2015-11-15 04:59 var_lib_puppet.tar.gz
[root@puppet 2015-11-15]#

You can probably guess that having only 3 files (instead of 9) means that the compression process doesn't finish properly, and thus I cannot restore from these 3 compressed files.

This is what the log shows:

Backup process started.... 04:59 2015-11-15
`/etc/puppet.tar.gz.aa' -> `/system_backup/puppet/2015-11-15/puppet.tar.gz.aa'
removed `/etc/puppet.tar.gz.aa'
`/etc/puppet.tar.gz.ab' -> `/system_backup/puppet/2015-11-15/puppet.tar.gz.ab'
removed `/etc/puppet.tar.gz.ab'
`/etc/puppet.tar.gz.ac' -> `/system_backup/puppet/2015-11-15/puppet.tar.gz.ac'
removed `/etc/puppet.tar.gz.ac'
`/var/lib/var_lib_puppet.tar.gz' -> `/system_backup/puppet/2015-11-15/var_lib_puppet.tar.gz'
removed `/var/lib/var_lib_puppet.tar.gz'
Backup process finished.... 04:59 2015-11-15

What could be the reason for the difference between running the script manually or through cron?

Itai Ganot
  • Please make it more testable for us by putting in some effort. You appear to be compressing different files across the runs. Debug it yourself first with the exact same files, and use smaller files if at all possible so that you can post a sample to test with. Also, monitor the exit status of all your commands (in particular, set the `pipefail` option to avoid ignoring errors). – 4ae1e1 Nov 15 '15 at 09:18
  • Btw: replace `2>&1>>$log` by `>>$log 2>&1` to see stderr in your logfile too. – Cyrus Nov 15 '15 at 09:19
  • Thanks @Cyrus, just replaced it and will test it again. – Itai Ganot Nov 15 '15 at 09:22
  • Running in cron, your `PATH` may not contain all of the programs used. As suggested, capturing stderr is a first step to finding this sort of problem. – Thomas Dickey Nov 15 '15 at 11:36
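As a side note, the redirection-order point from the comments is easy to verify. The sketch below (paths are illustrative) shows that with `2>&1 >>$log`, stderr is duplicated to the *current* stdout before stdout is moved to the log, so error messages never land in the log file; with `>>$log 2>&1` they do:

```shell
#!/bin/bash
log=$(mktemp)

# Wrong order: 2>&1 points stderr at the current stdout (the terminal)
# *before* >> moves stdout to the log, so the error bypasses the log.
ls /no/such/path 2>&1 >>"$log"
wrong_size=$(wc -c <"$log")

# Right order: stdout goes to the log first, then stderr joins it.
ls /no/such/path >>"$log" 2>&1
right_size=$(wc -c <"$log")

echo "bytes logged with wrong order: $wrong_size"
echo "bytes logged with right order: $right_size"
```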

2 Answers


I suspect it's a permissions issue.

Have you tried to run cron as root?

https://superuser.com/questions/170866/how-to-run-a-cron-job-as-a-specific-user
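For reference, a root cron entry for a script like this might look as follows (the script path is illustrative). Note that the per-user crontab edited with `crontab -e` has no user field, while `/etc/crontab` and files under `/etc/cron.d/` take an extra sixth field naming the user:

```shell
# In root's crontab (crontab -e, run as root) -- runs daily at 04:55:
# 55 4 * * * /usr/local/bin/puppet_backup.sh

# In /etc/crontab or /etc/cron.d/puppet-backup, a user field is required:
# 55 4 * * * root /usr/local/bin/puppet_backup.sh
```

Mixing the two formats up (adding a user field where none belongs, or omitting it where one is required) is a common way to get a job that silently never runs or runs as the wrong user.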

TLJ
  • The script is already configured under root's cron but I've tried editing it to add the root user before the command in the crontab configuration but to no avail. – Itai Ganot Nov 15 '15 at 10:37
  • strange, I'm curious to know too. Did you find any more errors after capturing stderr? – TLJ Nov 15 '15 at 14:54
  • @ItaiGanot, it could still be a permission issue if you are running SELinux in enforcing mode. Cron ordinarily provides a different SELinux context than an interactive shell does. If SELinux denies access to a file somewhere in the middle of the run then the `tar` could fail partway through. – John Bollinger Nov 16 '15 at 22:09

Have you tried checking the limits (ulimit) in cron?

I noticed that the last file created in the cron run was smaller than the rest, which could imply that there are other limits in effect under cron that do not apply on the command line (cron has a much more restricted environment than the average interactive shell). Such limits could affect your split command.
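One way to check this is to capture the limits and environment from inside cron itself and compare them with an interactive shell. A minimal sketch (the script name and output path are hypothetical; schedule it once with a temporary `* * * * *` crontab entry, then remove it):

```shell
#!/bin/bash
# cron_env_dump.sh -- dump cron's limits and environment to a file
# so they can be diffed against an interactive shell's output.
out=/tmp/cron_env_dump.$(date +%s)
{
  echo "== ulimit -a =="
  ulimit -a
  echo "== env =="
  env
} >"$out" 2>&1
echo "wrote $out"
```

Then run `ulimit -a; env` from your login shell and `diff` the two; differences in `PATH`, file-size limits, or open-file limits would be prime suspects.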

Thomas Espe
  • The last piece of the tarball is smaller in both cases. This simply shows that its size is not evenly divisible by 300000000 bytes in either case. That the overall size is smaller in the failure case seems to indicate that `tar` terminates prematurely in that case. I guess that *could* reflect a `ulimit` issue, but I find that doubtful. – John Bollinger Nov 16 '15 at 22:52
  • Could it be related to "pipe size"? – Itai Ganot Nov 17 '15 at 09:12
  • It's hard to tell. I would advise capturing the output from `ulimit -a` and `env` in cron and comparing that to what you get on the command line. – Thomas Espe Nov 17 '15 at 09:48
  • I've compared both cron `ulimit -a` and root `ulimit -a` and they are exactly the same. – Itai Ganot Nov 17 '15 at 10:21