
We have a number of cron jobs that collect various stats over a rather large network. Most of these cron jobs don't get monitored very well. I wanted to write a perl script that we could pipe the output of these jobs into, something like this:

5 * * * * collectstats.pl 2>&1 1>/dev/null | scriptwatcher.pl

The idea is that stdout from collectstats.pl is discarded and stderr is piped into scriptwatcher.pl. This second script can then take appropriate action when errors occur, most likely emailing me. A quick test shows this working, except that inside scriptwatcher I need to know the name of the script that is sending it errors. With just that one little piece of information, everything I want to do becomes possible.

Also, is it possible to pipe the stdout from collectstats into another script at the same time? That is, two pipes from the one script?
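
For reference, here is the sort of quick test that shows the redirection behaving as intended (`demo` is just a stand-in for collectstats.pl):

```shell
# Stand-in for collectstats.pl: one line to each stream.
demo() { echo "to stdout"; echo "to stderr" >&2; }

# `2>&1` first points stderr at the pipe, then `1>/dev/null`
# discards stdout, so only stderr reaches the consumer.
demo 2>&1 1>/dev/null | cat
```

Note that the order matters: written as `1>/dev/null 2>&1`, both streams would end up in /dev/null and nothing would reach the pipe.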

Cheers

MikeKulls
  • Use the line: `5 * * * * collectstats.pl 2>&1 1>/dev/null | scriptwatcher.pl collectstats.pl`. That is, tell the script watcher which script it is watching. Look up the [`pee`](http://stackoverflow.com/questions/4656927/is-it-possible-to-distribute-stdin-over-parallel-processes/4658717#4658717) command, parallel to `tee` but with processes (hence the `p`) instead of files. – Jonathan Leffler Jan 17 '13 at 04:52
  • @JonathanLeffler If possible I would like to avoid typing collectstats.pl twice. If nothing else is possible then this is what I will need to do, but it would be preferable to avoid it. – MikeKulls Jan 17 '13 at 04:57
  • haha, try typing "pee command" into google, not what I expected!! – MikeKulls Jan 17 '13 at 05:01
  • `cron` can already email you what's sent to STDERR when something is sent to STDERR, fyi – ikegami Jan 17 '13 at 06:59
  • @ikegami Thanks, that is good to know. I would like to have more control, however, such as sending an SMS after x hours. That does make me think that maybe a package is available that handles this sort of thing. – MikeKulls Jan 17 '13 at 21:54
  • @JonathanLeffler so far your answer looks like the best option. It's not ideal but it's not that big an issue either. If scriptwatcher fails then collectstats will keep working. If you write that up as an answer instead of a comment I can do the green tick thing. – MikeKulls Jan 17 '13 at 22:16

2 Answers


Turn it around and do:

5 * * * * scriptwatcher.pl collectstats.pl any extra args

and have your scriptwatcher.pl deal with the redirection and running the script given in its args.

ysth
  • Thanks ysth, the issue I can see with this though is that if scriptwatcher fails for any reason then ALL cron jobs stop working. That is very very bad especially because it would be my fault. However if it is the other way around then collectstats would still run even if scriptwatcher fails. – MikeKulls Jan 17 '13 at 21:56
  • the same is true with the pipe to scriptwatcher. in fact, I'd think scriptwatcher running the other job is slightly *more* safe, if you have it execute the job writing stderr out to a temp file instead of a pipe. – ysth Jan 17 '13 at 22:53
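
A shell sketch of that arrangement (the function name and reporting are illustrative only): the watcher runs the job itself, discards its stdout, captures stderr to a temp file, and reports only when something went wrong.

```shell
# watch_run: run the watched command, discard its stdout, capture its
# stderr to a temp file, and report if the command failed or wrote errors.
watch_run() {
    script=$1; shift
    errfile=$(mktemp)
    status=0
    "$script" "$@" >/dev/null 2>"$errfile" || status=$?
    if [ "$status" -ne 0 ] || [ -s "$errfile" ]; then
        # placeholder for the real action: email, SMS after x hours, ...
        printf 'WATCH: %s exited %d, stderr follows:\n' "$script" "$status"
        cat "$errfile"
    fi
    rm -f "$errfile"
}
```

The temp file (rather than a pipe) is what makes ysth's variant in the comments slightly safer: the job's stderr survives even if the reporting step dies.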

Use the line:

5 * * * * collectstats.pl 2>&1 1>/dev/null | scriptwatcher.pl collectstats.pl

That is, tell the script watcher which script it is watching.
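
Inside scriptwatcher, the script name then arrives as the first argument and the error stream arrives on standard input. A minimal shell sketch of that shape (the function name and handling are illustrative, not from the question):

```shell
# watch_stdin: read the watched script's name from $1 and its
# stderr lines from stdin; real handling (mail etc.) goes in the loop.
watch_stdin() {
    name=${1:?usage: watch_stdin script-name}
    while IFS= read -r line; do
        printf '%s: %s\n' "$name" "$line"   # placeholder for mailing, logging, ...
    done
}

echo "oops" | watch_stdin collectstats.pl   # prints "collectstats.pl: oops"
```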

Alternatively, as ysth suggests in his answer, have scriptwatcher run the script it is supposed to watch, the way nohup and su and so on run other commands for you. This avoids you having to name the script twice. If repetition is a problem, choose ysth's answer, please.

For your auxiliary question:

is it possible to pipe the stdout from collectstats into another script at the same time?

Look up the pee command, parallel to tee but with processes (hence the p) instead of files. (Be careful: 'pee command' isn't a good Google search term, but the URL I point to is an SO answer with more information about it and direct links to where you can find the code, etc.)
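
`pee` is part of the moreutils package, so it may not be installed by default; the same one-input, two-consumers fan-out can be sketched portably with `tee` and a named pipe (all paths here are illustrative):

```shell
# Duplicate one stream to two consumers: tee writes a verbatim copy
# to a file while a background reader drains a second copy from a fifo.
dir=$(mktemp -d)
mkfifo "$dir/fifo"
wc -l < "$dir/fifo" > "$dir/lines" &                  # consumer 1: line count
printf 'one\ntwo\n' | tee "$dir/fifo" > "$dir/copy"   # consumer 2: full copy
wait                                                  # let the fifo reader finish
```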

Jonathan Leffler