
I have a Perl script which forks and daemonizes itself. It's run by cron, so to avoid leaving a zombie around, I shut down STDIN, STDOUT, and STDERR:

open STDIN, '/dev/null'   or die "Can't read /dev/null: $!";
open STDOUT, '>>/dev/null' or die "Can't write to /dev/null: $!";
open STDERR, '>>/dev/null' or die "Can't write to /dev/null: $!";
if (!fork()) {
    do_some_fork_stuff();
}

The question I have is: I'd like to restore at least STDOUT after this point (restoring the other two would be nice as well). But what magic symbols do I need to use to re-open STDOUT as what STDOUT used to be?

I know that I could use "/dev/tty" if I were running from a tty (but I'm running from cron and depending on stdout elsewhere). I've also read tricks where you can set STDOUT aside with open SAVEOUT, ">&STDOUT", but just the act of making this copy doesn't solve the original problem of leaving a zombie around.

I'm looking for some magic like open STDOUT, "|-" (which I know isn't it) that re-opens STDOUT the way it's supposed to be opened.
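To illustrate the SAVEOUT trick mentioned above in three-argument form with a lexical handle, here is a minimal sketch (as noted, duplicating the handle by itself doesn't fix the cron/zombie issue; it only lets you get STDOUT back later):

```perl
use strict;
use warnings;

# duplicate STDOUT's file descriptor before pointing STDOUT elsewhere
open(my $saved_out, '>&', \*STDOUT) or die "Can't dup STDOUT: $!";
open(STDOUT, '>>', '/dev/null')     or die "Can't write /dev/null: $!";

print "this line is swallowed by /dev/null\n";

# restore STDOUT from the saved duplicate
open(STDOUT, '>&', $saved_out) or die "Can't restore STDOUT: $!";
close($saved_out);
print "back on the original STDOUT\n";
```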

Josh
    On a stylistic note: It's better to use three argument open than two argument open. – Leon Timmermans Aug 28 '09 at 19:59
    If your process is started by `crond`, then STDOUT is a fifo that `crond` monitors for error messages and then emails to you. If your process _forks away_ from crond, and closes that file descriptor, then crond no longer monitors that fifo, and so there's simply no way to get it back. *You* could arrange to send the mail yourself, if you like. – geocar Aug 28 '09 at 20:00
  • Thanks geocar, that is the answer I wasn't hoping for but will settle for. See my response to jmanning2k below to see what I ended up doing. – Josh Aug 28 '09 at 20:19

3 Answers


# make a copy of the STDERR file descriptor

open(CPERR, ">&STDERR") or die "Can't dup STDERR: $!";

# redirect STDERR into a log file

open(STDERR, ">>", "xyz.log") or die "Can't redirect STDERR: $!";

# ... code that writes warnings to the log via STDERR ...

# close the redirected filehandle

close(STDERR) or die "Can't close STDERR: $!";

# restore STDERR from the saved copy

open(STDERR, ">&CPERR") or die "Can't restore STDERR: $!";
close(CPERR);

I hope this works for you.

– Hariprasad AJ
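The same save/redirect/restore dance can be written with three-argument open and a lexical handle, which the comments on the question recommend. A sketch (the xyz.log filename is carried over from the answer above):

```perl
use strict;
use warnings;

# save the current STDERR by duplicating its file descriptor
open(my $saved_err, '>&', \*STDERR) or die "Can't dup STDERR: $!";

# redirect STDERR into a log file
open(STDERR, '>>', 'xyz.log') or die "Can't redirect STDERR: $!";
warn "this warning goes to xyz.log\n";

# restore STDERR from the saved duplicate
open(STDERR, '>&', $saved_err) or die "Can't restore STDERR: $!";
close($saved_err);
warn "this warning goes to the original STDERR\n";
```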


If it's still useful, two things come to mind:

  1. You can close STDOUT/STDERR/STDIN in just the child process (i.e. inside if (!fork())). This allows the parent to keep using them, because they're still open there.

  2. I think you can use the simpler close(STDOUT) instead of opening it to /dev/null.

For example:

if (!fork()) {
    close(STDIN) or die "Can't close STDIN: $!\n";
    close(STDOUT) or die "Can't close STDOUT: $!\n";
    close(STDERR) or die "Can't close STDERR: $!\n";
    do_some_fork_stuff();
}
Michael Krebs
    The problem with closing STDOUT instead of reopening is that if you open other files, they might get fd 0,1 or 2 - preventing you from reopening STDOUT in the future. – jmanning2k Sep 10 '09 at 14:34
  • And please see [perl bug when STDIN, STDOUT and STDERR get closed?](http://stackoverflow.com/questions/9321422/perl-bug-when-stdin-stdout-and-stderr-get-closed/9321512) which leads to a [perl bug #23838](https://rt.perl.org/rt3//Public/Bug/Display.html?id=23838). It basically says: "IMHO don't close STDIN and leave it closed re-open /dev/null or tmpfile or something..." – Peter V. Mørch Feb 17 '12 at 01:08
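Following the fd-reuse caveat in the comments above, a sketch that re-opens the standard handles onto /dev/null in the child instead of closing them, so descriptors 0–2 stay occupied (do_some_fork_stuff is a stand-in for the question's sub):

```perl
use strict;
use warnings;

sub do_some_fork_stuff {        # placeholder for the question's daemon work
    print "child output\n";     # ends up in /dev/null after the re-opens
}

if (!fork()) {
    # child: point the standard handles at /dev/null rather than closing
    # them, so later open() calls can't accidentally grab fds 0, 1 or 2
    open(STDIN,  '<',  '/dev/null') or die "Can't read /dev/null: $!";
    open(STDOUT, '>>', '/dev/null') or die "Can't write /dev/null: $!";
    open(STDERR, '>>', '/dev/null') or die "Can't write /dev/null: $!";
    do_some_fork_stuff();
    exit 0;
}

# parent: its own STDIN/STDOUT/STDERR are untouched
wait;                           # reap the child so it doesn't become a zombie
```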

Once closed, with no saved copy, there's no way to get it back.

Why do you need STDOUT again? To write messages to the console? Use /dev/console for that, or write to syslog with Sys::Syslog.

Honestly though, the other answer is correct. You must save the old stdout (cloned to a new fd) if you want to reopen it later. It does solve the "zombie" problem, since you can then redirect fd 0 (and 1 & 2) to /dev/null.
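If syslog is acceptable for the messages, a minimal Sys::Syslog sketch (the ident string 'myscript' and the 'user' facility are placeholders, not from the question):

```perl
use strict;
use warnings;
use Sys::Syslog qw(openlog syslog closelog);

# log through syslog instead of STDOUT; this survives daemonizing
# because it doesn't depend on the inherited standard handles
openlog('myscript', 'pid', 'user');   # ident, options, facility
syslog('info', 'daemon started, pid %d', $$);
closelog();
```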

jmanning2k
  • Playing around with things, I found that if I used the "SAVEOUT" method, cron would still be waiting. I need stdout again because the script launched by cron already has an output redirect, and I was trying to make minimal changes to it beyond the zombie fix. I did find another solution that works for me though: If I replace do_some_fork_stuff() with a system call to another "wrapper" script, and _that_ script just forks itself and does the fork_stuff(), then I can both detach from cron properly and preserve stdout for the remainder of the main script. (Hopefully that description makes sense) – Josh Aug 28 '09 at 20:17
  • Yikes. Complicated, but if it works... Try exec in the cronjob line to fix cron waiting for your process. Had that in a draft, but must have cut it before posting. – jmanning2k Aug 28 '09 at 20:26
  • Aha! "exec" seems to be exactly what I originally wanted! That's perfect! No changes to the code, at least for my scaled-down test. Will have to test it in the office tomorrow. Thanks! – Josh Aug 28 '09 at 23:19
  • sadly, after trying it (tomorrow meant Monday), "exec" still leaves a zombie around, just lower down the chain. – Josh Aug 31 '09 at 12:11
  • @jmanning2k Why do you say "Once closed, there's no way to get it back" but also "Honestly though, the other answer is correct. You must save the old stdout (cloned to a new fd) if you want to reopen it later." Your statement "Once closed, there's no way to get it back" isn't correct. – jfritz42 Mar 05 '13 at 18:51
    @jfritz42 If you reopen STDOUT to /dev/null without saving the original, you cannot reopen the same STDOUT. STDOUT is a write handle and the parent process has an open read filehandle to match. If you close your end, the other side closes. There is no way to get the other side to reopen the connection. Cloning the filehandle before reopening STDOUT as /dev/null keeps an active filehandle. Calling it "close" and "reopen" is probably the inaccurate and confusing part. – jmanning2k Mar 27 '13 at 15:47
  • @jmanning2k Hi, I don't think my last comment was very clear. What I meant to say is that you can save the original, therefore your answer should not start with the statement: "Once closed, there's no way to get it back." – jfritz42 Mar 27 '13 at 22:23