2

This strange need comes from expanding requirements and no time to change the design (refactor). It is not good design, sure, but I need to deal with it now and hope to refactor later.

There are a few log files, opened early on, which are printed to throughout the code. The new requirement implies that with a (new) command-line option (--noflag) one of these log files becomes irrelevant.

All I could do at the moment was to pad the definition (open my $fh, ...) and all uses of it (print $fh ...) with if $flag. This is clearly bad design, and it is error prone (it isn't pretty either).
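For illustration, the pattern looks roughly like this (the file name and messages are made up):

```perl
use strict;
use warnings;

my $flag = 1;    # in the real code this comes from command-line parsing

my $fh;
if ($flag) {
    open $fh, '>', 'extra.log' or die "Can't open log: $!";
}

# ... and every single use needs the same guard, which is error prone
print $fh "stage one done\n" if $flag;
print $fh "stage two done\n" if $flag;
```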

Is there a way to do something with $fh, when it is associated with the file, so that any following print $fh ... is accepted by the interpreter but simply does not run the print, without error? (Let me imagine something like, say, $fh = VOID if $flag;.) Or, is there some NULL stream or such? All I know of are STDOUT (1), STDERR (2), and STDIN (0).

I do not want $fh to print anywhere else, ideally not even to /dev/null (if that is possible?). I did look around and couldn't find anything related. I'd appreciate being pointed to information if in fact it is out there already. Any ideas are appreciated.

PS. First question ever asked here (after years of using SO), please let me know if it's off.


UPDATE

Thanks for the responses. They prompt me to add to/refine this question: Are prints directed to /dev/null possibly optimized away, so that the 'printing' actually doesn't happen? (I am still also interested in whether it is possible to set a filehandle so as to tell Perl 'do not print here'.)

I am trying to avoid running void (print) statements, without adding conditionals.


Update/Clarification

To summarize a bit from comments (thank you!): This was not a quest for performance optimization. I completely agree with everything said in comments on this. It is simply that executing pointless statements (typically around a million) makes me uneasy. Also, I was curious about some possible dark corner of Perl that I haven't run into. (Most of this has been addressed in answers/comments.)

zdim
  • 64,580
  • 5
  • 52
  • 81
  • 2
    Why is /dev/null not good enough? Is the execution of the print itself too costly? Then you need to put a conditional in your code. The target filehandle cannot stop the print from happening (you might redefine print, though...) – Thilo Feb 29 '16 at 01:10
  • @Thilo Well, it is good enough. The execution cost won't hurt. But in principle I'd rather not execute void statements. Also, I was really curious whether it is possible to set a filehandle to something that will tell Perl 'do not print here'. – zdim Feb 29 '16 at 01:13
  • 2
    @zdim: If `/dev/null` is *"good enough"* and *"The execution cost won't hurt"* then you have your answer. If you're creating a temporary hack to get the job done then you can't really afford to be fussy about executing *"void statements"*. Open your file handle to `/dev/null`. This is exactly the sort of case it is intended for – Borodin Feb 29 '16 at 01:47
  • No matter what you do with the file handle 'print $fh do_something_expensive();' is always going to run do_something_expensive(). – Q the Platypus Feb 29 '16 at 03:08
  • @Borodin Yeah, good point. Was hoping that there would be some hidden channel of communication with Perl (_shh, don't actually print here_). It's a good point, `/dev/null` is for that, and this is a hack. – zdim Feb 29 '16 at 03:41
  • @QthePlatypus Yes, that was the question. Can I somehow set `$fh` to avoid running the print. It's not expensive -- this was not about 'optimization', I should've made that clear. It's just ugly to run a few million pointless statements. – zdim Feb 29 '16 at 03:43
  • @zdim no you can't. Perl uses eager evaluation for the most part so that type of optimization isn't possible. – Q the Platypus Feb 29 '16 at 03:47

4 Answers

3

If you are on a Unix operating system, you can use '/dev/null':

open my $fh, '>', '/dev/null' or die 'This should never happen';

/dev/null will silently accept and discard all input.
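If portability matters, the core module File::Spec can supply the platform's null-device name instead of hard-coding '/dev/null':

```perl
use strict;
use warnings;
use File::Spec;

# devnull() returns '/dev/null' on Unix and 'NUL' on Windows
my $null = File::Spec->devnull;

open my $fh, '>', $null or die "Can't open $null: $!";
print $fh "silently discarded\n";
close $fh;
```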

Q the Platypus
  • 825
  • 1
  • 7
  • 13
  • Alright, thank you. In my understanding prints to `/dev/null` still execute, and I'd rather not run those prints at all, if possible. But this is the next best thing. – zdim Feb 29 '16 at 01:03
  • 4
    @zdim: Don't think about it. Unless you've profiled your code and optimised the most severe bottlenecks then there are probably far worse atrocities in there that you don't know about. This is just premature optimisation – Borodin Feb 29 '16 at 01:54
2

Closing your filehandle

close $fh;

will make all your prints to that file handle fail. Run

no warnings 'closed';

to suppress all the warning messages that this would generate (you do use warnings, right?).
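Note that the warnings pragma is lexically scoped, so `no warnings 'closed'` can be confined to a block rather than disabling the warning globally. A small demonstration (using an in-memory handle just for the example):

```perl
use strict;
use warnings;

open my $fh, '>', \my $buf or die $!;   # in-memory handle, for the demo
close $fh;

{
    no warnings 'closed';               # only in effect inside this block
    my $ok = print $fh "dropped\n";     # fails quietly, returns false
}
```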

mob
  • 117,087
  • 18
  • 149
  • 283
  • Alright, that is interesting :). I do `use warnings;` so much so that dropping them counts as something else. Seriously: There's a lot of other I/O running so this could be dangerous -- it disables it globally, for everything, right? Good idea though, didn't think of that. – zdim Feb 29 '16 at 05:00
1

Through tie magic, you could create a magical handle for which operations are always successful.

perl -e'
   {
      package Handle::Dummy;
      use Tie::Handle qw( );
      use Symbol qw( gensym );
      our @ISA = qw( Tie::Handle );
      sub new { my $fh = gensym; tie *$fh, $_[0]; $fh }
      sub TIEHANDLE { bless(\my $dummy, $_[0]) }
      sub READ  { return 1; }
      sub WRITE { return 1; }
      sub CLOSE { return 1; }
   }

   my $fh = Handle::Dummy->new();   
   print($fh "abc\n") or die $!;
   close($fh) or die $!;
   print("ok\n");
'
ok

That avoids the system calls, but it replaces them with expensive Perl subroutine calls.

It's far simpler and more reliable[1] to just use /dev/null. It could very well be faster, too.


Are prints marked to go to /dev/null possibly optimized

No. Perl doesn't know anything about /dev/null.

How slow do you think a system call is? This doesn't sound like the right thing to optimize!


  1. The magical file handle is not associated with a system file handle, so it can't be passed to a C library, it won't survive exec, etc.
ikegami
  • 367,544
  • 15
  • 269
  • 518
  • This pretty much answers it, if this is the only way. This was not at all about optimization, sorry if the question was unclear on that point. I love system calls :). Just wanted to avoid executing a pointless statement a few million times. (The log is big.) It's just ugly. I thought that there may be a way to tell Perl 'no prints here.' It appears that that is not possible. Then it's going to be `/dev/null`. Thank you. This is a piece of magic :). – zdim Feb 29 '16 at 03:37
  • A simpler system-avoiding solution has occurred to me: One could create a PerlIO layer (e.g. see [PerlIO::via](https://perldoc.perl.org/PerlIO/via.html)) that's a sink (silently discards what's sent to it). – ikegami Jul 29 '19 at 05:47
  • Wow, that's really interesting. I had no idea about it -- thank you :). Will try it out :) – zdim Jul 29 '19 at 06:29
  • I found PerlIO::via lacking. I can't remember exactly what the problem was --something about inability to determine how much buffer space is available in the lower layer?-- but it meant that you basically had to build the whole file in memory before sending it. The C API on which PerlIO::via is based didn't have the problem, so XS code could build a proper layer. But what you could do with PerlIO::via layers was limited. Obviously, not sending data to the lower layer until the handle is closed is not a problem when you are building a sink :) – ikegami Jul 29 '19 at 06:37
  • Yup, easy as pie (for what I need) -- thank you! There's still a method call for each print (even as it does _nothing_, so overhead only), but well OK it's got to do _something_ :). And I need to go through `/proc` to delete the file in `DESTROY` (and can't do that everywhere), but, again, the whole issue presumes that a file _does_ get opened; can always blow it away at the end of program. It's cool (for what it is, a pure-Perl hook of sorts). Thanks for warnings, too! – zdim Jul 30 '19 at 06:49
  • Combine with `open \$var`, and no system file handle is open. (Even if there was, what's that about `/proc`?!?) – ikegami Jul 30 '19 at 06:51
  • I mean as I stay with empty file it'd be good to delete it (in `DESTROY`), since if it's opened with this layer it's not needed; however, I don't have a way to get the filename, so it seems :( – zdim Jul 30 '19 at 06:52
  • "_Combine with open `\$var`, ..._" -- ah, nice :) – zdim Jul 30 '19 at 06:55
  • Or override `OPEN`,`FDOPEN` and `SYSOPEN`. – ikegami Jul 30 '19 at 06:58
  • Yes, writing to in-memory (instead of file) takes care of it. I didn't realize from docs that `OPEN` & Co allow me to override those calls ... will try it out, this is fun and I can see even some uses :). I wonder how it compares to tie-ing efficiency wise, will benchmark (should be better?) – zdim Jul 30 '19 at 07:10
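For reference, the PerlIO::via sink discussed above can be sketched like this (the layer name Sink is made up; only the PUSHED and WRITE methods are implemented, and the bottom layer is an in-memory scalar so no system file handle is involved):

```perl
use strict;
use warnings;

package PerlIO::via::Sink;

# called when the :via(Sink) layer is pushed onto a handle
sub PUSHED { bless \my $self, shift }

# claim success without passing anything down to the lower layer
sub WRITE { my ($self, $buf, $fh) = @_; return length $buf }

package main;

open my $fh, '>:via(Sink)', \my $scratch or die $!;
print $fh "never stored\n" or die $!;
close $fh or die $!;
```

After this runs, $scratch stays empty, since WRITE never forwards anything to the scalar layer below it.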
-1

You can use an anonymous, temporary file (described about a quarter of the way down the perldoc page for open) like so:

#!/usr/bin/env perl
use strict;
use warnings;
use Getopt::Long;

my $fh;
my $need_log = 2;
print "Initial need_log: $need_log\n";

GetOptions('flag!' => \$need_log);
print "After option processing, need_log: ", $need_log, "\n";

if ($need_log) {
  open($fh, '>', "log.txt") or die "Failed to open log: $!\n";
}
else {
  open($fh, '>', undef) or die "Failed to open anonymous temp file: $!\n";
}

print $fh "Hello World... NOT\n";
exit 0;

Here are a few runs with different uses of the --flag option:

User@Ubuntu:~$ ls -l log.txt
ls: cannot access log.txt: No such file or directory
User@Ubuntu:~$ ./nf.pl
Initial need_log: 2
After option processing, need_log: 2
User@Ubuntu:~$ cat log.txt
Hello World... NOT
User@Ubuntu:~$ rm log.txt
User@Ubuntu:~$
User@Ubuntu:~$
User@Ubuntu:~$ ./nf.pl --flag
Initial need_log: 2
After option processing, need_log: 1
User@Ubuntu:~$ cat log.txt
Hello World... NOT
User@Ubuntu:~$ rm log.txt
User@Ubuntu:~$
User@Ubuntu:~$
User@Ubuntu:~$ ./nf.pl --noflag
Initial need_log: 2
After option processing, need_log: 0
User@Ubuntu:~$ cat log.txt
cat: log.txt: No such file or directory
User@Ubuntu:~$

I've initialized the $need_log variable to '2' so that we can tell if it has a 'True' value as a result of the flag option being present (in which case it will have the value 1) or as a result of no mention of the flag option at all (in which case it will have the value 2).

Specifying '--noflag' triggers the else clause, where 'undef' as the third argument creates the anonymous temporary file. This doesn't perfectly match your goal of not writing at all, but if the file is temporary and you're not putting gigabytes into it, it will hopefully suffice.

Marty
  • 2,788
  • 11
  • 17
  • 1
    That's a step in the wrong direction since it not only involves the OS, it writes to disk. – ikegami Feb 29 '16 at 02:57
  • Anon temp files like this are a pain in the ass for systems administrators as they can consume all the disk space and there is no way to see that this is happening. – Q the Platypus Feb 29 '16 at 03:10
  • 1) The OP never stated there was not to be involvement of the OS – Marty Feb 29 '16 at 03:55