
Let's say I have the following:

#!/usr/bin/perl
use strict;
use warnings;
use CGI ":standard";

...Snippet...

open (FH, '>', "file.txt") or die ("ERROR:$!");
print FH "something";
close(FH);

As it is CGI running under Apache, this script could be called concurrently.

  • How do writing and reading behave when the script is called concurrently?
    • There are no locks or anything by default, correct?

What happens if I want conditional logic like the following?

  1. Wait until lsof shows the file is clear
  2. Read from the file
  3. Concatenate with new text
  4. Write the result back to the file

I am investigating using lsof to set up synchronous file locking, but I don't want to go down a bad path. (I might be better off using SQL.)
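Roughly, I imagine something like the sketch below (the file name and appended text are placeholders, and I realise the lsof check and the subsequent open are not atomic):

#!/usr/bin/perl
use strict;
use warnings;

# Step 1: poll lsof until no process has file.txt open.
# Note: this is racy; another process can open the file
# between this check and the open below.
sleep 1 while system('lsof file.txt >/dev/null 2>&1') == 0;

# Step 2: read the whole file.
open(my $fh, '<', 'file.txt') or die("ERROR:$!");
my $contents = do { local $/; <$fh> };
close($fh);

# Step 3: concatenate with new text.
$contents .= "something";

# Step 4: write the result back.
open($fh, '>', 'file.txt') or die("ERROR:$!");
print $fh $contents;
close($fh);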

asked by PaulM (edited by ikegami)

1 Answer

  1. Yes, you should almost certainly use a database for this (a SQLite sketch follows below).

  2. If there's some reason why you really don't want to use a database, then at least use the file locking mechanisms that already exist and don't invent your own (see the flock sketch below). There are plenty of questions (and answers) about this in perlfaq5.
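On point 1, a minimal sketch using DBI with SQLite (the database file, table name, and value are placeholders; SQLite handles the locking for you):

use strict;
use warnings;
use DBI;

# Each CGI invocation opens its own connection; SQLite serialises writers.
my $dbh = DBI->connect('dbi:SQLite:dbname=data.db', '', '',
                       { RaiseError => 1 });
$dbh->do('CREATE TABLE IF NOT EXISTS log (msg TEXT)');
$dbh->do('INSERT INTO log (msg) VALUES (?)', undef, 'something');
$dbh->disconnect;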
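On point 2, a minimal sketch of a read-modify-write under flock (the file name and appended text are placeholders; flock is advisory, so every process touching the file must use it):

use strict;
use warnings;
use Fcntl qw(:flock SEEK_SET);

# '+<' opens for read/write without truncating; the file must already exist.
open(my $fh, '+<', 'file.txt') or die("ERROR:$!");

# Block until we hold an exclusive lock.
flock($fh, LOCK_EX) or die("Cannot lock: $!");

my $contents = do { local $/; <$fh> };     # read everything
$contents .= "something";                  # modify in memory

seek($fh, 0, SEEK_SET) or die("Cannot seek: $!");
truncate($fh, 0)       or die("Cannot truncate: $!");
print $fh $contents;                       # write the new contents

close($fh);                                # flushes and releases the lock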

answered by Dave Cross
  • Regarding writing to FH from concurrent calls without file locking: would all calls just write whatever they could, whenever they could (full print statements, partial print statements)? – PaulM Oct 30 '20 at 16:44
  • Or, write to a single service that then writes to the file (like syslog does; see the sketch below). The file locking mechanism isn't a good solution because there's a chance your process will never get a lock. You'll end up writing a program that is mostly about trying to get a lock instead of whatever you are trying to do. – brian d foy Oct 30 '20 at 20:45
  • I think POSIX guarantees atomicity for small writes (<= 4KiB?). So that's another option. – ikegami Oct 30 '20 at 20:54
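As a rough sketch of ikegami's point just above: open in append mode and emit each record as a single small print. With O_APPEND the kernel positions every write at end-of-file, though the exact atomicity guarantee varies by OS and filesystem (the file name and record are placeholders):

use strict;
use warnings;
use IO::Handle;

open(my $fh, '>>', 'file.txt') or die("ERROR:$!");   # '>>' sets O_APPEND
$fh->autoflush(1);           # flush per print so each record is one write
print $fh "something\n";     # keep each record small and newline-terminated
close($fh);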
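And a sketch of brian d foy's single-writer suggestion using Sys::Syslog (the identifier and message are placeholders; syslogd serialises the actual file writes):

use strict;
use warnings;
use Sys::Syslog qw(:standard :macros);

openlog('my_cgi', 'pid', LOG_USER);    # 'my_cgi' is a placeholder ident
syslog(LOG_INFO, '%s', 'something');   # one call per record
closelog();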