
I'm trying to get a bash script to run exclusively -- if another instance of the script is already running, then wait for the other instance to finish before starting the new one. I found some references to flock which sounds like it should do what I need, but it doesn't seem to be working the way I expect. I have the following script:

#!/bin/bash

inst=$1
lock=/nobackup/julvr/locks/_tst.lk
exec 200>$lock
flock -x -w30 200 || { echo "$inst: failed flock" && exit 1; }
echo "$inst:got lock"
for i in {1..2}; do
        echo "$inst: $i"
        sleep 1
done
echo "$inst:done script";

And then I run

> flocktest.sh test1 & flocktest.sh test2
[1] 25213
test1:got lock
test1: 1
test2:got lock
test2: 1
test1: 2
test2: 2
test1:done script
test2:done script
[1]+  Done                    flocktest.sh test1

It seems both instances of flocktest are running in parallel... When does flock release its lock? How do I make it keep the lock until the script is complete?

(As an aside, if I do flock -x -w 20 200, then it complains flock: 20: fcntl: Bad file descriptor..., which seems odd, as the man page seems to imply I can add a -w timeout parameter before the file descriptor...)

HardcoreHenry

2 Answers


flock seems very complicated to me.
Perhaps you can try it this way instead.

cat script_unique.sh
#!/bin/bash
# Busy-wait while another run has left the run_sh marker variable set.
while test -n "$run_sh"
do
    sleep 2
done
# Mark this run as active, do the work, then clear the marker.
export run_sh="run_sh"
sleep 2
echo "$run_sh"
sleep 4
echo "$0 $1"
run_sh=""
ctac_
  • Thanks, but flock is indeed cleaner. My issue was that I was using an older version of flock, with a different interface than what the man page described. I'm closing this issue. – HardcoreHenry Jan 18 '19 at 19:46

OK, I found my bug: I was using a really old version of flock that had a different interface than the one described in the man page. I updated to a newer version of flock, and it worked.
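
To answer the other part of my question: a lock taken on a file descriptor is held until that descriptor is closed, so with the exec 200> pattern the lock is released automatically when the script exits. For reference, a minimal sketch of that pattern with a current util-linux flock (the lock path here is just a placeholder):

#!/bin/bash

lock=/tmp/flocktest.lk        # placeholder lock path

# Open the lock file on fd 200; the descriptor stays open for the
# lifetime of the script.
exec 200>"$lock"

# Wait up to 30 seconds for an exclusive lock on fd 200.
flock -x -w 30 200 || { echo "failed to get lock"; exit 1; }

# Critical section: the lock is held until fd 200 is closed, which
# happens when the script exits.
echo "got lock"
sleep 2
echo "done"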

HardcoreHenry