
I am trying to secure my scripts against parallel execution by incorporating flock. I have read a number of threads here and came across a reference to this: http://www.kfirlavi.com/blog/2012/11/06/elegant-locking-of-bash-program/, which incorporates many of the examples presented in the other threads.

My scripts will eventually run on Ubuntu (>14), OS X 10.7 and 10.11.4. I am mainly testing on OS X 10.11.4 and have installed flock via homebrew.

When I run the script below, the locks are created, but I believe the subscripts are being forked, and it is those subscripts that I am trying to keep from running more than one instance each.

#!/bin/bash
#----------------------------------------------------------------

set -vx
set -euo pipefail   # -e already enables errexit, so a separate "set -o errexit" is redundant
IFS=$'\n\t'

readonly PROGNAME=$(basename "$0")
readonly LOCKFILE_DIR=/tmp
readonly LOCK_FD=200
subprocess1="/bash$/subprocess1.sh"
subprocess2="/bash$/subprocess2.sh"

lock() {
    local prefix=$1
    local fd=${2:-$LOCK_FD}
    local lock_file=$LOCKFILE_DIR/$prefix.lock

    # create lock file
    eval "exec $fd>$lock_file"

    # acquire the lock
    flock -n $fd \
        && return 0 \
        || return 1
}

eexit() {
    local error_str="$*"   # "$*" joins all arguments into a single string

    echo "$error_str"
    exit 1
}

main() {
    lock "$PROGNAME" \
        || eexit "Only one instance of $PROGNAME can run at one time."



    # My child scripts
    sh "$subprocess1"   # runs in the foreground, so main waits for it to finish

    sh "$subprocess2"
}
main

$subprocess1 is a script that loads ncftpget and logs into a remote server to grab some files. Once finished, the connection closes. I want to run subprocess1 every 15 minutes via cron. I have done so with success, but sometimes there are many files to grab and the job takes longer than 15 minutes. It is rare, but it does happen. In such a case, I want to ensure a second instance of $subprocess1 can't be started. For clarity, a small example of such a subscript is:

#!/bin/bash
remoteftp="someftp.ftp"
ncftplog="somelog.log"
localdir="some/local/dir"
ncftpget -R -T -f "$remoteftp" -d "$ncftplog" "$localdir" "*.files"
EXIT_V="$?"
case $EXIT_V in
    0)  O="Success!";;
    1)  O="Could not connect to remote host.";;
    2)  O="Could not connect to remote host - timed out.";;
    3)  O="Transfer failed.";;
    4)  O="Transfer failed - timed out.";;
    5)  O="Directory change failed.";;
    6)  O="Directory change failed - timed out.";;
    7)  O="Malformed URL.";;
    8)  O="Usage error.";;
    9)  O="Error in login configuration file.";;
    10) O="Library initialization failed.";;
    11) O="Session initialization failed.";;
esac

if [ "$EXIT_V" = 0 ]; then
    echo "$O"
else
    echo "There has been an error: $O"
    echo "Exiting now..."
    exit 1
fi
echo "Goodbye"

and an example of subprocess2 is:

#!/bin/bash
# ...preamble, script setup items, etc., and then:

java -jar /some/javaprog.jar   # -jar expects a .jar archive, not a .java source file

When I execute the parent script with "sh lock.sh", it progresses through the script without error and exits. The first issue I have is that if I start the script again, I get the error that only one instance of lock.sh can run. What should I have added to the script to indicate that the processes have not completed yet, rather than it merely exiting and giving me back the prompt?
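
The comment below suggests `wait`; if the subscripts end up forked into the background, my understanding is that `wait` would make main() hold the lock until they finish. An untested fragment for main():

sh "$subprocess1" &
pid1=$!
wait "$pid1"          # block until subprocess1 is done before starting subprocess2
sh "$subprocess2" &
wait                  # block until all remaining background jobs finish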

However, if subprocess1 were already running on its own, lock.sh would still load a second instance of subprocess1, because the subscript itself was never locked. How would one go about locking the child scripts, and ideally ensuring that forked processes are taken care of as well? If someone had run subprocess1 at the terminal, or there were a runaway instance, then when cron loads lock.sh I would want it to fail when trying to start its instances of subprocess1 and subprocess2, and not merely exit the way it does when cron tries to load two instances of lock.sh.
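
One idea I have been toying with is to have each subscript take its own lock at the top, so that even a copy started by hand at the terminal is covered. A rough, untested sketch (the lock path /tmp/subprocess1.lock and fd 201 are placeholders I picked):

#!/bin/bash
# Untested sketch: the subscript locks itself, so any caller
# (cron, lock.sh, or an interactive shell) hits the same lock.
exec 201>/tmp/subprocess1.lock   # placeholder lock path; fd 201 avoids the parent's 200
flock -n 201 || { echo "subprocess1 is already running."; exit 1; }
# ...the rest of subprocess1 follows; the lock is released when the script exits...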

My main concern is loading multiple instances of ncftpget, which is called by subprocess1, and likewise of a third script I hope to incorporate, subprocess2, which launches a java program that deals with the downloaded files. Neither ncftpget nor the java program can run in parallel without breaking many things, but I'm at a loss on how to control them adequately.
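
One thing I noticed in the flock man page is that it can wrap a single command directly, which might let me serialize the critical programs themselves rather than the wrapper scripts. A sketch with placeholder lock paths:

# Untested sketch: serialize the critical commands themselves
# (/tmp/ncftpget.lock and /tmp/javaprog.lock are placeholders).
flock -n /tmp/ncftpget.lock ncftpget -R -T -f "$remoteftp" -d "$ncftplog" "$localdir" "*.files"
flock -n /tmp/javaprog.lock java -jar /some/javaprog.jar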

I thought I could use something similar to this in the main() function of lock.sh:

 # This is where I try to lock the subscript
    pidfile="/tmp/$(basename "$subprocess1").pid"   # a separate pid file; redirecting onto
                                                    # "$subprocess1" itself would truncate the script
    # lock it
    exec 201>"$pidfile"        # fd 200 is already used for the parent's own lock
    flock -n 201 || exit 1
    pid=$$
    echo "$pid" 1>&201

but am not sure how to incorporate it.
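
Alternatively, since lock() already takes a prefix and a file descriptor, maybe main() could acquire one lock per subscript on distinct fds. Untested, and I realize it would only guard runs that go through lock.sh, not a subprocess1 started by hand:

main() {
    lock "$PROGNAME" 200 \
        || eexit "Only one instance of $PROGNAME can run at one time."

    lock subprocess1 201 \
        || eexit "subprocess1 is already running."
    sh "$subprocess1"

    lock subprocess2 202 \
        || eexit "subprocess2 is already running."
    sh "$subprocess2"
}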

FocusedEnergy
  • You're on the right track; you just need to include a `wait` in your script, perhaps. A more advanced [example](http://stackoverflow.com/questions/36364505/bash-cron-flock-screen/36366663#36366663) using `screen` is shown in this [question](http://stackoverflow.com/questions/36364505/bash-cron-flock-screen/36366663). – l'L'l May 12 '16 at 20:59
  • Thank you for the suggestion! I will read through it carefully and see if I can make it work for me. It does seem that the problem is comparable: "I have forked processes not inheriting the lock". That summarizes my issue perfectly. – FocusedEnergy May 12 '16 at 21:06
  • It looks like I can accomplish what I need using the examples from that question. Is there any disadvantage to running `flock -n /tmp/path.to.lockfile -c command with args` as a cron job vs. placing the flock in the script as I have above? Since I am running cron anyway, putting the code right in the cron job seems like it will work for at least part of my process, and it is more elegant from a minimal-code perspective (see the crontab sketch after these comments). – FocusedEnergy May 13 '16 at 18:18
  • That's great, and I'm glad the information was useful! As for the cron job method you use, I think it's really personal preference. I like the idea of using flock in the crontab, since that's essentially the starting point each time your command/script executes, and there's less chance of spawning a zombie process from somewhere. I like keeping the crontab fairly minimal, and generally if something is more than a couple of commands I put it in a shell script and have cron call that instead. – l'L'l May 13 '16 at 19:01
  • Perfect! I will run the simple command with the lock in the crontab, and for the more involved script I will keep it as is. Many thanks! – FocusedEnergy May 13 '16 at 19:07
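
For reference, the crontab variant discussed in the comments above would look something like this (the paths are placeholders):

# Placeholder crontab entry: run subprocess1 every 15 minutes, skipping
# the run entirely if the previous one still holds the lock.
*/15 * * * * /usr/bin/flock -n /tmp/subprocess1.lock /path/to/subprocess1.sh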
