I'm writing a multiprocess bash script that continuously runs several processes. There are some functions I need to launch, and every function has a form similar to the one described in the flock man page:
    function1 () {
        # ... preamble, variable initialization ...
        (
            flock -x -w 10 200 || exit 1
            # ... my commands ....
        ) 200>"/var/lock/.$(basename "$0")${conf}-rtrr.lock"
    }
Each function has its own fd number, different from the others, and every function is independent of the others.
The desired behavior is that I should be able to run the same function several times in parallel; the only condition is that at most one instance identified by a function and its parameter can run at any given moment. It is not a problem if a function execution fails: the parent runs in an infinite loop and launches it again, as sketched after the examples below. For example, consider this list of running processes/functions:
OK:
function1 a
function1 b
function2 a
function3 b
NO:
function1 a
function1 a
function2 b
function3 b
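Roughly, the parent looks like this (a simplified sketch; the function names and parameters are just the placeholders from the list above):

    while true; do
        function1 a & function1 b &
        function2 a & function3 b &
        wait    # when the children exit (or fail), launch them again
    done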
In every function I build a different lockfile name, using something like:
/var/lock/${program_name}-${parameter}-${function_name}.lock
For example, the lockfile when function1 is called with a:
/var/lock/program-a-function1.lock
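Putting this naming scheme into the pattern above, each function would look roughly like this (a sketch, assuming the parameter arrives as $1 and the program name is literally "program"):

    function1 () {
        local param=$1
        (
            flock -x -w 10 200 || exit 1
            # ... my commands ....
        ) 200>"/var/lock/program-${param}-function1.lock"
        # (other functions would use their own fd numbers, e.g. 201, 202, ...)
    }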
The questions:
- Using the same fd number across several processes (the ones that launch the same function), do I risk that one child process overwrites the fd mapping of another child? The risk would be that a process waits on the wrong lock.
- Can I use a variable as the fd? For example, a number which is a sort of hash of the parameter? (See the first sketch below.)
- Otherwise, is there a way to avoid fd numbers entirely while still using the flock command? (See the second sketch below.)
- Do you think, for this desired behavior, it would be better to use plain files to acquire and release locks, e.g. creating a file when acquiring, deleting it when releasing, and having an if at the top to check for the lock file's presence? (See the third sketch below.)
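To illustrate the first question: maybe instead of computing a hash myself, I could use bash's automatic fd allocation. A sketch of what I mean ({varname}>file needs bash >= 4.1):

    function1 () {
        local param=$1 fd
        # let bash pick a free fd (>= 10) and store its number in $fd
        exec {fd}>"/var/lock/program-${param}-function1.lock"
        flock -x -w 10 "$fd" || return 1
        # ... my commands ....
        exec {fd}>&-    # closing the fd releases the lock
    }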
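For the second question, I know flock(1) can also wrap a command directly, taking the lock file itself as an argument, so no fd appears in my script at all (do_the_work is a hypothetical stand-in for my real commands); would this form fit my case?

    flock -x -w 10 "/var/lock/program-${param}-function1.lock" do_the_work "$param"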
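For the third question, something like this is what I have in mind (simplified; note that the presence check and the file creation are two separate, non-atomic steps):

    lock="/var/lock/program-${param}-function1.lock"
    if [ ! -e "$lock" ]; then
        touch "$lock"       # "acquire": create the lock file
        # ... my commands ....
        rm -f "$lock"       # "release": remove the lock file
    fi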