
I am watching my project directory for file changes and running a sync script whenever files change.

Certainly I do not want to run a second synchronization before the first one is done. The flock utility seems to be a good fit for preventing the second sync from running, as in

fswatch -0 ./myproject | xargs -0 -n 1 flock /tmp/my.lock ./container_update.sh

However, that just puts every next request into a waiting queue, so if I change 20 small files, twenty synchronizations will be run. That could be solved with flock -n, which quits immediately if the lock cannot be obtained, but then I would lose changes performed while a sync is in progress.
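
For clarity, the flock -n variant I mean is just the same command with -n added; it drops, rather than queues, events that arrive while a sync is running:

fswatch -0 ./myproject | xargs -0 -n 1 flock -n /tmp/my.lock ./container_update.sh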

I tried building a naive single-slot queue, where a new item first takes a "queue" lock and then proceeds to the "main" lock, leaving the "queue" lock free for exactly one more request. It doesn't help; change requests continue to pile up.

fswatch ./myproject | xargs -0 -n 1 "flock -n /tmp/my-queue.lock flock /tmp/my-main.lock flock -u /tmp/my-queue.lock ./container_update.sh"
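
To make the intent clearer, here is roughly what that one-liner is trying to express, written as a standalone wrapper script instead (only a sketch: the script name queued_sync.sh and the lock paths are placeholders, and I am assuming the discoteq flock supports locking an inherited file descriptor the same way util-linux flock does):

#!/bin/sh
# Hypothetical wrapper (queued_sync.sh) sketching the single-slot queue idea.
# Open the lock files on dedicated file descriptors held by this shell.
exec 8>/tmp/my-queue.lock || exit 1
exec 9>/tmp/my-main.lock || exit 1
# Try to grab the single queue slot; if another event is already waiting, drop this one.
flock -n 8 || exit 0
# Block here until the currently running sync (if any) releases the main lock.
flock 9
# Free the queue slot so exactly one later event can line up behind this one.
flock -u 8
# Run the sync while still holding the main lock (released when the script exits).
./container_update.sh

which would then be driven as fswatch -0 ./myproject | xargs -0 -n 1 ./queued_sync.sh.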

What would be a way to let only one "next" request be executed?

P.S. If it matters, I am running this on a Mac with this implementation of flock, which is supposed to be identical to the Linux one: https://github.com/discoteq/flock

  • In your second `xargs` command line, you seem to be missing 3 semicolons (or maybe some other connector, such as `&&`). Is that a bug in your script or in transcription to the question, or in my understanding of what you're trying to do? – Jonathan Leffler Jul 19 '17 at 20:18
  • Also, from whence cometh `fswatch`? It is not standard on a Mac. – Jonathan Leffler Jul 19 '17 at 20:29
  • @JonathanLeffler From here: http://emcrisostomo.github.io/fswatch/ Homebrew knows about it, though for this particular question the source of changes probably doesn't matter much. – Artem Jul 19 '17 at 20:45
  • It seems there is a bug, yes (fixing things as we speak, but I can't figure out a proper way). I hope the task is clear at least. – Artem Jul 19 '17 at 20:46
  • Moderately clear. Could part of the problem be GNU `getopt()` permuting arguments to (the first) `flock`, so that the flags for the later incantations are all passed to the first? Use `flock -n /tmp/my-queue.lock -- flock /tmp/my-main.lock -- flock -u /tmp/my-queue.lock -- ./container_update.sh`, perhaps. That's a semi-educated guess. – Jonathan Leffler Jul 19 '17 at 20:57
  • Yes, that's in line with what I was trying to do, plus variations on launching a new shell for the inner flocking (via bash -c). So far I have come to the conclusion that flock somehow cannot release /tmp/my-queue.lock while still running the command under it (which might be by design). So I guess the whole approach with flock could be wrong and I'll need to find some other way to keep only one request waiting in the queue. – Artem Jul 19 '17 at 21:14
  • Why are you worrying about the locks etc? If you use `xargs -0 -n 1 …`, it runs each file name in a separate command, and doesn't run the next until the previous one exits. You should be synchronized automatically unless `container_update.sh` script forks off a background task or something like that. You could perhaps lose events if `fswatch` produces more data than `container_update.sh` can process in the time it takes to fill a 64 KiB pipe buffer. Then `fswatch` would (probably) block and some input events might be lost. The moral of the story is: make sure the script is fast enough! – Jonathan Leffler Jul 19 '17 at 21:47

0 Answers