
cf. FSEvents on OSX, which by default collects FS events over 1 second (timeout configurable) before firing off the event.

This has the benefit of collecting a series of filesystem changes into a single event (so the script won't run more than it needs to), at the cost of latency.

For instance, saving a file in Vim modifies many temp files in addition to the file itself: it tends to delete a buffer file, update an undo file, and create and then erase a test file called 4193. On OSX, a small tool that uses this API (such as my fork of fswatch) collapses all of these into one "batch event", whereas with inotifywait -m every event I specify arrives on the stream as a separate line, making the events hard to group without external processing.

I'm pretty sure the solution is just to wrap it and do this processing myself, but I was hoping there was a hidden feature to specify a timeout like FSEvents allows.

reevesy
Steven Lu

3 Answers


I'm actually starting to believe that this sort of thing should not be within the scope of inotify's features.

I haven't quite found the proper solution, but it looks like there's an elegant way to do it. Here's my starting point, which quits if nothing is seen within a second; I want something that accumulates events over one second instead.

Currently doing some testing with this. Here's some test scripting I've got working quite well.

group=0
( for val in {1..10}; do
    echo "$RANDOM/10000" | bc | xargs sleep
    echo "$val"
  done ) |
while true; do
  while read -t 1 line; do
    echo "read $group $line"
  done
  ((group++))
done
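The `read -t` trick above generalizes into a small debounce function that batches any line stream over quiet periods. This is a sketch only: the `debounce` name and the "batch:" output format are mine, not part of any standard tool.

```shell
#!/usr/bin/env bash

# Collect lines from stdin until the stream goes quiet for one second,
# then emit the accumulated batch as a single line. Pipe something like
# `inotifywait -m` into this to group bursts of filesystem events.
debounce() {
  local -a batch=()
  local line status
  while true; do
    if IFS= read -r -t 1 line; then
      batch+=("$line")            # still within the quiet-period window
    else
      status=$?
      # Flush whatever accumulated before the pause (or before EOF)
      if ((${#batch[@]})); then
        echo "batch: ${batch[*]}"
        batch=()
      fi
      # read exits with >128 on timeout; anything else (e.g. 1) is EOF
      ((status > 128)) || break
    fi
  done
}
```

For example, `inotifywait -m -r /path --format '%w%f %e' | debounce` would print one line per burst of events rather than one line per event.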
Steven Lu
    `inotify` doesn't support the feature you describe, you should implement it in application layer by yourself. – zeekvfu Dec 06 '13 at 14:29

I implemented https://github.com/bronger/watchdog, which may help people with this use case. “watchdog” allows you to accumulate events before firing. Moreover, it bundles equivalent events (e.g. multiple writes to the same file, or deleting a file immediately after changing it). When firing, it calls one of three scripts: “copy” (one file was changed), “delete” (one file/directory was deleted), or “bulk_sync” (anything else). watchdog keeps collecting events even while the script is running, so nothing gets lost.

I wrote it for efficient synchronisation of local changes with a remote computer, but I also use it for other things by simply symlinking all three scripts to the same one.
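The event-bundling idea — keeping only the net effect per path — can be sketched in a few lines of bash. This is purely an illustration of the concept, not code taken from watchdog itself.

```shell
#!/usr/bin/env bash

# Sketch of event coalescing: for each path, remember only the most
# recent event, so repeated writes to the same file collapse into one
# entry and a CREATE followed by DELETE ends up as a single DELETE.
# (Illustrative only; a real tool would track more state per path.)
declare -A latest
while read -r event path; do
  latest["$path"]=$event
done <<'EOF'
MODIFY /tmp/a
MODIFY /tmp/a
CREATE /tmp/b
DELETE /tmp/b
EOF

# Emit the coalesced event list (associative-array order is unspecified)
for path in "${!latest[@]}"; do
  echo "$path -> ${latest[$path]}"
done
```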

Torsten Bronger

I had a similar problem recently and wanted to stay light on dependencies, so I came up with the script below. inotifywait emits every event that happens in your watched directory, but you can format its output. So I format the output as the event's Unix timestamp and compare that to a timer to cap how frequently the desired "sync" command runs.

#!/usr/bin/env bash

set -e  # exit on errors

# Batch changes every 15s.
next_allowed_run=$(date +%s)
batch_window_seconds=15
inotifywait \
  --monitor /path/to/folder \
  --recursive \
  --event=create \
  --event=modify \
  --event=attrib \
  --event=delete \
  --event=move \
  --format='%T' \
  --timefmt='%s' |
    while read -r event_time; do
      # If events arrive before the next allowed command-run, just skip them.
      if [[ $event_time -ge $next_allowed_run ]]; then
        next_allowed_run=$(date --date="${batch_window_seconds}sec" +%s)
        sleep $batch_window_seconds  # Wait for additional changes
        foobobulate /path/to/folder
      fi
    done
John