
I have a Thread-extending class that is supposed to run only one instance at a time (cross-process). In order to achieve that, I'm trying to use a file lock. Here are bits of my code:

import fcntl
import logging
import os
from threading import Thread

class Scanner(Thread):

  def __init__(self, path):
    Thread.__init__(self)
    self.path = path
    self.lock_file = open(os.path.join(config.BASEDIR, "scanner.lock"), 'r+')
    fcntl.lockf(self.lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)

  # Stuff omitted

  def run(self):
    logging.info("Starting scan on %s" % self.path)

    # More stuff omitted

    fcntl.lockf(self.lock_file, fcntl.LOCK_UN)

I was expecting the `lockf()` call to throw an exception if a Scanner thread was already running, and not to initialize the object at all. However, I can see this in the terminal:

INFO:root:Starting scan on /home/felix/Music
INFO:root:Starting scan on /home/felix/Music
INFO:root:Scan finished
INFO:root:Scan finished

Which suggests that two Scanner threads are running at the same time, no exception thrown. I'm sure I'm missing something really basic here, but I can't seem to figure out what that is. Can anyone help?

Felix

3 Answers


Found the solution myself in the end: use `fcntl.flock()` instead of `fcntl.lockf()`, with the exact same parameters. Not sure why that made a difference.

Felix
  • Glad you figured it out, but that's odd. Historically `fcntl` is the more reliable; `flock()` is no longer an `fcntl` wrapper, though, and it could be due to how Python is handling threading. Make sure to accept your own answer! – Brian Roach Mar 20 '11 at 18:20
  • Odd indeed. I don't see how it would be a problem with Python's threading -- the process in `run()` takes a while (~10 seconds), in which time I can easily start another 6-7 threads doing the same thing when using the `lockf()` call. However, if I don't use the `LOCK_NB` flag, it does wait for the lock to be released by the other thread. I will accept my own answer when SO lets me :) – Felix Mar 21 '11 at 11:18
  • I ran into the same problem you did. It would not work correctly with `lockf()` but changing to `flock()` worked. I'm using Python 2.7.11 on Ubuntu 14.04 (running in Docker). – phansen Jul 13 '16 at 21:33

You're opening the lock file using `r+`, which is erasing the previous file and creating a new one. Each thread is locking a different file.

Use `w` or `a` instead.

Brian Roach
  • Are you closing that file somewhere? Basically, you're either somehow opening two different files, or you're releasing the lock somewhere. Your call to `fcntl.lockf` is correct syntactically and it should be doing what you expect. You might want to post your complete code. – Brian Roach Mar 20 '11 at 17:49
  • I'm not closing the file anywhere, and I'm holding it in an instance variable, so it's not garbage collected. I think the problem with `lockf()` is that it locks **a part of the file**, not the whole file itself (although without parameters, that's what it should do), and gets confused when you give it an empty file. I haven't tried to prove this theory (by writing something in the file). If I do, I'll update my answer. – Felix Mar 21 '11 at 11:13

Along with using `flock()`, I also had to open the file like so:

fd = os.open(lockfile, os.O_CREAT | os.O_TRUNC | os.O_WRONLY)

It does not work otherwise.
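Putting that together, a hedged sketch of the whole pattern (the lock-file path here is a placeholder, not from the original post):

```python
import fcntl
import os

# Hypothetical path, for illustration only
lockfile = "/tmp/scanner.lock"

# O_CREAT creates the file if it doesn't exist, O_TRUNC empties it,
# O_WRONLY opens it write-only -- then flock() locks the whole file
fd = os.open(lockfile, os.O_CREAT | os.O_TRUNC | os.O_WRONLY)
try:
    fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    # ... do the work that must not run twice ...
finally:
    fcntl.flock(fd, fcntl.LOCK_UN)
    os.close(fd)
```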

rags