
I have several Python processes that monitor and act upon physical I/O, e.g. shutting down a motor if its current is too high. They need to let each other know why they have done something, so I thought a shared file might be a simple solution. The various processes can write to this file, and the others need to know when it has been written to.

I'm already using ConfigObj for static configuration files, so I thought I'd give it a try for dynamic files. The writes shouldn't occur very often: perhaps one per second at most, and usually much slower than that. I came up with this example, which seems to work.

import copy
import os.path
import threading
import time
from configobj import ConfigObj

class ConfigWatcher(threading.Thread):
    def __init__(self, watched_items):
        self.watched_items = watched_items
        self.config = self.watched_items['config']
        super(ConfigWatcher, self).__init__()
    def run(self):
        self.reload_config()
        while True:
            # First look for external changes
            if self.watched_items['mtime'] != os.path.getmtime(self.config.filename):
                print("external change detected")
                self.reload_config()
            # Then look for internal changes
            if self.watched_items['config'] != self.watched_items['copy']:
                print("internal change detected")
                self.save_config()
            time.sleep(.1)
    def reload_config(self):
        try:
            self.config.reload()
        except Exception:
            pass  # the file may be mid-write or unparsable; keep the old values
        self.watched_items['mtime'] = os.path.getmtime(self.config.filename)
        self.watched_items['copy'] = copy.deepcopy(self.config)
    def save_config(self):
        self.config.write()
        self.reload_config()

if __name__ == '__main__':
    from random import randint
    config_file = 'test.txt'
    with open(config_file, 'w') as openfile:
        openfile.write('x = 0 # comment\r\n')
    config = ConfigObj(config_file)
    watched_config = {'config': config}  # Dictionary to pass to the thread
    watcher = ConfigWatcher(watched_config)
    watcher.daemon = True  # make it a daemon so we can exit on ctrl-c
    watcher.start()
    time.sleep(.1) # Let the daemon get going
    while True:
        newval = randint(0, 9)
        print("is:{0} was:{1}, altering dictionary".format(newval, config['x']))
        config['x'] = newval
        time.sleep(1)
        with open(config.filename, 'w') as openfile:
            openfile.write('x = {0} # external write\r\n'.format(randint(10, 19)))
        time.sleep(1)
        print("is:{1} was:{0}".format(newval, config['x']))
        time.sleep(1)

My question is: is there a better/easier/cleaner way of doing this?

RyanN

2 Answers


Your approach is vulnerable to race conditions if you have multiple processes trying to monitor and update the same files.

I would tend to use SQLite for this, making a timestamped "log" table to record the messages. The "monitor" thread can just check the max timestamp or integer key value. Some would say this is overkill, I know, but I'm sure that once you have a shared database in the system you will find some other clever uses for it.

As a bonus, you get auditability; the history of changes can be recorded in the table.
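
A minimal sketch of that idea, using Python's standard-library sqlite3 module (the database file, table, and column names are just illustrative):

import sqlite3

# Writer: record why an action was taken.
conn = sqlite3.connect('shared.db')
conn.execute("CREATE TABLE IF NOT EXISTS log ("
             "id INTEGER PRIMARY KEY AUTOINCREMENT, "
             "ts DATETIME DEFAULT CURRENT_TIMESTAMP, "
             "message TEXT)")
with conn:  # the connection context manager commits for us
    conn.execute("INSERT INTO log (message) VALUES (?)",
                 ("motor stopped: overcurrent",))

# Monitor: poll for rows newer than the last id seen.
last_seen = 0
for row_id, message in conn.execute(
        "SELECT id, message FROM log WHERE id > ?", (last_seen,)):
    print(row_id, message)
    last_seen = row_id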

Bill Gribble
  • I do see how I could get a race condition. I'm hoping that since this is based on physical IO, this might be avoided. I do like the idea of using a database, but using Python with databases is outside of my experience and my database experience was primarily with dBase IV. I'll have to look into it. – RyanN Jan 19 '12 at 14:21
  • SQLite is pretty easy, and stores its database in a single file which you can poke around in with a command-line client. Python's interface is called "DB-API" when you are looking for documentation. There is a "pysqlite" package that implements it for SQLite3. – Bill Gribble Jan 19 '12 at 15:57
  • I ended up using a database after all. We're already using Postgres in the project, so I used that. Still have to poll() and NOTIFY (see the sketch below), but overall it is a lot cleaner. – RyanN Jan 27 '12 at 20:24
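
For reference, that poll()/NOTIFY listening loop looks roughly like this with psycopg2 (the connection string and channel name are placeholders):

import select
import psycopg2

conn = psycopg2.connect("dbname=test")  # placeholder connection string
conn.autocommit = True
cur = conn.cursor()
cur.execute("LISTEN config_changed;")
# Another process would run: NOTIFY config_changed, 'reason for the change'

while True:
    # Wait (up to 5 s) for the socket to become readable, then collect events.
    if select.select([conn], [], [], 5) != ([], [], []):
        conn.poll()
        while conn.notifies:
            notify = conn.notifies.pop(0)
            print("got NOTIFY:", notify.payload)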

For the use case you describe, i.e. isolation of and communication between multiple processes, you should seriously consider Redis as an alternative to a SQL RDBMS.

Quoting Ofer Bengal, CEO of Redis Labs,

"Redis is a key value database and [...] a data-structured engine"

In short,

  • It is very flexible, thanks to its general-purpose data management commands.
  • It supports transactions.
  • It is lightweight and fast.
  • You can disable persistence (no writes to disk), if necessary.
  • It is stable and available for multiple computing platforms.

For more information about Redis transactions, see the Redis documentation: https://redis.io/topics/transactions
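
As a rough sketch of how this could look for the original use case, assuming the redis-py client and a local server (the key, list, and channel names are illustrative):

import redis

r = redis.Redis()  # assumes a Redis server on localhost:6379

# Writer: update state and record the reason atomically (MULTI/EXEC).
pipe = r.pipeline()  # transactional by default
pipe.set('motor:state', 'off')
pipe.rpush('motor:log', 'shut down: overcurrent')
pipe.execute()
r.publish('events', 'motor:state changed')

# Monitor (in another process): block waiting for published messages.
sub = r.pubsub()
sub.subscribe('events')
for message in sub.listen():
    if message['type'] == 'message':
        print(message['data'])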

jose.angel.jimenez