4

I have a Python script that works great when run by itself. Based on a hardcoded input directory, it scans for all .mdb files, puts them into a list, and then iterates through the list in a for loop. Each iteration involves multiple table restrictions, joins, queries, and more.

The only problem is that it takes about 36 hours to run on the input dataset. While this script will only ever be used for this dataset, I would still like to improve the performance, as I often edit field selections, results to include, join methods, etc. I would like to say it takes a long time because my script is inefficient, but any inefficiency would be small, since nearly ALL of the processing time is spent in the geoprocessor object.

All I have of relevance in my main script is:

indir = "D:\\basil\\input"
mdblist = createDeepMdbList(indir)
for infile in mdblist:
    processMdb(infile)

It also executes flawlessly when run sequentially.
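(`createDeepMdbList` isn't shown anywhere in the question; for reference, a minimal sketch of what such a helper might look like, using `os.walk`. Only the name and behaviour come from the question; the body is an assumption.)

import os

def createDeepMdbList(indir):
    # Recursively collect the full path of every .mdb file under indir.
    mdbs = []
    for root, dirs, files in os.walk(indir):
        for name in files:
            if name.lower().endswith('.mdb'):
                mdbs.append(os.path.join(root, name))
    return mdbs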

I have tried using Parallel Python:

import pp

ppservers = ()
job_server = pp.Server(ppservers=ppservers)

inputs = tuple(mdblist)
# functions and modules each job needs access to
functions = (preparePointLayer, prepareInterTable, jointInterToPoint,
             prepareDataTable, exportElemTables, joinDatatoPoint, exportToShapefile)
modules = ("sys", "os", "arcgisscripting", "string", "time")

fn = pp.Template(job_server, processMdb, functions, modules)
jobs = [(input, fn.submit(input)) for input in inputs]

It succeeds in creating 8 processes and 8 geoprocessor objects... and then fails.

I have not experimented extensively with the built-in Python multiprocessing tools, but was hoping for some guidance to simply spawn up to 8 processes working through the queue represented by mdblist. At no point would any file be written or read by multiple processes at the same time. To make things temporarily simpler, I have also removed all my logging tools because of this concern; I have run this script enough times to know that it works, except for 4 of the 4,104 input files, which have slightly different data formats.

Advice? Wisdom on trying to multithread Arc Python scripts?

BasilV
  • Have you tried the built-in `multiprocessing` module instead of Parallel Python? It should be as simple as `pool = multiprocessing.Pool(); results = pool.map(processMdb, filelist)`... It should work with ArcGIS et al., but I haven't actually tried it. Then again, I don't remember if any of the versions of Arc ship with a new enough version of Python to have `multiprocessing`... It only made it into the standard library in Python 2.6. – Joe Kington Feb 04 '11 at 03:38
  • Sorry, I should have mentioned versions... I am currently using Arc 9.3, which is locked to Python 2.5. There are some possible workarounds to get 2.6, but reports have not been 100% successful: [link](http://gis.stackexchange.com/questions/2226/can-i-use-python-2-6-with-arcgis-9-3). ArcMap 10 is available to me and ships with 2.6, but it also changes the Python module it uses, and I have been hesitant to upgrade because of the numerous older scripts that I would have to update and test. @Joe – BasilV Feb 04 '11 at 18:16
  • For what it's worth, there is a backport of multiprocessing for Python 2.5: http://code.google.com/p/python-multiprocessing/ I recall it having a few issues that don't exist in the standard-library version in 2.6, though... It's worth a try, I suppose... Good luck, regardless! – Joe Kington Feb 04 '11 at 19:00
  • @Joe - The backport works incredibly well! No hiccups/bugs that I've found so far. I can finally use more than 12.5% of the CPU on our 2x4-core data processing machine :). This is probably going to tip the scales toward Arc 10 to get multiprocessing natively. Thanks for your help and the link. – BasilV Feb 07 '11 at 23:51

2 Answers

5

Thought I'd share what ended up working for me, and my experiences along the way.

Using the backport of the multiprocessing module (code.google.com/p/python-multiprocessing), as per Joe's comment, worked well. I had to change a couple of things in my script to deal with local/global variables and logging.
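(The logging changes aren't shown in the answer; one common workaround, purely an assumption here, is to give each worker its own log file keyed by process name, so no two processes ever write to the same file. `setupWorkerLogging` is a hypothetical helper, assuming the backport mirrors the 2.6 `multiprocessing` API.)

import os
import logging
import multiprocessing

def setupWorkerLogging(logdir):
    # Hypothetical helper: one log file per worker process, so that
    # several processes never write to the same file at the same time.
    name = multiprocessing.current_process().name  # e.g. 'PoolWorker-1'
    logging.basicConfig(
        filename=os.path.join(logdir, 'worker_%s.log' % name),
        level=logging.INFO,
        format='%(asctime)s %(levelname)s %(message)s')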

Main script is now:

import multiprocessing

if __name__ == '__main__':

    indir = r'C:\basil\rs_Rock_and_Sediment\DVD_Data\testdir'
    mdblist = createDeepMdbList(indir)

    processes = 6  # set the number of processes to use here
    pool = multiprocessing.Pool(processes)

    pool.map(processMdb, mdblist)

Total time went from ~36 hours to ~8 using 6 processes.

Some issues I encountered: because separate processes have separate memory spaces, global variables are not shared at all. Queues can be used to pass data between processes, but I have not implemented this, so everything is just declared locally (a minimal sketch of the Queue approach follows).
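(For completeness, a rough sketch of the Queue approach mentioned above; this is not what the final script did. `processMdb` and `createDeepMdbList` come from the question, and the directory literal is reused from the main script above.)

import multiprocessing
from Queue import Empty  # Python 2: the Empty exception lives in the Queue module

def worker(task_queue):
    # Each worker drains .mdb paths from the shared queue until it is empty.
    while True:
        try:
            infile = task_queue.get_nowait()
        except Empty:
            break
        processMdb(infile)  # processMdb comes from the original script

if __name__ == '__main__':
    tasks = multiprocessing.Queue()
    for infile in createDeepMdbList(r'C:\basil\rs_Rock_and_Sediment\DVD_Data\testdir'):
        tasks.put(infile)
    procs = [multiprocessing.Process(target=worker, args=(tasks,))
             for _ in range(6)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

Because the queue is filled completely before the workers start, an empty queue really does mean there is no work left, so the workers can simply exit.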

Furthermore, since pool.map can only pass a single argument to the worker function, each iteration must create and then delete the geoprocessor object, rather than creating 8 gp objects up front and handing an available one to each iteration (one alternative is sketched below). Each iteration takes about a minute, so the couple of seconds spent creating the gp is not a big deal, but it adds up. I have not done any concrete tests, but this could actually be good practice, as anyone who has worked with ArcGIS and Python will know that scripts slow down drastically the longer the geoprocessor is active (e.g. one of my scripts was used by a co-worker who overloaded the input, and the time estimate to completion went from 50 hours after 1 hour of run time, to 350 hours after running overnight, to 800 hours after running for 2 days... it got cancelled and the input restricted).
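(If one gp per worker process, rather than per file, were the goal, Pool's initializer hook in the 2.6 API offers a way around the one-argument limitation: the initializer runs once in each worker and stashes the gp in a module-level global. This is a sketch, not what the answer actually did; `processMdbSharedGp` is a hypothetical variant of `processMdb`.)

import multiprocessing
import arcgisscripting

gp = None  # filled in once per worker process by the initializer

def initWorker():
    # Runs once in each worker process when the pool starts up.
    global gp
    gp = arcgisscripting.create(9.3)  # ArcGIS 9.3 geoprocessor

def processMdbSharedGp(infile):
    # Hypothetical variant of processMdb that reuses the per-process gp
    # instead of creating and deleting one on every call.
    gp.workspace = infile
    # ... table restrictions, joins, queries as in the original processMdb ...

if __name__ == '__main__':
    mdblist = createDeepMdbList(r'C:\basil\rs_Rock_and_Sediment\DVD_Data\testdir')
    pool = multiprocessing.Pool(6, initializer=initWorker)
    pool.map(processMdbSharedGp, mdblist)

Given the slowdown described above for long-lived geoprocessors, creating the gp per file may still be the safer default; this just shows the alternative.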

Hope that helps anyone else looking to multiprocess a large iterable input :). Next step: recursive, multiprocessed appends!

BasilV
-1

I compared the above methods on the same function. The results:

Workers   Parallel Python (s)   multiprocessing.Pool (s)
1         4.625                 5.313
2         2.437                 3.125
4         2.421                 3.562
8         2.375                 4.500
16        2.438                 5.922
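(The test code behind these timings isn't shown; a minimal harness along these lines, with a stand-in CPU-bound task, is one way to produce such a comparison for the multiprocessing side. The task and task count here are entirely assumptions.)

import time
import multiprocessing

def work(n):
    # Stand-in CPU-bound task; the answer's real test function is not shown.
    return sum(i * i for i in xrange(n))

if __name__ == '__main__':
    tasks = [200000] * 64
    for procs in (1, 2, 4, 8, 16):
        print 'Starting mul_pool with %d' % procs
        pool = multiprocessing.Pool(procs)
        start = time.time()
        pool.map(work, tasks)
        pool.close()
        pool.join()
        print 'Time elapsed: ', time.time() - start, 's'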
mrcktz