
I have a *.shp file that I've uploaded and am using as part of my model (calculating shortest paths). It is quite a big shapefile, with thousands of road links, and with intersections and bridges represented by nodes. I was hoping to speed up BehaviorSpace by not loading this map on every run, so I created a separate procedure for loading the map, defining link weights, etc. That procedure calls clear-all and reset-ticks, so everything is effectively wiped whenever I load a new map. In the setup procedure I define the turtle attributes for each run; between runs I use clear-all-plots, clear-output and reset-ticks.

When I run this model, BehaviorSpace starts to slow down after a few setups, even with table output. However, if I combine load-map and setup-model into one procedure, i.e. the map is reloaded for every BehaviorSpace run, then the speed is maintained throughout.

Example - runs slow, but the map is not reloaded every time:

to load-map
  clear-all
  ;; ... code for loading map
  reset-ticks
end

to setup-model
  clear-all-plots
  clear-output
  ;; ... code for setting up turtle variables
  reset-ticks
end

Example (maintains speed, but has to reload the map on every run):

to setup
  clear-all
  ;; ... code for loading map
  ;; ... code for setting up turtle variables
  reset-ticks
end

My question: am I missing something that would help to speed things up while not having to reload the map?

  • “When I run this model, BehaviorSpace starts to slow down after a few setups” — it's not clear to me why that would be. Do you have a hypothesis? – Seth Tisue Jul 03 '14 at 16:07
  • @Seth: I notice that an increasingly large amount of memory is used, from 350,000 KB when the model is opened and the maps are loaded up to 1,100,000 KB at the end of, say, 2000 runs. This depends on simulation length, i.e. it is lower for shorter simulations. I would have thought the memory allocation would stay relatively constant for each run. I also noticed that the table-output file size remains the same for the duration of the simulation. Should this rise, since it's written to continually? – Simon Bush Jul 08 '14 at 05:28
  • re: memory, that sounds like a very good clue. The slowness is probably just a side effect of the ever-increasing memory usage — which is abnormal, so you should try to isolate the cause of that. You've disabled spreadsheet-format output, right? See http://stackoverflow.com/q/24397338/86485 – Seth Tisue Jul 08 '14 at 11:57
  • re: the file size of the table file, that sounds to me like you're using an inaccurate method of observing the size, rather than a real issue. – Seth Tisue Jul 08 '14 at 11:57
  • as you pointed out over at http://stackoverflow.com/a/24410462/86485, perhaps this is https://github.com/NetLogo/NW-Extension/issues/102 – Seth Tisue Jul 08 '14 at 23:53
  • One last thought: between BehaviorSpace runs I no longer call clear-turtles, so I still have my original road nodes (5800), highway links (7800), bridges (2000) and 10201 patches. With each run I make my vehicles die (only 10 at this stage), but this stacks up by run 600. Will this create an increasingly long list of turtles, i.e. original turtle, nobody ... nobody, new turtle? If so, it would explain why the refresh-map model still runs out of memory, only more slowly than the keep-map version, which has the added issue of the NW extension. In that case I guess refreshing the lot is my only option. – Simon Bush Jul 09 '14 at 21:45
  • At least as far as I know, there is no bug in NetLogo that makes memory usage proportional to the total number of turtles created since clear-turtles, rather than proportional to the number of living turtles. – Seth Tisue Jul 10 '14 at 10:55

1 Answer


Not knowing anything else about your model, I wonder if you essentially have a "memory leak", with lots of information accumulating in global variables that are not getting purged every time by the setup-model procedure. Are there perhaps other global variables you can explicitly reinitialize in setup-model that might help free up some of this space? For instance, do you have large tables hanging around between runs that only gain more key-value pairs and never get trimmed back down? Just a thought.
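As a minimal sketch of what I mean (the route-cache global and the table extension here are just illustrative stand-ins for whatever per-run data your model actually accumulates), you could rebuild such structures explicitly in setup-model:

extensions [ table ]

globals [ route-cache ]   ;; hypothetical global that could keep growing across runs

to setup-model
  clear-all-plots
  clear-output
  set route-cache table:make   ;; rebuild per-run structures instead of letting old entries pile up
  ;; ... code for setting up turtle variables
  reset-ticks
end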

I almost always define a clear-most procedure that clears everything out except the big data I don't want to load or compute every time. Unfortunately, that means spelling out the variables to reinitialize in detail, but I like to free as much as possible between runs to keep things speedy. -- Glenn
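A rough sketch of that pattern applied here, assuming (purely as an illustration) a vehicles breed for the per-run agents and the route-cache global from the sketch above, with the road network itself left untouched:

to clear-most
  ask vehicles [ die ]          ;; hypothetical per-run breed; road nodes, links and bridges survive
  clear-all-plots
  clear-output
  clear-drawing
  set route-cache table:make    ;; reinitialize per-run globals explicitly
  reset-ticks
end

BehaviorSpace's setup commands for each run could then call clear-most (instead of clear-all) before the per-run setup, so only the big map data persists between runs.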
