My optimizations take several hours to solve on a high-performance server. I want to save the solutions to disk on the server, then reload them into a model instance on my laptop. That would let me interactively explore the results at a Python command line and develop or test freshly written export code. The pyomo library has plenty of functions for dumping data to disk, but no clear way to load that data back into a different runtime environment.
My ideal solution would be to pickle the instance after the solution has been loaded into it, so that it encapsulates the inputs, the solution, and the entire runtime state. Unfortunately, the instance has attached methods that are not picklable, and the pyomo team hasn't written custom pickling functions for it. The workaround advised in a 2015 forum thread is to pickle the results object instead. In v5.1.1 the results object doesn't contain the solution by default, but another post explains how to address that.
I managed to piece together a solution that works in pyomo v5.1.1, but wonder if there is a better way.
Save
import pickle
# ...Define abstract model
# ...Load data from input directory and create model instance
# Solve, get a results object that only contains execution metadata
results = opt.solve(instance)
# Load solution data into results object
instance.solutions.store_to(results)
# Archive results: solution & execution metadata
with open("results.pickle", "wb") as f:
    pickle.dump(results, f)
Reload
... synchronize the code, inputs directory, and results.pickle file from the server to my laptop.
import pickle
# ...Define abstract model
# ...Load data from input directory and create model instance
# Load results from pickle: metadata & solution
# (note "rb", not "wb" -- opening with "wb" would truncate the file)
with open("results.pickle", "rb") as f:
    results = pickle.load(f)
# Load solution data into instance object
instance.solutions.load_from(results)
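For reference, the pickle round-trip itself can be sketched without pyomo at all, using a plain dictionary as a hypothetical stand-in for the results object. The server-side dump and laptop-side load are symmetric; the only trap is the file mode:

```python
import os
import pickle
import tempfile

# Hypothetical stand-in for the pyomo results object: any picklable
# structure survives the dump/load round trip unchanged.
results = {"objective": 42.0, "x": {("a", 1): 0.5, ("b", 2): 1.5}}

path = os.path.join(tempfile.mkdtemp(), "results.pickle")

# "Server" side: serialize in binary write mode ("wb")
with open(path, "wb") as f:
    pickle.dump(results, f)

# "Laptop" side: deserialize in binary read mode ("rb")
with open(path, "rb") as f:
    reloaded = pickle.load(f)

# The reloaded object is an equal but independent copy
assert reloaded == results
print(reloaded["objective"])  # → 42.0
```

The same pattern applies unchanged once `results` is the object returned by `opt.solve()` with the solution stored into it, provided the pickling and unpickling environments use compatible pyomo versions.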