My problem seems simple, but so far I haven't found a satisfactory answer. The code I am running is very time consuming, and I need to run it many times (ideally 100 trials or more) and average the results. I have been told to try multiprocessing, and I have made some progress (in JupyterLab).
# my_code.py
def Run_Code(trial):
    # 'trial' is the index that pool.map passes in; the body is the long-running computation
    <code>
    return result
import multiprocessing as mp
import numpy as np
import my_code as mc

trial_amount = 2

if __name__ == '__main__':
    # Two worker processes; pool.map blocks until every trial has returned
    with mp.Pool(2) as pool:
        result = pool.map(mc.Run_Code, np.arange(trial_amount))
    print(result)
I was guided by this introduction: https://sebastianraschka.com/Articles/2014_multiprocessing.html#sections. The ultimate goal is to run the trials simultaneously (or as many as possible at once, starting a new trial whenever one finishes), collect the results in a list, and then average them. When I tried this, it kept running for hours, far longer than a single trial takes, and never finished.
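For completeness, here is a minimal, self-contained sketch of the pattern I am aiming for. The worker function run_trial, the one-second sleep, and the returned value are all placeholders standing in for my real Run_Code; the only point is the structure: map the trials across a pool, collect the results in a list, and average them.

# run_trials.py -- illustrative sketch only; run_trial stands in for the real computation
import multiprocessing as mp
import time

import numpy as np


def run_trial(trial):
    # Placeholder for the expensive per-trial computation; returns one number per trial.
    time.sleep(1)
    return float(trial)


if __name__ == '__main__':
    trial_amount = 100

    # One worker per CPU core; each worker picks up a new trial as soon as it
    # finishes its current one, and map() blocks until all trials are done.
    with mp.Pool(processes=mp.cpu_count()) as pool:
        results = pool.map(run_trial, range(trial_amount))

    print('individual results:', results)
    print('average of all trials:', np.mean(results))

Running this as a plain script rather than inside the notebook is deliberate: with the spawn start method (the default on Windows and macOS) the worker function has to be importable by the child processes, which is also why Run_Code lives in my_code.py in my attempt above.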