
I'm using IBM's LSF platform to run my code in parallel. At the moment, this entails "manually" breaking the code into a job array; instead of:

x = [None] * 100
for i in range(100):
    x[i] = f(i)  # f is the simulation kernel

I distribute f over 100 workers, and then "manually" collect their 100 different results back into x.
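
On a single machine, the pattern I'm after is easy to express with the standard library's concurrent.futures (a minimal illustration; f here is just a stand-in for my real simulation function). What I'm missing is doing the same thing across LSF-managed nodes:

```python
from concurrent.futures import ProcessPoolExecutor

def f(i):
    # stand-in for the real (expensive) simulation kernel
    return i * i

if __name__ == "__main__":
    # distribute f over a pool of local worker processes,
    # then gather all the results back into x
    with ProcessPoolExecutor() as pool:
        x = list(pool.map(f, range(100)))
    print(x[:5])  # [0, 1, 4, 9, 16]
```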

I'm trying to understand if dask.distributed can be used as a "bridge" between my simulation kernel (an IPython kernel in this case) and the LSF scheduler in a way that distributes and gathers the calculations in an automated fashion.
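
What I'm imagining is something along these lines, using dask-jobqueue's LSFCluster to submit the workers as LSF jobs (an untested sketch; the queue, cores, memory, and walltime values are placeholders for whatever my cluster actually requires, and f is the simulation function from above):

```python
from dask.distributed import Client
from dask_jobqueue import LSFCluster

# Untested sketch: LSFCluster submits each dask worker as an LSF job.
# queue/cores/memory/walltime below are placeholder values.
cluster = LSFCluster(queue="normal", cores=1, memory="2GB", walltime="00:30")
cluster.scale(100)  # ask LSF for 100 workers

client = Client(cluster)
futures = client.map(f, range(100))  # distribute f over the workers
x = client.gather(futures)           # collect the 100 results back
```

Is this (or something like it) a supported workflow?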

I couldn't find any documentation on this... any help would be much appreciated!

Adam Haber

0 Answers