I am currently using joblib's Parallel and it works great locally, but I am running into issues when I run on the server because the worker processes are not killed.
I have a function that is called on a list of tuples. I will use a simplified example with numbers, but my actual function is more complicated, and that is why multiprocessing is necessary. Each tuple holds two numbers, and function_to_call returns their product and their sum.
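For concreteness, here is a minimal stand-in for function_to_call (my real function does much heavier work):

def function_to_call(a, b):
    # simplified stand-in: return the product and the sum of the two numbers
    return a * b, a + b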
from joblib import Parallel, delayed

lst_of_tuples = [(10, 5), (0.5, 10), (8, 3)]

def multiprocess(lst_of_tuples):
    # run function_to_call on every tuple, using all available cores
    return Parallel(n_jobs=-1)(delayed(function_to_call)(t[0], t[1]) for t in lst_of_tuples)

products, sums = list(map(list, zip(*multiprocess(lst_of_tuples))))
The expected result is products == [50, 5, 24] and sums == [15, 10.5, 11].
Is it possible to convert this to use a multiprocessing Pool, so that I can call pool.close() and pool.join() and make sure all worker processes are killed?
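Roughly, this is what I have in mind, as a sketch rather than tested code. I am assuming Pool.starmap can replace the delayed(...) generator, since each tuple just needs to be unpacked into the function's arguments:

from multiprocessing import Pool

def multiprocess(lst_of_tuples):
    pool = Pool()  # defaults to os.cpu_count() workers, similar to n_jobs=-1
    try:
        # starmap unpacks each tuple into function_to_call(a, b)
        results = pool.starmap(function_to_call, lst_of_tuples)
    finally:
        pool.close()  # stop accepting new tasks
        pool.join()   # wait for all worker processes to exit
    return results

The zip/unpack line above would then stay the same. Is this equivalent, and does close() followed by join() guarantee the workers are gone?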