You can use the wait() method of the ApplyResult object (which is what pool.apply_async returns).
import multiprocessing

def create_file(i):
    open(f'{i}.txt', 'a').close()

if __name__ == '__main__':
    # The default for the processes argument is the detected number of CPUs
    with multiprocessing.Pool() as pool:
        # Launch the first round of tasks, building a list of ApplyResult objects
        results = [pool.apply_async(create_file, (i,)) for i in range(50)]
        # Wait for every task to finish
        for result in results:
            result.wait()
        # start your next round of tasks... the pool is still available
    # when you reach here, the pool is closed
This approach works even if you're planning to use the pool again and don't want to close it yet; for example, you might want to keep it around for the next iteration of your algorithm. Just make sure you use a with statement or call pool.close() and pool.join() manually once you're done, or the worker processes will be left running.
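If you prefer to manage the pool's lifetime yourself rather than with a with statement, the manual pattern looks something like this (a minimal sketch reusing create_file from above; the second round of tasks is just a hypothetical example of reusing the pool):

import multiprocessing

def create_file(i):
    open(f'{i}.txt', 'a').close()

if __name__ == '__main__':
    pool = multiprocessing.Pool()
    try:
        # First round of tasks: wait for all of them before reusing the pool
        results = [pool.apply_async(create_file, (i,)) for i in range(50)]
        for result in results:
            result.wait()
        # Second round: the same pool is still available
        results = [pool.apply_async(create_file, (i,)) for i in range(50, 100)]
        for result in results:
            result.wait()
    finally:
        pool.close()  # stop accepting new tasks
        pool.join()   # wait for the worker processes to exit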