Your iterator, it, has to produce single values (each value can be "complex", such as a tuple or a list). Right now we have:
>>> it
<itertools.imap object at 0x000000000283DB70>
>>> list(it)
[<itertools.ifilter object at 0x000000000283DC50>, <itertools.ifilter object at 0x000000000283DF98>, <itertools.ifilter object at 0x000000000283DBE0>, <itertools.ifilter object at 0x000000000283DF60>, <itertools.ifilter object at 0x000000000283DB00>, <itertools.ifilter object at 0x000000000283DCC0>, <itertools.ifilter object at 0x000000000283DD30>, <itertools.ifilter object at 0x000000000283DDA0>, <itertools.ifilter object at 0x000000000283DE80>, <itertools.ifilter object at 0x000000000284F080>]
Each iteration of it would produce another iterator, and that is the cause of your problem.
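You can see this directly in an interactive session (the addresses will differ on your machine): each element of it is itself an ifilter object, and only iterating that inner object yields the actual values:
>>> from itertools import imap, ifilter
>>> it = imap(lambda x: ifilter(lambda y: x+y > 10, xrange(10)), xrange(10))
>>> inner = next(it)        # the inner iterator for x = 0
>>> inner
<itertools.ifilter object at 0x...>
>>> list(inner)             # no y satisfies 0 + y > 10
[]
>>> list(next(it))          # x = 1: 1 + 9 is not > 10
[]
>>> list(next(it))          # x = 2: only y = 9 satisfies 2 + y > 10
[9]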
So you have to "iterate your iterators":
import multiprocessing
from itertools import imap, ifilter
import sys

def test(t):
    return 't = ' + str(t)  # return value rather than printing

if __name__ == '__main__':  # required for Windows
    mp_pool = multiprocessing.Pool(multiprocessing.cpu_count())
    it = imap(lambda x: ifilter(lambda y: x+y > 10, xrange(10)), xrange(10))
    for the_iterator in it:
        result = mp_pool.map(test, the_iterator)
        print result
    mp_pool.close()  # needed to ensure all processes terminate
    mp_pool.join()   # needed to ensure all processes terminate
The results printed, given how you have defined it, are:
[]
[]
['t = 9']
['t = 8', 't = 9']
['t = 7', 't = 8', 't = 9']
['t = 6', 't = 7', 't = 8', 't = 9']
['t = 5', 't = 6', 't = 7', 't = 8', 't = 9']
['t = 4', 't = 5', 't = 6', 't = 7', 't = 8', 't = 9']
['t = 3', 't = 4', 't = 5', 't = 6', 't = 7', 't = 8', 't = 9']
['t = 2', 't = 3', 't = 4', 't = 5', 't = 6', 't = 7', 't = 8', 't = 9']
But if you want to get the most out of multiprocessing (assuming you have enough processors), then you would use map_async so that all of the jobs can be submitted at once:
import multiprocessing
from itertools import imap, ifilter
import sys

def test(t):
    return 't = ' + str(t)  # return value rather than printing

if __name__ == '__main__':  # required for Windows
    mp_pool = multiprocessing.Pool(multiprocessing.cpu_count())
    it = imap(lambda x: ifilter(lambda y: x+y > 10, xrange(10)), xrange(10))
    results = [mp_pool.map_async(test, the_iterator) for the_iterator in it]
    for result in results:
        print result.get()
    mp_pool.close()  # needed to ensure all processes terminate
    mp_pool.join()   # needed to ensure all processes terminate
Or you might consider using mp_pool.imap, which, unlike mp_pool.map_async, does not first convert the iterable argument to a list in order to compute an optimal chunksize value for submitting jobs (the documentation on this is not great), but instead defaults to a chunksize of 1, which is usually not desirable for very large iterables:
results = [mp_pool.imap(test, the_iterator) for the_iterator in it]
for result in results:
    print list(result)  # to get a comparable printout as when using map_async
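If the per-iterator workload were large, you could also pass an explicit chunksize to imap; the value of 10 below is only an illustrative guess that you would tune for your own data:
results = [mp_pool.imap(test, the_iterator, chunksize=10) for the_iterator in it]
for result in results:
    print list(result)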
Update: Use multiprocessing to generate lists
import multiprocessing
from itertools import imap, ifilter
import sys

def test(t):
    return 't = ' + str(t)  # return value rather than printing

def generate_lists(x):
    return list(ifilter(lambda y: x+y > 10, xrange(10)))

if __name__ == '__main__':  # required for Windows
    mp_pool = multiprocessing.Pool(multiprocessing.cpu_count())
    lists = mp_pool.imap(generate_lists, xrange(10))
    # lists, returned by mp_pool.imap, is an iterable;
    # as each element of lists becomes available it is passed to test:
    results = mp_pool.imap(test, lists)
    # print each result as it becomes available:
    for result in results:
        print result
    mp_pool.close()  # needed to ensure all processes terminate
    mp_pool.join()   # needed to ensure all processes terminate
Prints:
t = []
t = []
t = [9]
t = [8, 9]
t = [7, 8, 9]
t = [6, 7, 8, 9]
t = [5, 6, 7, 8, 9]
t = [4, 5, 6, 7, 8, 9]
t = [3, 4, 5, 6, 7, 8, 9]
t = [2, 3, 4, 5, 6, 7, 8, 9]
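As an aside, imap, ifilter and xrange only exist in Python 2. If you are on Python 3, the final example can be adapted roughly as follows (an untested sketch; range is already lazy and print is a function):
import multiprocessing

def test(t):
    return 't = ' + str(t)  # return a value rather than printing

def generate_lists(x):
    # same filter as the ifilter/xrange version above
    return [y for y in range(10) if x + y > 10]

if __name__ == '__main__':  # required for Windows
    mp_pool = multiprocessing.Pool(multiprocessing.cpu_count())
    lists = mp_pool.imap(generate_lists, range(10))
    for result in mp_pool.imap(test, lists):
        print(result)
    mp_pool.close()  # needed to ensure all processes terminate
    mp_pool.join()   # needed to ensure all processes terminate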