94

I have a multithreaded function that I would like a status bar for using tqdm. Is there an easy way to show a status bar with ThreadPoolExecutor? It is the parallelization part that is confusing me.

import concurrent.futures

def f(x):
    return x**2

my_iter = range(1000000)

def run(f, my_iter):
    with concurrent.futures.ThreadPoolExecutor() as executor:
        results = list(executor.map(f, my_iter))
    return results

run(f, my_iter) # wrap tqdm around this function?
max
    you can use `from tqdm.contrib.concurrent import process_map` see https://stackoverflow.com/questions/41920124/multiprocessing-use-tqdm-to-display-a-progress-bar/59905309#59905309 – dina Mar 15 '21 at 06:46
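tqdm also ships ready-made wrappers in tqdm.contrib.concurrent. A minimal sketch using thread_map, the thread-based counterpart of the process_map mentioned in the comment (the function f here is just an illustration):

```python
from tqdm.contrib.concurrent import thread_map

def f(x):
    return x**2

# thread_map wraps ThreadPoolExecutor.map and tqdm in a single call
# and returns the results as a list, in input order
results = thread_map(f, range(1000), max_workers=8)
```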

5 Answers

128

You can wrap tqdm around the executor as follows to track progress:

list(tqdm(executor.map(f, my_iter), total=len(my_iter)))

Here is your example:

import time  
import concurrent.futures
from tqdm import tqdm

def f(x):
    time.sleep(0.001)  # to visualize the progress
    return x**2

def run(f, my_iter):
    with concurrent.futures.ThreadPoolExecutor() as executor:
        results = list(tqdm(executor.map(f, my_iter), total=len(my_iter)))
    return results

my_iter = range(100000)
run(f, my_iter)

And the result is like this:

16%|██▏           | 15707/100000 [00:00<00:02, 31312.54it/s]
Dat
    Thank you! The key seems to be the list() around tqdm, why is that the case? – dreamflasher Jan 12 '20 at 09:14
  • 4
    @DreamFlasher: That behavior is because tqdm runs on execution. Executor.map itself is only a generator. – R4h4 Jan 15 '20 at 08:00
  • 4
    Note that with this approach you will not get any output until the whole map has completed; you have to wait for the progress bar to finish before you see the results. – αԋɱҽԃ αмєяιcαη Jun 05 '20 at 22:34
  • 1
    the `total` argument in tqdm is important. Without it, we can not see the overall progress. – jdhao Sep 03 '20 at 07:42
  • This blocks time updates in the progress bar, is there a way to fix it? – Miguel Pinheiro Oct 22 '21 at 17:40
  • Just call update(0) to update the time – Miguel Pinheiro Oct 22 '21 at 18:12
  • Doesn't work for me. No progress bar shows at all. I have to put `tqdm` into the function I'm running which results in one progress bar for each process. – Sean Oct 21 '22 at 02:29
  • 1
    To get ordered results as they come in (and update the tqdm accordingly), use `multiprocessing.pool.ThreadPool.imap` instead of `concurrent.futures.ThreadPoolExecutor.map` (which has some [caveats](https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.Executor.map)). – ddelange Dec 16 '22 at 07:25
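The `imap` suggestion in the last comment can be sketched like this (a minimal illustration, not part of the original answers; `f` is assumed to be the question's squaring function):

```python
from multiprocessing.pool import ThreadPool
from tqdm import tqdm

def f(x):
    return x**2

my_iter = range(1000)
with ThreadPool(processes=8) as pool:
    # imap yields each result in input order as soon as it is ready,
    # so the bar advances smoothly and the results stay ordered
    results = list(tqdm(pool.imap(f, my_iter), total=len(my_iter)))
```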
73

The problem with the accepted answer is that ThreadPoolExecutor.map yields results in the order the tasks were submitted, not in the order they become available. So if the first invocation of f happens to be, for example, the last one to complete, the progress bar will sit at 0% and then jump to 100% all at once, only when all of the calls have completed. Much better is to use ThreadPoolExecutor.submit with as_completed:

import time
import concurrent.futures
from tqdm import tqdm

def f(x):
    time.sleep(0.001)  # to visualize the progress
    return x**2

def run(f, my_iter):
    l = len(my_iter)
    with tqdm(total=l) as pbar:
        # let's give it some more threads:
        with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
            futures = {executor.submit(f, arg): arg for arg in my_iter}
            results = {}
            for future in concurrent.futures.as_completed(futures):
                arg = futures[future]
                results[arg] = future.result()
                pbar.update(1)
    print(321, results[321])

my_iter = range(100000)
run(f, my_iter)

Prints:

321 103041

This is just the general idea. Depending upon the type of my_iter, it may not be possible to apply the len function to it directly without first converting it into a list. The main point is to use submit with as_completed.
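If my_iter has no len() (for example, a generator), one workaround is to take the total from the futures dict instead, since submitting consumes the iterable anyway. A sketch of this variation (not part of the original answer):

```python
import concurrent.futures
from tqdm import tqdm

def f(x):
    return x**2

def run(f, my_iter):
    with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
        # submitting consumes the iterable, so len(my_iter) is never needed
        futures = {executor.submit(f, arg): arg for arg in my_iter}
        results = {}
        for future in tqdm(concurrent.futures.as_completed(futures),
                           total=len(futures)):
            results[futures[future]] = future.result()
    return results

gen = (i for i in range(100))  # a generator: has no len()
results = run(f, gen)
```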

Booboo
  • Thanks! This really helped but out of some reason the progress bar stopped after a while? – shkelda Nov 01 '20 at 17:13
  • 1
    Just wanted to mention that with minor modifications (move to `def main()`) this works just as well with the [`ProcessPoolExecutor`](https://docs.python.org/3/library/concurrent.futures.html#processpoolexecutor), which can be much faster if `f(x)` actually does computation since it is not affected by the global interpreter lock. – leopold.talirz Nov 10 '20 at 00:02
  • 4
    Since someone just asked me, here is the code of the example adapted for the `ProcessPoolExecutor` https://gist.github.com/ltalirz/9220946c5c9fd920a1a2d81ce7375c47 – leopold.talirz Jul 20 '21 at 16:23
  • @leopold.talirz Of course, if it weren't for the call to `sleep` that was added solely to "visualize the result", function `f` is really a poor candidate even for multiprocessing since it is not CPU-intensive enough to justify the added overhead (that is, just calling `f` in a loop would be faster). The real point of the question as I understood was really about how to update the progress bar. But for what it's worth, with the call to `sleep`, multithreading does better than multiprocessing with *this particular f function* due to its reduced overhead. – Booboo Jul 20 '21 at 17:42
  • 2
    This blocks time updates in the progress bar, is there a way to fix it? – Miguel Pinheiro Oct 22 '21 at 17:40
  • Just call update(0) to update the time – Miguel Pinheiro Oct 22 '21 at 18:12
  • This is a good solution. However, I have a problem where the progress bar does not update if the above `run` function is executed more than once in the same jupyter notebook instance. Can anyone else confirm this problem and/or offer a solution? – Brian Pollack Feb 16 '22 at 15:06
6

The shortest way, I think:

from concurrent.futures import ThreadPoolExecutor
from tqdm import tqdm

with ThreadPoolExecutor(max_workers=20) as executor:
    results = list(tqdm(executor.map(myfunc, range(len(my_array))), total=len(my_array)))
user2988257
3

I tried the examples above, but the progress bar still failed for me. I found this approach, which works and stays short:

import concurrent.futures
from tqdm import tqdm

def tqdm_parallel_map(fn, *iterables):
    """Use tqdm to show progress."""
    with concurrent.futures.ProcessPoolExecutor() as executor:
        futures_list = []
        for iterable in iterables:
            futures_list += [executor.submit(fn, i) for i in iterable]
        for f in tqdm(concurrent.futures.as_completed(futures_list), total=len(futures_list)):
            yield f.result()


def multi_cpu_dispatcher_process_tqdm(data_list, single_job_fn):
    """Multi-CPU dispatcher."""
    output = []
    for result in tqdm_parallel_map(single_job_fn, data_list):
        output += result
    return output
butter
1

I find it more intuitive to use the update() method of tqdm; it keeps a human-readable structure:

from concurrent.futures import ThreadPoolExecutor
from tqdm import tqdm

with tqdm(total=len(mylist)) as progress:
    with ThreadPoolExecutor() as executor:
        for __ in executor.map(fun, mylist):
            progress.update()  # update the progress bar each time a job finishes

Since I don't care about the output of fun, I use __ as a throwaway variable.
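If you do care about the results, the same pattern collects them while keeping the bar's update() call; a small variation on the snippet above (fun and mylist are placeholder names):

```python
from concurrent.futures import ThreadPoolExecutor
from tqdm import tqdm

def fun(x):
    return x + 1

mylist = list(range(1000))

results = []
with tqdm(total=len(mylist)) as progress:
    with ThreadPoolExecutor() as executor:
        for result in executor.map(fun, mylist):
            results.append(result)  # map preserves input order
            progress.update()
```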

obchardon