docker stats inherently takes a little while; a large part of that time is spent waiting for the next value to come through the stats stream:
$ time docker stats 1339f13154aa --no-stream
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
...
real 0m1.556s
user 0m0.020s
sys 0m0.015s
You could reduce the total execution time by running the commands in parallel rather than one at a time. To achieve this, you could use the threading or multiprocessing modules from the standard library. DigitalOcean provides a good tutorial on how to accomplish this with a ThreadPoolExecutor:
import requests
import concurrent.futures


def get_wiki_page_existence(wiki_page_url, timeout=10):
    response = requests.get(url=wiki_page_url, timeout=timeout)

    page_status = "unknown"
    if response.status_code == 200:
        page_status = "exists"
    elif response.status_code == 404:
        page_status = "does not exist"

    return wiki_page_url + " - " + page_status


wiki_page_urls = [
    "https://en.wikipedia.org/wiki/Ocean",
    "https://en.wikipedia.org/wiki/Island",
    "https://en.wikipedia.org/wiki/this_page_does_not_exist",
    "https://en.wikipedia.org/wiki/Shark",
]

with concurrent.futures.ThreadPoolExecutor() as executor:
    futures = []
    for url in wiki_page_urls:
        # submit() schedules the call on the pool and returns a Future immediately.
        futures.append(executor.submit(get_wiki_page_existence, wiki_page_url=url))
    # as_completed() yields futures in the order they finish, not the order submitted.
    for future in concurrent.futures.as_completed(futures):
        print(future.result())
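Applying the same pattern to docker stats itself, a minimal sketch might look like the following. It shells out to the docker CLI with subprocess; the container IDs are placeholders (only 1339f13154aa comes from the timing example above, the rest are made up), so substitute your own:

import concurrent.futures
import subprocess

# Placeholder IDs - only the first comes from the timing example above.
container_ids = ["1339f13154aa", "0123456789ab", "fedcba987654"]


def get_container_stats(container_id):
    # Each invocation still takes ~1.5s on its own, but the calls
    # now overlap instead of running back to back.
    result = subprocess.run(
        ["docker", "stats", container_id, "--no-stream"],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout


with concurrent.futures.ThreadPoolExecutor() as executor:
    futures = [executor.submit(get_container_stats, cid) for cid in container_ids]
    for future in concurrent.futures.as_completed(futures):
        print(future.result())

With this approach the total wall time is roughly that of the slowest single call rather than the sum of all calls, since the threads spend almost all of their time blocked waiting on the docker CLI.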