The main goal is to keep the script running, without blocking or interrupting other tasks when a ConnectionError is raised, even during a temporary absence of the Internet connection.
While implementing error handling in my script for sending asynchronous HTTP requests, I came across weird behavior when the ConnectionError exception is raised:
When the Internet connection is absent, the greenlets seem to lose the concurrency of the requests, and only a single request is made at a time, losing the simultaneity of the tasks.
Here is an example that I hope makes everything clear:
# Python 3.4, Windows 7 64-bit
import gevent.monkey
gevent.monkey.patch_all()  # patch the standard library so requests cooperates with gevent

from gevent.pool import Pool
import requests
import time
import os

# Test URL: the server waits ten seconds before answering
url = "http://pythonrequest.altervista.org/ten_seconds_delayed.php"

def task(args):
    # args carries [index, url], but the request uses the module-level url
    try:
        r = requests.get(url)
    except requests.exceptions.ConnectionError as e:
        print("[ERROR]: ", e)

pool = Pool(10)  # up to ten greenlets at once
start_time = time.time()
for i in range(10):
    pool.spawn(task, [i, url])
print("Tasks sent to the pool, waiting for completion...")
pool.join()  # block until every greenlet has finished
seconds_elapsed = time.time() - start_time
print("Tasks completed")
print("seconds_elapsed: ", seconds_elapsed)
os.system("PAUSE")  # keep the console window open on Windows
I ran the script twice, under two conditions.
The first time, with a working Internet connection, the execution time was about 10 seconds.
The second time, with the Internet connection disabled (Windows -> Devices -> Net Device -> Disable), the execution time came to around 100 seconds.
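To see the serialization directly, here is a diagnostic variant of task (a hypothetical sketch, not part of the runs above) that logs when each request starts and finishes; if the greenlets really run concurrently, all the start timestamps should be close together:

def task(args):
    i = args[0]
    # start_time is the module-level value set before the spawn loop
    print("task %d started at +%.2fs" % (i, time.time() - start_time))
    try:
        r = requests.get(url)
    except requests.exceptions.ConnectionError as e:
        print("[ERROR]: ", e)
    print("task %d finished at +%.2fs" % (i, time.time() - start_time))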
Why 100 seconds? Shouldn't it be 10 seconds, since the timeout for a single request is 10 seconds?
It looks as if the execution time is timeout_of_request * number_of_requests: 10 seconds × 10 requests = 100 seconds.
In the second case, I'm expecting the execution time to be just timeout_of_request.
If the requests were simultaneous, they should take around 10 seconds in total to complete; otherwise, doesn't it mean they are not simultaneous?
What I need is to preserve the simultaneity of the tasks. Any suggestion?
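For clarity: the ten seconds I mention come from the server-side delay of the test URL; the script itself sets no explicit client-side timeout. A bounded version of the request would look like the sketch below, using requests' standard timeout parameter (whether this changes the behavior in the no-connection case is exactly what I'm unsure about):

def task(args):
    try:
        # Give up after 10 seconds instead of waiting for the OS-level connect timeout
        r = requests.get(url, timeout=10)
    except requests.exceptions.RequestException as e:
        # RequestException also covers Timeout, not just ConnectionError
        print("[ERROR]: ", e)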