
I'm using the Python coroutine library gevent with monkey patching to increase the concurrency of HTTP requests. But I noticed that the elapsed time of the responses increased as the concurrency increased. Below is the sample code:

import gevent
from gevent import monkey
import requests

monkey.patch_all(thread=False)


def action():
    resp = requests.get("https://www.google.com")
    if resp.status_code == 200:
        print(resp.elapsed.total_seconds())


jobs = []
for i in range(100):
    jobs.append(gevent.spawn(action))

gevent.joinall(jobs)

When 10 greenlets were spawned, the elapsed time was around 0.9 seconds, but when the number of greenlets was increased to 100, the elapsed time was around 1.6 to 2.0 seconds. Why did this happen?

Roger

1 Answer


Greenlets are still single-threaded, meaning only one of them can run at a time. Any process that is CPU-intensive will therefore incur a delay. Yes, it is asynchronous, but it is not multiprocess: if one greenlet uses 1 second of CPU, it delays the results of every subsequent greenlet by 1 second.

So as your queue grows, the delay, even if only milliseconds per task, becomes noticeable.
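The stacking delay described above can be sketched without any network at all. This is a minimal demo, assuming gevent is installed; cpu_task and finish_times are hypothetical names for illustration:

```python
import time
import gevent

t0 = time.time()
finish_times = []

def cpu_task(n=200_000):
    # Pure CPU work: this loop never yields to the gevent hub,
    # so each greenlet runs to completion before the next starts.
    sum(i * i for i in range(n))
    finish_times.append(time.time() - t0)

gevent.joinall([gevent.spawn(cpu_task) for _ in range(5)])

# Completion times climb steadily: with a single OS thread, each
# greenlet waits for every earlier CPU-bound greenlet to finish.
print(finish_times)
```

The later a greenlet sits in the queue, the larger its measured completion time, even though each task does the same amount of work.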

eatmeimadanish
  • Who might be the culprit holding the CPU here? The elapsed time of all the responses, even the first one, increased. The for loop in the main routine only takes ~2 ms to finish. – Roger May 08 '18 at 01:35
  • Every line of code takes processing time. Adding 100 elements to a queue, each doing something before the loop circles back and checks the first element again, is the root of your problem. Opening a socket and hitting a web server takes time; once a request is in a waiting state, gevent can continue to the next object. In this way it is async but still single-threaded; it is merely context switching behind the scenes. – eatmeimadanish May 08 '18 at 19:51
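The "waiting status" part of the comment above can be shown directly. This sketch (again assuming gevent is installed) uses a monkey-patched time.sleep as a stand-in for a socket wait; io_task is a hypothetical name:

```python
import gevent
from gevent import monkey

monkey.patch_all(thread=False)

import time

def io_task():
    # Patched sleep yields to the gevent hub instead of blocking
    # the whole thread, just like waiting on a socket would.
    time.sleep(0.2)

t0 = time.time()
gevent.joinall([gevent.spawn(io_task) for _ in range(50)])
elapsed = time.time() - t0

# All 50 waits overlap, so total wall time stays near 0.2 s
# rather than 50 * 0.2 = 10 s.
print(elapsed)
```

So the waiting itself is concurrent; what serializes is the CPU work around each request (building it, parsing the response, running the callback), which is why every response's elapsed time grows as more greenlets share the one thread.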