From the gevent docs:

The greenlets all run in the same OS thread and are scheduled cooperatively.

From the asyncio docs:

This module provides infrastructure for writing single-threaded concurrent code using coroutines.

Try as I might, I haven't come across any major Python libraries that implement multi-threaded or multi-process coroutines, i.e. spreading coroutines across multiple threads so as to increase the number of I/O connections that can be made.

I understand that coroutines essentially allow the main thread to pause executing one I/O-bound task and move on to the next, forcing an interrupt only when one of these I/O operations finishes and requires handling. If that is the case, then distributing I/O tasks across several threads, each of which could be operating on a different core, should obviously increase the number of requests you could make.
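To make the cooperative-scheduling behavior concrete, here is a minimal sketch using asyncio. The `fetch` coroutine and its delays are made-up stand-ins for real network calls; the point is that awaiting one task lets the single-threaded event loop run the others in the meantime:

```python
import asyncio

async def fetch(name, delay):
    # Stand-in for an I/O-bound operation: awaiting yields control
    # back to the event loop so other coroutines can make progress.
    await asyncio.sleep(delay)
    return name

async def main():
    # Three "requests" issued concurrently on one thread; total runtime
    # is roughly the longest delay, not the sum of all three.
    return await asyncio.gather(
        fetch("a", 0.1), fetch("b", 0.1), fetch("c", 0.1)
    )

print(asyncio.run(main()))  # ['a', 'b', 'c']
```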

Maybe I'm misunderstanding how coroutines work or are meant to work, so my question is in two parts:

  1. Is it possible to even have a coroutine library that operates over multiple threads (possibly on different cores) or multiple processes?

  2. If so, is there such a library?
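On question 1, nothing in principle prevents it: each OS thread can drive its own event loop, with coroutines still cooperating only within their own loop. A hedged sketch (the `fetch` coroutine and the batch split are illustrative, not from any particular library):

```python
import asyncio
import threading

async def fetch(name):
    await asyncio.sleep(0.05)  # stand-in for an I/O wait
    return name

async def batch(names):
    return await asyncio.gather(*(fetch(n) for n in names))

def run_loop(names, out):
    # Each thread owns a private event loop driving its own coroutines.
    out.extend(asyncio.run(batch(names)))

results = []
threads = [threading.Thread(target=run_loop, args=(group, results))
           for group in (["a", "b"], ["c", "d"])]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # ['a', 'b', 'c', 'd']
```

Note that this is just multiple independent single-threaded schedulers, not coroutines migrating between cores; under CPython's GIL the threads also don't run Python bytecode in parallel.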

Akshat Mahajan
  • "If that is the case, then distributing I/O tasks across several threads, each of which could be operating on different cores, should obviously increase the number of requests you could make." - no, not really, if you're using asynchronous I/O. More CPU power doesn't do much to help you send network requests faster. – user2357112 Apr 03 '16 at 03:54
  • It's less about sending them faster and more about sending more of them out. More threads, each of which are tasked with sending out multiple requests, should mean more network requests are made overall, right? – Akshat Mahajan Apr 03 '16 at 03:59
  • @AkshatMahajan: A single CPU can usually saturate the local network's capacity; until you've reached that point, the gains from multithreading are limited. Keeping it single threaded also means no worries about synchronization, since race conditions aren't possible; other code only executes when you explicitly give up control. Outside of Python, the really high performance webservers usually use techniques that allow a single thread to manage and serve on hundreds or thousands of connections simultaneously; more threads just don't help. – ShadowRanger Apr 14 '16 at 04:06
  • @ShadowRanger I see. Do you happen to know what sort of techniques high performance webservers actually use then? – Akshat Mahajan Apr 14 '16 at 04:13
  • @AkshatMahajan: Differs by OS. Windows/AIX/Solaris have [I/O Completion ports](https://en.wikipedia.org/wiki/Input/output_completion_port), the BSDs (and OSX) have [`kqueue`](https://en.wikipedia.org/wiki/Kqueue), while Linux has [`epoll`](https://en.wikipedia.org/wiki/Epoll). Decent overview [here](https://en.wikipedia.org/wiki/Asynchronous_I/O). – ShadowRanger Apr 14 '16 at 04:18
  • @AkshatMahajan: And [a more in-depth exploration](http://www.kegel.com/c10k.html) that also covers stuff like `sendfile` and the like. – ShadowRanger Apr 14 '16 at 04:24
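The OS mechanisms mentioned in the comments are exposed in Python through the `selectors` module, which picks epoll, kqueue, or similar automatically. A minimal sketch of one thread multiplexing sockets via readiness notification (here a local socket pair stands in for a real client connection):

```python
import selectors
import socket

# One selector watches many sockets; the OS tells us which are ready,
# so a single thread can service them all without blocking on any one.
sel = selectors.DefaultSelector()

def serve_ready(timeout=1.0):
    # Echo uppercased data back on whichever sockets are readable.
    for key, _ in sel.select(timeout):
        conn = key.fileobj
        data = conn.recv(1024)
        if data:
            conn.sendall(data.upper())

client, server = socket.socketpair()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ)

client.sendall(b"ping")
serve_ready()
print(client.recv(1024))  # b'PING'
```

Real servers register thousands of such connections and loop over `sel.select()`; that is the pattern underlying both asyncio's event loop and the C10k-style servers linked above.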

0 Answers