
From Puma's README:

On MRI, there is a Global VM Lock (GVL) that ensures only one thread can run Ruby code at a time. But if you're doing a lot of blocking IO (such as HTTP calls to external APIs like Twitter), Puma still improves MRI's throughput by allowing IO waiting to be done in parallel.

Unfortunately, the README does not explain the mechanism by which Puma improves MRI's throughput.

I know that MRI releases the GVL when performing blocking system IO, but that is an improvement provided by MRI itself, not by Puma.
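This GVL release can be observed directly. Here is a minimal sketch (it uses `sleep` as a stand-in for blocking IO, since `sleep` also releases the GVL while waiting):

```ruby
# Four threads each block for 0.2 s. Because MRI releases the GVL while
# a thread is blocked, the waits overlap instead of running one after
# another, so total wall time stays close to 0.2 s rather than 0.8 s.
start = Process.clock_gettime(Process::CLOCK_MONOTONIC)

threads = 4.times.map do
  Thread.new { sleep 0.2 } # stand-in for a blocking HTTP or database call
end
threads.each(&:join)

elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
puts format("elapsed: %.2fs", elapsed)
```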

I wonder how exactly Puma makes this blocking IO happen in parallel.

Any reference would be appreciated.

Weihang Jian
  • Puma (like iodine) offers a hybrid concurrency model of threads and processes. This means that once Ruby releases the GIL, another worker thread from the server's thread pool can process HTTP requests. – Myst Dec 22 '20 at 00:20

1 Answer


Puma uses the reactor pattern. Since Puma 4.0.0, it has used nio4r for event handling, which means it can use native backends such as epoll and kqueue (via libev). (On JRuby, nio4r supports java.nio.)
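The core of the reactor pattern is a single loop that waits for many IOs at once and dispatches only the ready ones. As a rough illustration, here is a toy one-iteration reactor in plain Ruby using the stdlib's `IO.select` (nio4r's `NIO::Selector` plays the same role, backed by epoll/kqueue; this is a simplified approximation, not Puma's actual code):

```ruby
# A toy reactor step: one pipe stands in for a client socket, and a
# handler table maps each watched IO to the code that services it.
reader, writer = IO.pipe
handlers = { reader => proc { |io| io.read_nonblock(1024) } }

# Simulate a client sending a request.
writer.write("GET / HTTP/1.1\r\n")

# One iteration of the event loop: block until some watched IO is
# readable (with a 1 s timeout), then dispatch to its handler.
ready, = IO.select(handlers.keys, nil, nil, 1)
ready.each do |io|
  data = handlers[io].call(io)
  puts "received #{data.bytesize} bytes"
end
```

A real reactor runs this loop forever, registering new sockets as clients connect; Puma's reactor hands a connection off to a worker thread only once a complete request has been buffered, so slow clients never occupy a thread.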

dentarg