I think my understanding of Celluloid Pool may be broken. I will try to explain below, but first a quick note.
Note: Our system is running against a very fast client passing messages over ZeroMQ.
With the following vanilla Celluloid app:

```ruby
class VanillaClient
  include Celluloid::ZMQ

  def read
    loop { async.evaluate_response(socket.read_multipart) }
  end

  def evaluate_response(data)
    ## the reason for using defer can be found over here.
    Celluloid.defer do
      ExternalService.execute(data)
    end
  end
end
```
our system fails after some time with the error 'Can't spawn more thread' (or something like it).
So we decided to use a Celluloid Pool (to avoid the above-mentioned problem) so that we can limit the number of threads spawned.
My understanding of Celluloid Pool is that it maintains a pool of actors for you so that you can distribute your tasks in parallel.
Hence, I decided to test it, but according to my test cases it seems to behave serially (i.e. things never get distributed or happen in parallel).
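To be clear about the behaviour I expected: a pool, as I understand it, is a bounded set of workers pulling jobs from a shared queue, so that blocking work overlaps. This is not Celluloid's implementation, just a stdlib sketch of that model (`TinyPool` and its sizes are made up for illustration):

```ruby
require "thread"

# A minimal fixed-size worker pool using only the Ruby stdlib.
# NOT Celluloid's implementation -- just the pool model I expected:
# a bounded set of threads pulling jobs from a shared queue.
class TinyPool
  def initialize(size)
    @jobs = Queue.new
    @workers = size.times.map do
      Thread.new do
        while (job = @jobs.pop) # a nil job is the stop signal
          job.call
        end
      end
    end
  end

  def async(&block)
    @jobs << block
  end

  def shutdown
    @workers.size.times { @jobs << nil }
    @workers.each(&:join)
  end
end

pool = TinyPool.new(2)
start = Time.now
2.times { pool.async { sleep 0.3 } } # two blocking jobs
pool.shutdown
elapsed = Time.now - start
# With 2 workers the two 0.3 s jobs overlap, so elapsed is ~0.3 s,
# not ~0.6 s as it would be if the jobs ran serially.
puts elapsed
```

With a pool of Celluloid actors I expected the same overlap, which is not what I observed.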
Example to replicate this:

- `sender-1.rb`: sends message `1` to the_client.rb
- `sender-2.rb`: sends message `2` to the_client.rb
- `the_client.rb`: takes the messages from sender-1 and sender-2 and returns them to receiver.rb. Heads up: the `sleep` is introduced to test/replicate the IO block that happens in the actual code.
- `receiver.rb`: prints the message obtained from the_client.rb
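For reference, the serial behaviour I observed can be simulated with the stdlib alone: the second message is not consumed until the first message's sleep finishes, as if the pool had a single worker (the 20 s sleep from the_client.rb is shortened to 0.3 s here; the variable names are made up):

```ruby
require "thread"

# Simulates the observed behaviour: messages are consumed one after
# another, as if only one worker were draining the queue.
queue = Queue.new
queue << "2" # sender-2.rb ran first
queue << "1" # sender-1.rb ran second

consumed_at = {}
worker = Thread.new do
  2.times do
    msg = queue.pop
    sleep 0.3 # stands in for the blocking sleep in the_client.rb
    consumed_at[msg] = Time.now
  end
end
worker.join

# Message "1" is only consumed after message "2"'s full sleep,
# i.e. the gap between the two is roughly the sleep duration.
puts consumed_at["1"] - consumed_at["2"]
```

What I expected from a pool is the first sketch above (overlapping work); what I got matches this single-worker simulation.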
If sender-2.rb is run before sender-1.rb, it appears that the pool gets blocked for 20 seconds (the sleep time in the_client.rb, which can be seen over here) before consuming the data sent by sender-1.rb.
It behaves the same under Ruby 2.2.2 and JRuby 9.0.5.0. What could be the possible causes for the pool to act in such a manner?