
I'm using Celluloid to create a job-processing server. I have a pool of workers that take tasks from a beanstalkd queue and process them by using Process.spawn to call a PHP script that does a bunch of work.

Here's how I'm executing the PHP command:

# Create a pipe: the child writes to wout, the parent reads from rout
rout, wout = ::IO.pipe
# Spawn the PHP command with stderr merged into stdout, both going to the pipe
pid = Process.spawn(cmd, :err => :out, :out => wout)
# Wait for the child to exit, then close the write end and read the output
_, exit_status = Process.wait2(pid)
wout.close
output = rout.readlines.join("\n")

This works "most" of the time. I've done tests with hundreds of jobs and everything processes fine. But when I put it into production, some PHP commands are hanging indefinitely.

If I kill a hung process and look at the logfile that the PHP command writes, the last log message is one of any number of seemingly random, inconspicuous events; that is, I can't discern any pattern in how far a process gets before it hangs.

The PHP scripts that process these jobs have been used in production for months, but they were executed from cron. So the only thing that has changed is that they're now executed from this new job processor instead.

Am I approaching this the wrong way? Is Ruby somehow pausing or sleeping the process? Or am I not reading the output properly, and is that blocking it?

--- Edit ---

I switched to using the backtick operator to execute the command (blocking doesn't really matter since the Celluloid Actor is async):

# Capture the command's output via backticks; this blocks until the child exits
output = `#{cmd}`
# $? holds the Process::Status of the most recently exited child
pid = $?.pid
exit_status = $?.exitstatus

And so far this is working without issue. How is using the backticks different?


1 Answer


I think this has to do with the way Ruby launches subprocesses and subsequently interacts with them.

I don't know a huge amount about it myself; however, I have found this to be quite useful in understanding the different ways to spawn subprocesses and when to use them.

Having used Celluloid a fair bit, I don't think it is related to the problem you are having.
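
For what it's worth, the hang described is consistent with a pipe-buffer deadlock, a known pitfall of the wait-then-read pattern: Process.wait2 blocks until the child exits, but once the child has written enough output to fill the pipe's buffer (commonly around 64 KB), its writes block too, and neither side can make progress. Backticks avoid this because Ruby reads the child's output while waiting for it to exit. Here is a minimal sketch of a spawn-based version that drains the pipe before waiting, assuming cmd is the same command string as in the question:

# Read the child's output *before* waiting, so the child can never
# block on a full pipe buffer while the parent blocks in wait2.
rout, wout = IO.pipe
pid = Process.spawn(cmd, :err => :out, :out => wout)

# Close the parent's copy of the write end immediately; otherwise
# rout.read would never see EOF, even after the child exits.
wout.close

# read returns once the child exits and its end of the pipe closes.
output = rout.read
rout.close

_, exit_status = Process.wait2(pid)

If you'd rather not manage the pipe yourself, Open3.capture2e from the standard library wraps this same read-while-waiting pattern and returns the combined stdout/stderr string along with the exit status.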
