I'm using Celluloid to create a job processing server. I have a pool of workers that take a task from a beanstalkd queue and process it by using Process.spawn
to call a PHP script that does a bunch of work.
Here's how I'm executing the PHP command:
rout, wout = ::IO.pipe
pid = Process.spawn(cmd, :err=> :out, :out => wout)
_, exit_status = Process.wait2(pid)
wout.close
output = rout.readlines.join
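One thing worth noting about this ordering: the parent calls Process.wait2 before draining the pipe, so if the child writes more than the OS pipe buffer holds (often around 64KB), the child blocks on write and the parent blocks on wait2. A minimal sketch of the same spawn, reordered to read before waiting, might look like this (the `ruby -e` command stands in for the PHP script and is just for illustration):

```ruby
# A child process that writes more than a typical 64KB pipe buffer,
# standing in for the PHP script in this example.
cmd = "ruby -e 'STDOUT.write(\"x\" * 200_000)'"

rout, wout = IO.pipe
pid = Process.spawn(cmd, :err => :out, :out => wout)
wout.close                       # close the parent's write end, or rout.read never sees EOF
output = rout.read               # drain the pipe BEFORE waiting, so the child cannot block on a full buffer
_, exit_status = Process.wait2(pid)
rout.close
```

With the original ordering (wait2 first, read after), this same child would hang once its output exceeded the buffer; whether that is what is happening in production depends on how much the PHP scripts print.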
This works "most" of the time. I've done tests with hundreds of jobs and everything processes fine. But when I put it into production, some PHP commands are hanging indefinitely.
If I kill a hung process and look at the logfile that the PHP command writes, the last log message is some seemingly random, inconspicuous event; I can't discern any pattern in how far a process gets before it hangs.
The PHP scripts to process jobs have been used in production for months but executed on cron. So the only thing that has changed is that they're being executed from this new job processor instead.
Am I approaching this the wrong way? Is Ruby somehow pausing or sleeping the process? Or am I not reading the output properly, and that is blocking it?
--- Edit ---
I switched to using the backtick operator to execute the command (blocking doesn't really matter since the Celluloid Actor is async):
output = `#{cmd}`
pid = $?.pid
exit_status = $?.exitstatus
And so far this is working without issue. How is using the backticks different?
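One concrete difference I can see: backticks read the child's stdout continuously until EOF and only then reap the process, but they capture stdout only, whereas the spawn version merged stderr into stdout with :err => :out. A sketch of keeping that behaviour with backticks, redirecting in the shell instead (the `ruby -e` command is a stand-in for the real PHP command):

```ruby
# Stand-in command that writes one line to stdout and one to stderr.
cmd = "ruby -e 'puts \"out_line\"; warn \"err_line\"'"

# Backticks capture stdout only, so merge stderr in the shell to match
# the :err => :out behaviour of the earlier Process.spawn call.
output = `#{cmd} 2>&1`
exit_status = $?.exitstatus
```

Without the `2>&1`, anything the PHP script prints to stderr would go to the job processor's own stderr instead of being captured in `output`.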