
I have a service A with pretty limited functionality: when clients call it, it just sends a request to service B. That can take quite some time: service A spends most (~99%) of its time waiting for a response from service B. If I were writing it from scratch, I would have used a single-threaded environment like Dart or Node, but the problem is that it's written in PHP. So a child process is spawned for each HTTP request, and each of them occupies one CPU. As a result, a huge number of CPUs is tied up for no good reason: each of them just waits for a response from service B.

Here is an image elaborating on what I have: [diagram: one PHP process per request, each waiting on a response from service B]

Instead, I'd rather have a single process with a single thread, like in Node.js: [diagram: a single event-loop process handling all requests]

So, my question is: how can I achieve this Node.js-like behavior, where only a single CPU is used?

I've checked some async libraries like swoole, reactphp, amphp, etc., but they mostly talk about multiplexing, i.e. when I have several concurrent outgoing requests, I can send them concurrently rather than sequentially. But that's not what I need.

So, what options do I have?
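
To make it concrete, here is roughly the single-process setup I have in mind, sketched with ReactPHP (the service-B URL, endpoint, and port below are placeholders, and I haven't load-tested this; it's just an illustration of the event-loop idea):

```php
<?php
// Sketch: a single PHP process with an event loop that proxies every
// incoming request to service B without blocking on the response.
require __DIR__ . '/vendor/autoload.php';

use Psr\Http\Message\ResponseInterface;
use Psr\Http\Message\ServerRequestInterface;
use React\Http\Browser;
use React\Http\HttpServer;
use React\Http\Message\Response;
use React\Socket\SocketServer;

$browser = new Browser(); // async HTTP client used for calls to service B

$server = new HttpServer(function (ServerRequestInterface $request) use ($browser) {
    // Returning a promise frees the event loop to accept other requests
    // while this one waits for service B.
    return $browser
        ->get('http://service-b.internal/endpoint')   // placeholder URL
        ->then(fn (ResponseInterface $bResponse) =>
            Response::plaintext((string) $bResponse->getBody())
        );
});

$server->listen(new SocketServer('0.0.0.0:8080'));
// react/event-loop runs the loop automatically when the script ends.
```

The idea is that returning a promise from the handler lets this one process keep accepting requests while each call to service B is in flight; whether that actually beats the php-fpm process-per-request model is exactly what I want to validate.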

Vadim Samokhin
  • first we need clarification of the exact scenario: Is service A a web application, and is it written in PHP? Is service B also a web application, and is it written in PHP? For each one, if the answer to any of those is No, then please specify what they are instead. And it's not clear what the specific problem is though. You say CPU is being wasted, but how? If service A needs to wait for a response and you want to do it sequentially, then what's the issue? Is your webserver running out of threads with which to spawn PHP processes, perhaps? PHP by itself is also single-threaded, btw. – ADyson Jun 17 '22 at 15:23
  • What takes a lot of CPU? A waiting for B? Do you use sleep()? Btw, PHP is single-threaded by default. What spawns more processes is the webserver. – Markus Zeller Jun 17 '22 at 15:30
  • Thanks. `I'd rather have`...because? Are you certain this would use less CPU? Might it be slower, since there are lots of requests using a single thread? If service B always takes a long time to respond (how long, exactly?), could you re-architect another way? Could you make background requests to it (e.g. with cron) and cache the results? How often does it return new data? Or does each request from each client need to be unique? And BTW async/multithreaded isn't necessarily the same as concurrent, that's a common misconception. And your PHP threads are effectively concurrent already anyway. – ADyson Jun 17 '22 at 16:38
  • And node doesn't necessarily need to be single-threaded either. It is by default but if you were to use worker threads there it would spread the load between different CPU cores anyway (although it's not a common setup). – ADyson Jun 17 '22 at 16:45
  • ... because in this case only a single CPU is utilized. Each request takes roughly 1 second. Service A takes about 1% of that time, that is, about 10ms. So 100 requests can easily fit into a single concurrency unit (a process/thread residing on a single CPU). – Vadim Samokhin Jun 17 '22 at 16:59
  • It can't be done with cron, and the results can't be cached, due to the very business nature of those requests. – Vadim Samokhin Jun 17 '22 at 17:00
  • I'd suggest something like asp.net which is really good at this sort of thing – ADyson Jun 17 '22 at 17:03
  • But 1s is not that slow really. What is the volume of requests causing high CPU? – ADyson Jun 17 '22 at 17:03
  • Just an idea: Upgrade to PHP 8+ and use [Fibers](https://www.php.net/manual/en/class.fiber) (there's a short Fiber sketch after this comment thread). – Markus Zeller Jun 17 '22 at 17:20
  • It's not that big, about 1k requests/sec. But for certain reasons the number of cores can't be increased. – Vadim Samokhin Jun 20 '22 at 14:24
  • @MarkusZeller What for exactly? I thought some event-loop-based solution makes more sense in my case, no? – Vadim Samokhin Jun 20 '22 at 14:26
  • They can be suspended, like a Promise, to get async behavior. Another suggestion would be using messaging like MQTT. Your question is very opinion-based. – Markus Zeller Jun 20 '22 at 14:34
  • But first of all, the whole application must be single-processed (probably with several running instances), right? Otherwise I'll have the same amount of php-fpm child processes utilizing the same amount of cores, won't I? And messaging is not an option, alas. – Vadim Samokhin Jun 20 '22 at 14:54
  • Using a [message queue](https://www.php.net/manual/de/function.msg-send.php) would be quite easy. A can send a msg to B. B queues it, pulls one out, works on it, and takes the next one when complete. If another request from A comes, it waits for B and does not use CPU resources. – Markus Zeller Jun 20 '22 at 15:05
  • Uhm, yeah, but those requests are by nature 1) synchronous and 2) required to have a sane upper bound on latency, like 1-2 seconds. I get your point, but sadly it's just not an option. – Vadim Samokhin Jun 20 '22 at 15:36
  • I now wonder if this is the sort of architecture question which might be better suited to the Software Engineering stackexchange site. Maybe check their guidance. – ADyson Jun 20 '22 at 15:40
  • `the whole application must be single-processed (probably with several running instances)`...maybe I've missed a subtlety but how is this different to the current scenario really? If you have "several running instances" surely that means several php processes- which then must be in different threads. – ADyson Jun 20 '22 at 15:42
  • And if it was all single threaded then surely that would just slow everything down, or end up using just as much cpu to process things at the same rate. I suspect what would really help is a language / runtime which uses task based asynchronous programming and can offload I/O tasks (such as API requests over the network) onto separate threads, which then frees up the webserver thread to (part-)process other requests while waiting for the response. That doesn't imply concurrent programming necessarily, but it does make your server more efficient. Asp.net and node are both good at that AFAIK – ADyson Jun 20 '22 at 15:45
  • @ADyson `how is this different to the current scenario really` -- in my current version, I rely on the OS doing context switching between processes while php-fpm's child process waits for I/O (more on that [here](https://stackoverflow.com/questions/71815287/does-it-make-sense-to-run-more-php-fpm-children-than-number-of-cpu-cores)). And that is a very expensive operation. In the case of an event loop, there is nothing to switch between, because everything runs in a single process and a single thread. – Vadim Samokhin Jun 20 '22 at 15:56
  • Are you certain that's the main bottleneck? – ADyson Jun 20 '22 at 16:22
  • No. I want to validate that assumption before going all in on this event-loop/async thing. A quick and cheap prototype with a load-testing script, maybe. Any ideas on that? – Vadim Samokhin Jun 20 '22 at 16:28
  • You could also create a lockfile when A is running and remove it once B has answered and A has sent the reply. If another request to A comes in, deny it as long as the lockfile is there. – Markus Zeller Jun 22 '22 at 14:18
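
Following up on the Fibers suggestion above, here is a minimal sketch of the suspend/resume mechanism in PHP 8.1+ (the values passed around are simulated; in practice an event-loop library such as Revolt/Amp v3 would do the resuming once service B responds):

```php
<?php
// Minimal PHP 8.1 Fiber demo: a fiber suspends instead of blocking,
// and is resumed later with the result (simulated here by the caller).

$fiber = new Fiber(function (): void {
    echo "Fiber: sending request to service B...\n";
    // Suspend until someone (normally an event loop) resumes us
    // with the response from service B.
    $response = Fiber::suspend('request-sent');
    echo "Fiber: got response: {$response}\n";
});

$state = $fiber->start();                    // runs until the first suspend
echo "Main: fiber suspended with '{$state}'\n";
$fiber->resume('simulated response from B'); // hand the result back
```

This on its own doesn't make the I/O non-blocking; it only shows the primitive that event-loop libraries build on.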

0 Answers