
I am having trouble serving concurrent, time-consuming requests coming from the same IP.

  • The first request should take 6 minutes to respond (this is normal behaviour; my question is not about making it respond faster)
  • The second request should take less than 100 ms to respond.

What is happening is that the server waits for the first request to finish before sending the response to the second request.

My setup is an AWS EC2 instance with 2 vCPUs (I believe this is relevant for handling concurrent requests).

The request goes through an Nginx server to a PHP-FPM process. I thought the problem was a misconfiguration of PHP-FPM. However, after reading up on it, this is my PHP-FPM configuration:

$ cat www.conf | grep max_children     
;   static  - a fixed number (pm.max_children) of child processes;
;             pm.max_children      - the maximum number of children that can
;             pm.max_children           - the maximum number of children that
pm.max_children = 5

$ cat www.conf | grep start_servers    
;             pm.start_servers     - the number of children created on startup.
pm.start_servers = 2

$ cat www.conf | grep min_spare_servers
;             pm.min_spare_servers - the minimum number of children in 'idle'
; Default Value: min_spare_servers + (max_spare_servers - min_spare_servers) / 2
pm.min_spare_servers = 1

$ cat www.conf | grep max_requests     
;pm.max_requests = 500

What am I missing? Where should I look to debug this behaviour?

Don't hesitate to tell me in the comments if you need more information to help me; I'm just a junior...

Thank you all, and have a good weekend

Hammerbot

1 Answer


Your number of running PHP workers is really low, so it might be that the first request uses up all the available workers, and therefore the second request is blocked.

Try with these settings:

pm.max_children = 20
pm.start_servers = 5
pm.min_spare_servers = 3

The actual useful numbers depend on your real traffic. Basically, pm.max_children is the maximum number of workers that can serve requests simultaneously, and you need to set it to a value that matches your traffic.
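
For reference, a minimal sketch of the relevant pool section in www.conf with the dynamic process manager (pm.max_spare_servers and pm.status_path are my additions here, included only to round out the example and to make monitoring easier):

; spawn workers on demand, up to pm.max_children
pm = dynamic
; hard cap on simultaneously running workers
pm.max_children = 20
; workers created at startup
pm.start_servers = 5
; minimum number of idle workers kept around
pm.min_spare_servers = 3
; maximum number of idle workers before extras are stopped
pm.max_spare_servers = 10
; optional: expose a status page to watch busy/idle workers
pm.status_path = /fpm-status

If you enable pm.status_path (and expose the matching location in your Nginx config), you can watch how many workers are busy while the long request runs, which makes it easy to tell whether you are running out of workers or being blocked by something else.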

Tero Kilkanen
  • Hi, thank you for your response. Your solution does not work in my case; is there something that tells PHP-FPM to use the same server or child for the same IP? – Hammerbot Jan 15 '18 at 15:55
  • As far as I know, there isn't any such mechanism. But did you test increasing the values? Didn't anything change then? – Tero Kilkanen Jan 15 '18 at 17:46
  • Yes, I tested it and it did not change anything... I tried monitoring processor usage, but one core stays at 0% during the request, so I really don't understand. – Hammerbot Jan 16 '18 at 12:14
  • I tried with `ab` and the processes are starting correctly, even with requests from the same IP. Now I have doubts about PHP-FPM's ability to use a second process as soon as a second request comes in; I feel like it only starts using a second process when the first one is "full" of requests. Isn't there some minimum number of queued requests before it uses a second process? – Hammerbot Jan 17 '18 at 09:08
  • After all, the problem was that I use Symfony, and the framework was holding some kind of session lock. Creating multiple logins for my team, as well as declaring the login method as `stateless`, solved the problem. Thank you for your help! – Hammerbot Jan 29 '18 at 08:31
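
A note for anyone hitting the same symptom outside Symfony: with PHP's default file-based session handler, session_start() keeps the session file locked until the script finishes or explicitly closes the session, so two requests sharing the same session cookie are served one after the other no matter how many FPM workers are free. A minimal sketch of the usual workaround in plain PHP (the session key and the output line are placeholders, not part of the original setup):

<?php
// Hypothetical long-running endpoint.
session_start();                        // opens the session and acquires the file lock
$userId = $_SESSION['user_id'] ?? null; // read whatever session data is needed up front
session_write_close();                  // persist and release the lock immediately

// From here on, other requests sharing the same session cookie
// are no longer blocked by this one.
sleep(360);                             // stand-in for the ~6 minute task
echo "Done for user {$userId}";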