
I'm trying to understand the performance impact of having multiple nginx instances (masters) running on the same machine, rather than loading them all into a single instance using different server blocks. How does the use of multiple instances of nginx impact things like worker_processes and worker_connections optimization?

I see tons of advice indicating that worker_processes should mirror the number of cores, and at most should be double the number of cores. I'm also given to understand that worker_connections should match the ulimit, or be a bit under it. Making too many connections available, or having too many workers per core, is supposed to hurt performance.
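For reference, on my machine (2 cores, ulimit of 1024) I read that advice as implying a single instance would normally look something like this (my interpretation of the advice, not what is actually deployed here):

worker_processes 2;             # one worker per core
events {
    worker_connections 1024;    # at or just under the ulimit (per worker)
}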

I have two cores and a ulimit of 1024, but I have 4 instances of nginx each of which has the following settings:

worker_processes 4;
events {
    worker_connections 1024;
}

Doesn't this have the same effect as if I had worker_processes 16; and worker_connections 4096;?

Note: Let me make clear that when I say nginx instances, I mean there are 4 independent master nginx processes, each fed a different config file with similar settings, and each with its own workers.

Note 2: This scenario is something I've inherited and is already in place. I am trying to figure out if I should change the way nginx is configured and have an informed reason for it.

eddiemoya
  • Can you give the reason why you want to do this? – Halfgaar Aug 11 '15 at 20:55
  • @Halfgaar I don't want to do this, this has been done. I am trying to determine if this should be undone, and I'd like to have a good reason to change it. – eddiemoya Aug 11 '15 at 22:23
  • If you have problems with nginx's performance then it should be undone, otherwise let it alone and concentrate on PHP/DB/etc. – Alexey Ten Aug 12 '15 at 06:46
  • I suspect it doesn't make much of a difference, because nginx has no active threads running like Apache. It's event driven, and I don't think the kernel cares which process it has to call back on incoming data. – Halfgaar Aug 12 '15 at 06:55

1 Answer


From a system point of view, there is no inherent difference between running 4 masters with 4 server sections each and running a single master with 16 server sections. Both implement the same architecture: parallelized, event-based processes.

The worker/core ratio must be accounted for with respect to the total number of workers across all of your masters, if you have several. This comes from several constraints (see the sketch after this list):

  • make sure CPUs are not overloaded, so the number of workers should be <= the number of cores
  • make sure parallelization and OS scheduling are used to their best, so the number of workers should be as high as possible
  • HTTP server workers are light on CPU and mostly wait on I/O, so it's actually safe to allocate somewhere between 2x and 4x the number of cores
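Concretely, with your 2 cores the worker budget has to be split across the masters. For instance, if the 4 masters stay, each one could be dropped to a single worker, roughly like this (a sketch for your numbers, not a drop-in config):

worker_processes 1;             # per master: 4 masters x 1 worker = 4 workers total, i.e. 2x your 2 cores
events {
    worker_connections 1024;    # per worker, at or just under the 1024 ulimit
}

A single master would instead carry the whole budget itself, i.e. worker_processes somewhere between 2 and 4.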

It should be a little more efficient with a single master, since a few resources like MIME maps and so on will only be loaded once. But that's a minor point.

It should be more efficient with a single master, because there is a single large pool of workers shared by all servers. If a single server momentarily requires most of the workers (say 16), it might get them. In a multi-master configuration (say 4 masters with 4 workers each), each one only gets to use at most what it has: 4 workers. On the other hand, that might be the desired effect: strictly splitting into 4 instances makes sure each one always gets at least a quarter of your host's attention, but never more.

It should be easier to configure and maintain with 1 master (think: security updates).
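For instance, the usual single-master layout pulls every former instance's server blocks into one tree via includes, roughly like this (a sketch; the paths are only examples):

# /etc/nginx/nginx.conf (single master)
worker_processes 2;
events {
    worker_connections 1024;
}
http {
    include /etc/nginx/conf.d/*.conf;    # one file per former instance, each holding its server { } blocks
}

That way a single package upgrade or reload covers everything at once.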

It should be more resilient with 4 masters: you can crash or totally mess up one master's configuration without touching the other 3.

Unless your 4 masters run different nginx builds, you won't benefit from uber-optimisations like compiling in exactly the set of modules each master needs.

zerodeux
  • A point of clarification. Are you saying that the number of workers combined across all masters should not exceed the number of workers you would use if it were a single master? It makes sense, I just want to make sure. For example, if I have 4 cores and want to use 2 workers per core, I would set 8 workers. If I have more than one master, I should divide that 8 among the masters. If this is what you mean, it makes sense to me. – eddiemoya Sep 01 '15 at 15:06
  • Yes, that's what I'm saying. – zerodeux Sep 02 '15 at 16:05
  • At the beginning of your answer you said it shouldn't make a difference, but by the end it seems like you might have convinced yourself that it does in fact improve performance to have a single master. It makes sense to me: as you said, by having them together, nginx has all the resources at its disposal for whichever servers need them. Split up, they are each limited. – eddiemoya Sep 02 '15 at 16:14
  • Do you know of any literature or documentation that might explain this? You said as much as I had assumed, but I'd like to know whether we're just guessing or not. – eddiemoya Sep 02 '15 at 16:14
  • Literature is missing IMHO. This comes from experience, I've been hosting various things for 15+ years... – zerodeux Sep 03 '15 at 22:16