
I'm setting up NGINX as an SSL reverse proxy for three small web apps with dynamic (PHP) and static content.

What would be considered best practice security-wise and performance-wise when it comes to passing PHP requests?

Should they be passed to the requested web server (NGINX – which would then pass it to PHP-FPM via socket or TCP on the same host) or should they directly be passed to the PHP-FPM server?

All my web apps and the reverse proxy are in separate Jails on FreeBSD. Each Jail has its own NGINX web server and PHP-FPM (or uWSGI and Python).

  • I assume each one of your webapps has its own Nginx and you separate them for security purposes. If that's the case, each app should also have its own separate PHP-FPM, otherwise you don't gain much security by having separate web servers while still having a single PHP-FPM for all three apps (especially since I would consider PHP a bigger attack surface than Nginx). –  Mar 24 '15 at 14:07
  • Thanks @AndréDaniel for pointing that out. Each Jail has its own Nginx web server and its own PHP-FPM. I'll edit the question to make that more clear. I had security and performance in mind when I decided to go for that approach (caching is done by the reverse proxy). – basbebe Mar 24 '15 at 14:10
  • Then your main Nginx reverse-proxy should just forward the requests to the corresponding Nginxes in each app's jail, which in turn would forward PHP-file requests to its own PHP-FPM. If the PHP-FPM of the jail is compromised, it *shouldn't* be able to break out of the jail and affect the main server or the other apps. On the other hand, if you use a single PHP-FPM for all three hosts, an exploit in that FPM would mean all three hosts are compromised. –  Mar 24 '15 at 14:13
  • But wouldn't I gain performance if I forwarded PHP-requests to the PHP-FPM in the jails directly? Is there a security risk in there? – basbebe Mar 24 '15 at 14:14
  • For better performance, I suggest always using UNIX sockets, both for the in-jail FPM as well as for communication between the in-jail Nginxes and the main reverse proxy (nginx can listen on UNIX sockets and your main reverse-proxy should have read access to that in-jail socket which I assume won't be an issue). Also use `keepalive 3600` (or a similar but relatively high value) to keep the socket connections open long enough as to minimize the overhead of opening a connection when a request comes in. –  Mar 24 '15 at 14:15
  • Which keep-alive? The `keepalive_timeout` in Nginx? I've never heard that using UNIX sockets to communicate between Jails would be possible. I don't think I know how I would do that. Is that even a good idea security-wise? – basbebe Mar 24 '15 at 14:18
  • Sorry I misunderstood you, I thought you were talking about having a single PHP-FPM for all three apps. I don't see *much* risk (but it doesn't mean there is none, maybe there's a vulnerability in how Nginx forwards requests to FastCGI) in forwarding requests to in-jail FPMs directly from the main reverse proxy, other than a configuration issue (I prefer having the jails self-contained and exposing a single HTTP port, and their FastCGI-related config is in their own nginx, rather than outsourcing their FCGI config to the main Nginx). –  Mar 24 '15 at 14:19
  • No, UNIX sockets between jails aren't possible because each jail can't access the other's files (and such a socket behaves exactly like a file). However, in-jail UNIX sockets are possible and UNIX sockets between the main host and the jail are possible if it's the main host that's accessing the in-jail socket (as the host can read any file in the jail anyway). And I don't see much of a risk there, UNIX sockets are similar to TCP and it's generally accepted that they're safe. –  Mar 24 '15 at 14:21
  • No, the keep alive parameter I meant was [this one](http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive) and I was wrong on my last comment, its argument is not the time to keep the connection alive but the number of connections, in which case 3600 seems way overkill (though it works just fine on my servers), I suggest you set something like 10. –  Mar 24 '15 at 14:25
  • Thank you very much @AndréDaniel! Since the reverse proxy is also in its own jail, I will stick with TCP communication between the proxy and the web apps. Configuration is a good point. I guess you're right that having each web server connect to its own PHP-FPM instance over a UNIX socket makes configuration easier. – basbebe Mar 24 '15 at 14:25
  • Does my explanation answer your question? (So that I can summarize all of it and post it as an actual answer.) –  Mar 24 '15 at 14:45
  • Yes it does, thanks. The most important point for me is the simplified configuration of each web app: keep as many settings as possible in its own web server and have that communicate with PHP-FPM over a fast UNIX socket. Now I only need to find out which settings to set in the backend and which in the proxy (`cache-control`, `keepalive`, `error_page`, etc…) – basbebe Mar 24 '15 at 14:48

1 Answer


Your main Nginx should act as a reverse proxy and forward HTTP requests to the respective web server of each app. If the main reverse proxy had file-level access to the apps' jails, UNIX sockets would be the better way to communicate with their web servers, but since your proxy is in its own jail you have no choice but to use TCP.

When using TCP, set the `keepalive` parameter in the upstream block so that a number of connections stays open at all times; that way you avoid the overhead of opening and closing a connection on every request. Its argument is the number of idle connections each worker keeps open, and something like 10 should be enough.
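A minimal sketch of such an upstream block in the main reverse proxy (the jail address, port, and names here are placeholders, adjust them to your setup):

```nginx
upstream app1_backend {
    server 10.0.0.11:8080;   # hypothetical jail IP and port
    keepalive 10;            # keep up to 10 idle connections per worker
}

server {
    listen 443 ssl;
    server_name app1.example.com;

    location / {
        proxy_pass http://app1_backend;
        # Both of these are required for upstream keepalive to work:
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
```

Note that `keepalive` has no effect unless the proxied connection uses HTTP/1.1 and the `Connection` header is cleared, hence the last two directives.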

Inside each jail, the web server should use a UNIX socket to communicate with its PHP-FPM for better performance (TCP has more overhead than a UNIX socket, so use the latter wherever possible).
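A sketch of that in-jail wiring, assuming a socket path of `/var/run/php-fpm.sock` and a `www` user (all names and paths are placeholders):

```nginx
# In the jail's nginx configuration:
location ~ \.php$ {
    include       fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass  unix:/var/run/php-fpm.sock;
}
```

And the matching PHP-FPM pool, making the socket readable by the web server:

```ini
; In the jail's PHP-FPM pool configuration:
[www]
listen = /var/run/php-fpm.sock
listen.owner = www
listen.group = www
listen.mode = 0660
```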

Finally, I see no major security issue in having the main reverse proxy communicate directly with the in-jail PHP-FPMs, but it would mean configuring the main reverse proxy to match each in-jail PHP-FPM. That's something I'd rather avoid: I prefer the jails to be self-contained, exposing a single HTTP endpoint on a default port, with the in-jail Nginx handling all the PHP-FPM details. If something needs to change regarding PHP-FPM, you do it inside the jail without touching your main Nginx reverse proxy.

Also, I suggest you try an even lighter web server for the jails, such as Lighttpd, since you really don't need many features in there; even though Lighty's configuration syntax is absolutely horrible, it shouldn't be a problem for such a small setup.

About your last comment

> Now I only need to find out which settings to set in the backend and which in the proxy (`cache-control`, `keepalive`, `error_page`, etc…)

The `keepalive` parameter I mentioned is set in the upstream block of the main Nginx reverse proxy. It only affects the reverse-proxy <-> in-jail server connections and has nothing to do with HTTP keep-alive between clients and your server; keep-alive towards browsers is handled by the last endpoint on your side, which is the reverse proxy. Cache-control headers, on the other hand, are app-dependent (different apps may need different settings) and should be set individually in each app's jail. Put as many settings as possible in each app's jail, and only touch the reverse proxy's configuration for connection-level settings (HTTP keep-alive, TLS, etc.).
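As a rough illustration of that split (all names, paths, and values are placeholders):

```nginx
# Main reverse proxy: connection-level settings only.
server {
    listen 443 ssl;
    keepalive_timeout   65s;   # HTTP keep-alive towards browsers
    ssl_certificate     /usr/local/etc/ssl/example.crt;
    ssl_certificate_key /usr/local/etc/ssl/example.key;

    location / {
        proxy_pass http://10.0.0.11:8080;   # hypothetical jail address
    }
}

# In-jail nginx: app-specific settings.
server {
    listen 8080;
    error_page 404 /404.html;

    location ~* \.(css|js|png)$ {
        add_header Cache-Control "public, max-age=86400";
    }
}
```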

  • Thank you for this in-depth answer! I guess I will stick with Nginx though, since I'm used to its features (rewrites etc.) and I'm lazy. So when I set headers in the backend, they don't necessarily get overwritten by the proxy? How do backend settings like `client_body_timeout` affect the communication between the backend and the proxy? – basbebe Mar 24 '15 at 15:24