
I'm getting a DoS attack on a WordPress site that I host.

173.192.109.118 - - [30/Sep/2015:22:31:36 +0000] "POST /xmlrpc.php HTTP/1.0" 499 0 "-" "Mozilla/4.0 (compatible: MSIE 7.0; Windows NT 6.0)"

I get roughly 140 of these entries in my nginx access log over about 10 seconds (~14 req/second), and then they switch to 502:

173.192.109.118 - - [30/Sep/2015:22:31:46 +0000] "POST /xmlrpc.php HTTP/1.0" 502 537 "-" "Mozilla/4.0 (compatible: MSIE 7.0; Windows NT 6.0)"

At that point, PHP-FPM has to be restarted to restore the site.

So, my question is: Is there anything that I can do to prevent one lone attacker from crashing PHP-FPM?

Most of my (limited) experience has been with Apache, so any advice would be greatly appreciated.

I tried to set sane limits on everything. The server has plenty of RAM under load, so that doesn't seem to be the issue. I just added a rate-limiter from the following tutorial: https://www.howtoforge.com/rate-limiting-with-nginx, and while that appears to delay the agony, it still ends up crashing PHP-FPM.

The /var/log/php5-fpm.log doesn't show anything interesting or useful, apart from a couple of errors I introduced myself when I misconfigured the slowlog path, plus a bunch of success lines from restarting:

[30-Sep-2015 23:03:51] ERROR: Unable to create or open slowlog(/usr/log/www.log.slow): No such file or directory (2)
[30-Sep-2015 23:03:51] ERROR: failed to post process the configuration
[30-Sep-2015 23:03:51] ERROR: FPM initialization failed
[30-Sep-2015 23:05:47] NOTICE: configuration file /etc/php5/fpm/php-fpm.conf test is successful

/etc/php5/fpm/pool.d/www.conf

[www]
user = www-data
group = www-data
listen = /var/run/php5-fpm.sock
listen.owner = www-data
listen.group = www-data
pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
pm.status_path = /status
ping.path = /ping
ping.response = pong
slowlog = /var/log/php-fpm_$pool.slow.log
request_slowlog_timeout = 30
request_terminate_timeout = 30
chdir = /
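With pm.max_children = 5 and request_terminate_timeout = 30, five slow xmlrpc.php POSTs are enough to tie up every worker, so the 502s are consistent with pool exhaustion rather than a memory problem. A sketch of a more resilient pool follows; the numbers are illustrative assumptions, not recommendations, and should be sized to the spare RAM on your own box:

```ini
; sketch for /etc/php5/fpm/pool.d/www.conf (illustrative values)
pm = static
pm.max_children = 20            ; roughly spare_RAM / avg_child_RSS
pm.max_requests = 500           ; recycle each child periodically to clear leaks
request_terminate_timeout = 20  ; kill stuck xmlrpc.php handlers sooner
```

A static pool avoids fork churn under a burst, at the cost of holding the memory permanently.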

/etc/nginx/nginx.conf

user www-data;
worker_processes 4;
pid /run/nginx.pid;
events {
    worker_connections 768;
}
http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    gzip on;
    gzip_disable "msie6";
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    limit_req_zone  $binary_remote_addr  zone=one:10m   rate=1r/s;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

/etc/nginx/sites-enabled/example.com

server {
  server_name localhost www.example.com;
  return 301 http://example.com$request_uri;
}
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    root /var/www/html;
    index index.php index.html index.htm;
    server_name example.com;

    client_max_body_size 500M;

    location / {
        try_files $uri $uri/ /index.php?q=$uri&$args;
    }

    error_page 404 /404.html;

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /var/www/html;
    }

    location ~* \.(jpg|jpeg|png|gif|ico|css|js|woff)$ {
        expires 365d;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
        limit_req zone=one burst=5;
    }

    location /status {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
    }

    location /ping {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
    }

    location ~ /\. {
        deny all;
    }
}
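Since the attack targets only /xmlrpc.php, a narrowly scoped nginx block is cheaper than any PHP-level defense. A sketch, assuming you don't need XML-RPC (swap `deny all` for an allow-list if you do); because an exact-match `location =` wins over the `~ \.php$` regex location, it can sit anywhere in the server block:

```nginx
# drop xmlrpc.php requests before they ever reach PHP-FPM
location = /xmlrpc.php {
    deny all;        # returns 403 without occupying a PHP worker
    access_log off;  # optional: keep the flood out of the access log
}
```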

** UPDATE **

I've updated the title to reflect my question a little better, in hopes that I will attract some quality discussion on PHP-FPM tuning.

As a secondary question, and possibly a more important one: How do I tune/harden PHP-FPM to utilize all my available server resources without it crashing first?

Apache / PHP may not have been as efficient, but it didn't stop serving requests until the server was brought to its knees, and when the attack was over the site came back up on its own. It seems rather unpleasant to have to manually restart a service that got slightly overworked. (14 req/second is really nothing.)

I agree with the ideas to utilize fail2ban to mitigate DoS attacks, but what I'm really worried about is what will happen if/when legitimate traffic reaches 15 req/second.
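To put the 15 req/second worry in concrete terms: throughput is roughly pm.max_children divided by the average request time, and the child count is in turn bounded by RAM. A back-of-the-envelope sketch, where the 0.2 s and 30 MB figures are assumptions you'd measure on your own server, not givens:

```ini
; throughput ≈ pm.max_children / avg_request_seconds
;   5 children / 0.2 s ≈ 25 req/s for fast pages, but a single slow
;   xmlrpc.php POST holding a worker for 30 s cuts that dramatically.
; RAM bound:  pm.max_children ≈ spare_RAM / avg_child_RSS
;   e.g. 1024 MB / 30 MB ≈ 34 children
```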

ryebread
  • Are all of the requests from the same IP? – EEAA Oct 01 '15 at 00:33
  • Yes, they are, which I just blocked with iptables, but I would like to know how to harden this setup, and make it more reliable. I would think that the bottleneck should be RAM & CPU, rather, if configured properly... – ryebread Oct 01 '15 at 00:44
  • Use fail2ban to block these sorts of attackers. – Michael Hampton Oct 01 '15 at 02:01
  • @AndréBorie: I uncommented the slow-log directive *after* the crash happened, just to see if I could get additional logging, and inadvertently forgot to add a full path to the logfile, which kicked off the errors. Before I did that, there weren't any errors at all in the log, and it was still crashing when under attack. :( – ryebread Oct 02 '15 at 14:47
  • Really weird. Can you check whether after the attack the PHP-FPM child processes are still running or are completely gone ? – André Borie Oct 02 '15 at 14:52
  • By the way, Fail2ban is a horrible idea. It's like trying to patch a leak with duct tape. It may work, but it's not the best solution and it's definitely not a solution against a process crashing under load. Instead, fix the problem that crashes the process and eventually rely on Nginx's built-in rate limiting to stop the attack if it's still problematic after the crashing problem is fixed. – André Borie Oct 02 '15 at 14:54
  • Yes, I could still see several `PHP-FPM` processes in `top`, but the site never came back up after the attack, (Nginx just threw a 500 page) which tells me they were hung or something. Thank you for your help, André! I don't have a whole lot of experience, but I was thinking that using `fail2ban` is just hiding the real issue, which is likely to resurface later, in some other way. I will still implement it, since I think it is great for certain kinds of attacks, such as brute-forcing the login, etc., but I want to get to the root of the crashing issue. – ryebread Oct 02 '15 at 15:29
  • @ryebread Did you ever come to a conclusion on this issue? Been having the same problems myself the past couple weeks, and can't come up with a solid solution!! Thanks for the help! – Starboy Jan 19 '16 at 07:29
  • @Starboy: I wish I could say that I did... I have not. There seem to be some attacks out in the wild that target PHP-FPM, and my only "solution" at this point was to put the server behind Cloudflare, which shields it from some of these malformed requests. Not a solution, but it works for me until I can get some better resolution. – ryebread Jan 19 '16 at 15:05
  • @ryebread I've looked deeper into this, and found out you can essentially put in your .htaccess a Deny, All in relation to xmlrpc.php. This should clear up your problems! xmlrpc.php is used in wordpress for some statistics stracking and essentially remote blogging. If you use said functions you can setup NGINX to allow only certain IPS to access xmlrpc.php. (Hope this helps!) – Starboy Jan 19 '16 at 20:14

1 Answer


Basically you have the following choices:

  • use packet filter blocking
  • use nginx blocking like

location / { deny xx.xx.xx.xx; allow all; }

  • increase pm.max_children; a common starting point is CPU cores × 2, and 5 is far too low. After raising it, the pool may well handle 14 requests per second, which really isn't a big number. You are also already using nginx's limit_req directive to limit the request rate; I'd suggest adding a second zone for PHP requests with a lower burst size, or with nodelay so excess requests are rejected immediately instead of queued.
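A sketch of that second zone (the zone name and rates are illustrative, pick values that fit your real traffic):

```nginx
# in the http {} block: a stricter zone just for PHP endpoints
limit_req_zone $binary_remote_addr zone=php:10m rate=2r/s;

# in the location ~ \.php$ block: reject excess immediately instead of queueing
limit_req zone=php burst=5 nodelay;
```

With nodelay, requests over the burst get a 503 from nginx right away, which keeps the flood from piling up in front of PHP-FPM.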
drookie