3

This is my first question! A few days ago I discovered that something is going wrong with nginx: domains A-C respond fine while the others time out. Later the situation reverses: the other domains work and the first ones time out; or every domain works fine. Restarting nginx changes nothing, but after rebooting the server everything works again.

Maybe the reason is that sometimes there are too many visitors and nginx drops connections it can't handle? (Previously Apache ran here, and it occasionally froze the VDS.) But there are no errors in the logs, nothing. In the top output I see only 2-4 MB of swap space in use.

The stack: Arch Linux, nginx, php-fpm.

Config file:

user http http;

worker_processes  1;

error_log /var/log/nginx/nginx.error.log;

events {
    worker_connections  2048;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    error_log   /var/log/nginx/http.error.log;

    sendfile        on;

    gzip        on;
    gzip_static     on;
    gzip_vary   on;

    client_body_buffer_size     1k;
    client_header_buffer_size   1k;
    client_max_body_size        5m;
    large_client_header_buffers 2 1k;

    client_body_timeout 10;
    client_header_timeout   10;
    keepalive_timeout   5 5;
    send_timeout        10;

    server  {
        listen      80;
        server_name www.A.com www.B.org www.F.net;
        if ($host ~* ^www\.(.+))    {set    $domain $1;}
        return  301 $scheme://$domain$request_uri;
    }

    server {
        listen       80; 
        server_name  A.com *.A.com B.org F.net;
        root   /home/user/public_html/$host;

        access_log /var/log/nginx/$host-access.log;
        error_log /var/log/nginx/server.error.log;

        location / {
            try_files   $uri $uri/ /index.php?$args;
            index       index.html index.htm index.php;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   /usr/share/nginx/html;
        }

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        location ~ \.php$ {
            fastcgi_pass   unix:/var/run/php-fpm/php-fpm.sock;
            try_files $uri =404;
            fastcgi_index  index.php;
            fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
            include        fastcgi_params;
        }
    }

}

And of course, I want to find the root cause, not just work around the problem.

Many thanks!

mr.frog
  • 129
  • 6

3 Answers

0

Try doubling worker_connections. Or, if you have more than one core, increase worker_processes to the number of cores.
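A minimal sketch of this suggestion, reusing the directives from the question's config (the values are illustrative, not a recommendation):

```nginx
# "auto" (available since nginx 1.2.5 / 1.3.8) spawns one worker
# per core; on a single-core VDS it is equivalent to
# worker_processes 1.
worker_processes auto;

events {
    # Doubled from the question's 2048.
    worker_connections 4096;
}
```

Note that the rough client capacity is worker_processes x worker_connections, and each request that is proxied to php-fpm consumes both a client connection and an upstream connection.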

V2NEK
  • 1
  • Hm, I have only 1 core :( I increased worker_connections (previously it was 1024), but the problem is still annoying – mr.frog Mar 14 '13 at 18:32
0

This might help you... from the Nginx wiki:

client_body_buffer_size
Syntax: client_body_buffer_size size
Default: 8k|16k
Context: http, server, location

The directive specifies the client request body buffer size.

If the request body size is more than the buffer size, then the entire (or partial) request body is written into a temporary file.

The default size is equal to two memory pages. Depending on the platform, this is either 8K or 16K.

If the Content-Length request header specifies a value smaller than the buffer size, Nginx uses the smaller one, so it does not always allocate a full-size buffer for every request.
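As a hedged illustration only (8k is the platform default quoted above, not a verified fix for the timeouts), the question's 1k setting could be raised like this in the http block:

```nginx
http {
    # Illustrative: the question's 1k forces most request bodies
    # into temporary files; 8k matches the typical default.
    client_body_buffer_size 8k;
}
```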

Serializing the requests with accept_mutex on may help too. I usually check the php-fpm log as well. The best thing is to work out how and why the server fails to serve the intended page. The log is the only friend we have here, so if you know the time when the server stopped responding, there might be something in the log around then.
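A hedged sketch of the accept_mutex suggestion (the directive and its context are standard nginx; whether it helps in this case is untested):

```nginx
events {
    worker_connections 2048;
    # Serialize accept() so workers take turns accepting new
    # connections instead of all waking on each one.
    accept_mutex on;
}
```

With the question's single worker process this has no effect; it only matters if worker_processes is raised above 1.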

Oh, and an nginx reload can do the trick instead of a restart, which halts the service for a while.

  • I checked every log but there's nothing to show. I can't really understand how the buffer size is connected with my problem :( I think there would be errors in the log if nginx couldn't allocate a buffer. And this can't explain why 2 domains work fine when the other 2 don't. – mr.frog Mar 21 '13 at 12:19
  • Just consider that the buffer size is the chunk of data transferred in one request, hence the smaller the buffer, the more requests the server has to serve. But the buffer size can't be too huge either, so a realistic buffer size would be 8K IMO. You can try that and observe – Mar 21 '13 at 14:16
  • I changed the buffer size; unfortunately there is no effect – mr.frog Mar 28 '13 at 10:31
  • What is the exact nature of the error in the browser: 500, 501, 502? I guess this has something to do with PHP memory and execution time too. I'd also check the php-fpm error log. Can you give more information? – Mar 28 '13 at 11:30
0

If you have something similar, try traceroute and check DNS. I'm not sure, but most likely our problem was with the DNS servers.

mr.frog
  • 129
  • 6