
Nginx static file (bundle.js) slow under load

I have a small Node.js app with Nginx in front of it to serve the static files for better performance.

The bundle.js file takes around 1s to serve with no load; however, add some concurrent users and bundle.js takes quite the hit!

This is running in Kubernetes, but it's a brand-new environment, so there should be no other influences.

The CPU doesn't spike, memory is at its lowest usage, and I've tried all the usual sendfile tricks - is there any way I can get some more throughput here?
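
For clarity, by "the usual sendfile tricks" I mean the static-file directives already present in the configs below - roughly this sort of thing in the http block:

sendfile on;                                # hand the file to the kernel rather than copying it through userspace
tcp_nopush on;                              # batch the response headers with the start of the file
tcp_nodelay on;
open_file_cache max=2000 inactive=3600s;    # keep recently used file descriptors open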

default.conf

server {
    listen 80;

    root /mnt/app;
    index index.html index.htm;

    location ~*  \.(jpg|jpeg|png|gif|ico|css|js)$ {
        expires 365d;
    }

    access_log off;

    open_file_cache          max=2000 inactive=3600s;
    open_file_cache_valid    3600s;
    open_file_cache_min_uses 1;
    open_file_cache_errors   off;

    location /public/ {
        try_files $uri $uri/ =404;
    }

    location /Healthcheck/ {
        proxy_pass http://localhost:8080;
    }

    location /auth/login/ {
        proxy_pass http://localhost:8080;
    }

    location /auth/connect/ {
        proxy_pass http://localhost:8080;
    }

    location /data/ {
        proxy_pass http://localhost:8080;
    }
}
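
Tim's comment below also suggests the Nginx proxy cache for the proxied routes; I haven't tried it yet, but a minimal sketch (the zone name, cache path and timings are placeholders, not values I'm actually running) would be:

# In the http block:
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m inactive=60m;

# In the server block:
location /data/ {
    proxy_cache       app_cache;
    proxy_cache_valid 200 1m;              # cache successful responses for a short time
    proxy_pass        http://localhost:8080;
}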

http.conf

user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    sendfile_max_chunk 512;
    # server_tokens off;

    gzip  on;

    include /etc/nginx/conf.d/*.conf;
}

gzip.conf

## Compression.
gzip on;
gzip_buffers 16 8k;
gzip_comp_level 1;
gzip_http_version 1.1;
gzip_min_length 10;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript image/x-icon application/vnd.ms-fontobject font/opentype application/x-font-ttf;
gzip_vary on;
gzip_proxied any; # Compression for all requests.
## No need for regexps. See
## http://wiki.nginx.org/NginxHttpGzipModule#gzip_disable
gzip_disable msie6;

## Serve already compressed files directly, bypassing on-the-fly
## compression.
##
# Usually you don't make much use of this. It's better to just
# enable gzip_static on the locations you need it.
# gzip_static on;
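
If I did enable gzip_static, the sketch would look something like this - assuming the build also emits a pre-compressed bundle.js.gz next to the original (the /Public/JS/ path is taken from the access log entries below):

location /Public/JS/ {
    gzip_static on;     # serve bundle.js.gz directly when the client accepts gzip
    expires 365d;
}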

As per @Tim's comments - CURL output:

My Machine

10.244.0.1 - - [07/Jun/2018:08:15:45 +0000] "GET /Public/JS/Auth/bundle.js HTTP/1.1" 200 1007996 "http://51.144.234.135/auth/login/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.181 Safari/537.36" request-time: 0.579 Upstream-time: - .
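
The request-time / Upstream-time fields at the end of that line come from a tweaked log_format; I don't have the exact string to hand, but it was roughly:

log_format timed '$remote_addr - $remote_user [$time_local] "$request" '
                 '$status $body_bytes_sent "$http_referer" '
                 '"$http_user_agent" '
                 'request-time: $request_time Upstream-time: $upstream_response_time .';

access_log /var/log/nginx/access.log timed;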

CURL localhost

127.0.0.1 - - [06/Jun/2018:15:04:37 +0000] "GET /Public/JS/Auth/bundle.js HTTP/1.1" 200 3081261 "-" "curl/7.60.0" "-"

CURL to IP - From host

10.244.0.1 - - [06/Jun/2018:15:04:57 +0000] "GET /Public/JS/Auth/bundle.js HTTP/1.1" 200 3081261 "-" "curl/7.60.0" "-"
Comments

  • Can you please edit your post to include the exact access log entry that corresponds to the slow serving of bundle.js? It'd be interesting to see if you could reproduce that with a curl from the server - if you do, add that log entry too. The Nginx proxy cache could help this a lot, but you'll have to be a bit careful with caching headers - either setting them in Node or overwriting them in Nginx. – Tim Jun 05 '18 at 20:05
  • I've updated and added the CURL output. – Stuart.Sklinar Jun 06 '18 at 15:07
  • Tried to add response times, however the docker-nginx just seems to ignore my logging format. – Stuart.Sklinar Jun 06 '18 at 15:17
  • Keep trying, that's the key to working out what's going on. Try "nginx -T" (from memory) to work out where the configuration file is stored. – Tim Jun 06 '18 at 20:07
  • OK - updated. Localhost is still instant - therefore I've only added the log from my machine. – Stuart.Sklinar Jun 07 '18 at 09:12
  • The key thing I was looking for was the request time in the log, which might tell me whether the problem was with Nginx or Node. I think you're going to have to do some problem solving yourself, turning on sufficient logging to work out exactly where the request goes and how long it takes, so you can work out where it's slow. – Tim Jun 07 '18 at 17:01
  • Sacked it off and went with Azure CDN. – Stuart.Sklinar Jun 08 '18 at 19:19
