
I'm using Nginx to serve static files on dedicated servers. The servers host no website; they are purely file download servers. File sizes range from megabytes to gigabytes.

Previously I had 8 dedicated servers with 500 Mbps ports at unmetered.com, and each of them performed great.

I decided to buy a 10 Gbps server from FDCServers, because one server is easier to manage than several.

Below are the specs of the server:

Dual Xeon E5-2640 (15M Cache, 2.50 GHz, 7.20 GT/s Intel® QPI) - 24 cores
128 GB RAM
10 Gbit/s network, unmetered
Ubuntu 14.04 LTS
1.5 TB SATA

But my new giant server is not giving more than 500 to 600 Mbps. I installed nload to monitor traffic and upload/download speed, and it reports almost the same throughput as the previous unmetered.com servers.
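I run nload roughly like this (the interface name here is just an example):

$ nload -u m eth0

The -u m flag shows the rates in MBit/s.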

Then I thought it might be due to the read-rate limitation of the SATA hard disk.

So I purchased and installed 3 x 240 GB SSD drives in the new, powerful server.

I moved a file onto one of the SSD drives and downloaded it for testing. The speed is still not good: I'm getting only 250 to 300 Kbps, whereas it should give me at least 2 Mbps (which is the per-IP speed limit set in my Nginx configuration files).
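The rate limiting in my config is along these lines (the zone name, location, and exact values here are illustrative, not a copy of my files):

# in the http block
limit_conn_zone $binary_remote_addr zone=perip:10m;

# in the location block that serves the downloads
limit_conn perip 1;
limit_rate 250k;   # limit_rate is in bytes per second, so 250k is roughly 2 Mbit/s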

I then searched for Gigabit Ethernet tuning settings and found a couple of sysctl settings that need to be tuned for a 10 Gbps network:

http://www.nas.nasa.gov/hecc/support/kb/Optional-Advanced-Tuning-for-Linux_138.html
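The settings I applied are along these lines (the values shown are typical examples from such guides, not necessarily exactly what I used):

# appended to /etc/sysctl.conf, then loaded with: sysctl -p
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.core.netdev_max_backlog = 30000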

I implemented them, but the throughput is still the same as on my previous 500 Mbps servers.

Can you please help me improve the network throughput of this server? I asked the FDCServers support team and they confirmed that their servers can easily give 3 to 5 Gbps, but they can't help me tune it.

After all this tuning and configuration I'm getting only 700 Mbit/s at most.

Let me know if you need more details.

Umar Hayat

2 Answers


Test the memory speed:

For DDR3 1333 MHz (PC3-10600) memory, expect something like:

$ dd if=/dev/zero bs=1024k count=512 > /dev/null
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 0.0444859 s, 12.1 GB/s

Test disk I/O:

$ pv ./100MB.bin > /dev/null
 100MiB 0:00:00 [3.36GiB/s] [=================================================================================================================================================================================>] 100%

Test CPU speed with the help of a pipe:

$ dd if=/dev/zero bs=1024k count=512 2> /dev/null| pv > /dev/null
 512MiB 0:00:00 [2.24GiB/s] [   <=>                                                                                                                                                                                             ]

The nginx download speed from localhost should be ~1.5-2 GB/s.

Checking:

$ wget -O /dev/null http://127.0.0.1:8080/100MB.bin
--2014-12-10 09:08:57--  http://127.0.0.1:8080/100MB.bin
Connecting to 127.0.0.1:8080... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104857600 (100M) [application/octet-stream]
Saving to: ‘/dev/null’

100%[=======================================================================================================================================================================================>] 104,857,600 --.-K/s   in 0.06s   

2014-12-10 09:08:57 (1.63 GB/s) - ‘/dev/null’ saved [104857600/104857600]

Then check your nginx configuration.

Remove these lines:

output_buffers 1 512k;
aio on;
directio 512;

and change

sendfile    off;
tcp_nopush  off;
tcp_nodelay off;

to

sendfile    on;
tcp_nopush  on;
tcp_nodelay on;

Good luck.

Dmitriy

I think you need to split the issues and test independently to determine the real problem - it's no use guessing it's the disk and spending hundreds, or thousands, on new disks if it is the network. You have too many variables to just change randomly - you need to divide and conquer.

1) To test the disks, use a disk performance tool or good old dd to measure throughput in bytes/sec and latency in milliseconds. Read data blocks from disk and write to /dev/null to test read speed. Read data blocks from /dev/zero and write to disk to test write speed - if necessary.
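For example, something along these lines (paths and sizes are placeholders):

# read test: stream a large existing file to /dev/null and note the reported rate
# (use a file bigger than RAM, or drop caches first, so you measure the disk and not the page cache)
$ dd if=/path/to/largefile.bin of=/dev/null bs=1M

# write test: write a 4 GB scratch file; conv=fdatasync flushes it to disk before the rate is printed
$ dd if=/dev/zero of=/mnt/ssd/testfile bs=1M count=4096 conv=fdatasync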

Are your disks RAIDed by the way? And split over how many controllers?

2) To test the network, use nc (a.k.a. netcat) and thrash the network to see what throughput and latency you measure. Read data blocks from /dev/zero and send them across the network with nc. Read data blocks from the network and discard them to /dev/null to test the other direction.
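A rough sketch (the port and host are arbitrary, and some netcat variants take nc -l 2222 without -p):

# on the receiving machine: listen and discard everything
$ nc -l -p 2222 > /dev/null

# on the sending machine: push zeros across the wire and watch the rate with pv
$ dd if=/dev/zero bs=1M count=1024 | pv | nc <receiver-ip> 2222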

3) To test your nginx server, put some static files on a RAMdisk and then you will be independent of the physical disks.
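For instance (the size, mount point, and file name are just examples):

# create a tmpfs RAM disk, copy a test file onto it, and point an nginx location's root at it
$ mkdir -p /mnt/ramdisk
$ mount -t tmpfs -o size=2g tmpfs /mnt/ramdisk
$ cp /path/to/100MB.bin /mnt/ramdisk/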

Only then will you know what needs tuning...

Mark Setchell