2

How can I benchmark the network transfer speed of my servers? We already have an existing file server, but it is fairly old. We recently got a donation of a (relatively) newer P4 box, and I wanted to find a way to test its network+disk I/O speeds to determine if the speed benefit is worth the work to set up a new server.

We're using Debian Lenny as our OS, and all our clients are connecting via HTTP.

lfaraone

9 Answers

5

If you just want to have a look at the bandwidth actually being used, give nload a shot.

I always prefer testing the daemon that is actually serving clients (wget/curl when testing a web server, lftp for FTP servers, etc.). Artificial tests like iperf are better suited to checking the general throughput of your routers, switches, NICs and IP stacks.
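
For example, a quick way to watch live traffic on the server's interface while clients are pulling files (eth0 is just an assumption here; substitute your actual NIC):

nload eth0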

HTH,
PEra

PEra
2

iperf

Here ya go:

http://www.go2linux.org/how-to-measure-your-network-bandwidth-and-performance-with-iperf
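
As a rough sketch of how an iperf run looks (the hostname is a placeholder, and iperf needs to be installed on both machines):

# on the new server (receiver)
iperf -s

# on a client (sender), run a 30-second throughput test against it
iperf -c newserver.example.com -t 30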

MathewC
2

wget will show transfer rates. Stick a large temporary file up on the server (say, dd if=/dev/zero of=tempfile bs=1M count=200) and bring it down with HTTP. Watch out if you're doing any kind of compression in your HTTP server; I believe newer wget builds can do gzip encoding.
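
A rough sketch of that workflow (the path and hostname are placeholders; -O /dev/null keeps the client's own disk out of the measurement):

# on the server: drop a 200 MB test file into the web root
dd if=/dev/zero of=/var/www/tempfile bs=1M count=200

# on the client: pull it over HTTP, discard it, and read the rate wget reports
wget -O /dev/null http://newserver/tempfile

Note that a file of zeroes compresses to almost nothing, which is exactly why the compression caveat above matters.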

Edit:

One could argue that you should also create a script with wget to poll a group of URLs similar in composition to what a "typical" visitor to the site would be accessing. What I describe above is raw, brute-force network/disk I/O bandwidth testing (albeit influenced by disk caching, no doubt). Testing a potentially randomly generated set of requests with wget would be another good test case. You could have a lot of fun with this one... >smile<

Evan Anderson
  • +1 for a practical and quick method. I have used this myself a couple of times. But would this be good for making a go/no-go decision on a server? – nik Jun 27 '09 at 05:59
  • That depends on your go/no-go criteria. If the criterion is only throughput, then yes. If you're talking about the behaviour of the application software being hosted on the machine, then I'd say you need a more rigorous test suite. I've done test suites for web-based applications that involve polling various URLs with wget and parsing the results to see whether the application responded properly. So, certainly, wget *can* be used for more advanced testing, but here the poster was asking only about throughput. – Evan Anderson Jun 27 '09 at 06:30
2

iperf. Or if you want to go old-school, ttcp.

http://www.carumba.com/src/ttcp.c
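
If memory serves, a classic ttcp run looks something like this (hostname is a placeholder):

# on the receiving host
ttcp -r -s

# on the transmitting host
ttcp -t -s newserver.example.com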

Jauder Ho
2

iperf for network bandwidth testing; bonnie++ and/or iozone for disk testing. All of them can be found in the Debian repositories.
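
As a rough sketch once they are installed (the directory and size are placeholders; bonnie++ generally wants a working set larger than RAM so the page cache doesn't skew the result):

# disk benchmark against the array under test (-u is only needed when running as root)
bonnie++ -d /srv/test -s 4096 -u nobody

# or an automatic iozone sweep
iozone -a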

janneb
1

I found that this paper, http://www.nik.no/2009/06-Hansen.pdf, provides a very good comparison of online and standalone bandwidth-testing tools. It covers Abget, Pathload, Netperf and Iperf.

sabre23t
1

Simple and quick solution:

wget http://myserver/large.file.avi

At the end, wget will print the average throughput.

kubanczyk
0

I've had good luck with netperf for looking into network performance issues. I found this Debian link for it.
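
A minimal sketch, assuming the netserver daemon is running on the box under test (hostname is a placeholder):

# on the server
netserver

# on the client: default TCP_STREAM throughput test
netperf -H newserver.example.com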

patjbs
0

Give Siege a go.

Siege is an http regression testing and benchmarking utility.
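
A quick example of the sort of run you might do (URL and numbers are placeholders):

# 15 concurrent simulated users hammering the server for one minute
siege -c 15 -t 1M http://newserver/some/large/file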

Dave Cheney