4

I have a VPS server (WiredTree), running CentOS.

After experiencing some performance issues I created a simple benchmark for disk read/write speed using the following script:

echo Write to disk
dd if=/dev/zero of=test1 bs=1048576 count=2048
echo Read from disk
dd if=test1 of=/dev/null bs=1048576

Here's a sample output:

[bizwayz@host perf]$ ./benchmark
Write to disk
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 11.2601 seconds, 191 MB/s
Read from disk
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 0.789302 seconds, 2.7 GB/s
[bizwayz@host perf]$ ./benchmark
Write to disk
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 3.69129 seconds, 582 MB/s
Read from disk
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 0.789897 seconds, 2.7 GB/s
[bizwayz@host perf]$ ./benchmark
Write to disk
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 9.56615 seconds, 224 MB/s
Read from disk
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 0.882664 seconds, 2.4 GB/s
[bizwayz@host perf]$ ./benchmark
Write to disk
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 3.52512 seconds, 609 MB/s
Read from disk
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 0.784007 seconds, 2.7 GB/s

My question is whether it's normal for the write speed to be so much slower than read.

yby
  • 175
  • 2
  • 6

3 Answers

4

You are running a VPS. This means there are other clients on your physical machine, and how they use the disks affects the read and write performance you see.

Typically on RAID 10 you'll see about half the write rate of the read rate. But since there are a lot of unknown variables, another client could be doing heavy writing to the disk, and that would explain the worse write speeds you're seeing.

It can't hurt to open a ticket with them, but with a VPS, this is what you'll typically see. VPSes are for convenience and value, not for performance.

Edit: To be sure, caching is an issue here, but my point still applies.

Be sure to run the dd command with the conv=fdatasync option to ensure it actually flushes the file data to disk rather than just to memory (the kernel buffers writes by default). For example:

dd bs=1M count=512 if=/dev/zero of=test conv=fdatasync
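A cache-resistant version of the whole benchmark might look like the sketch below. The conv=fdatasync part comes from the dd line above; the iflag=direct read is my addition (O_DIRECT bypasses the page cache, but is not supported on every filesystem), so treat this as a sketch rather than a definitive script:

```shell
#!/bin/sh
# Sketch: benchmark with caching worked around.
# conv=fdatasync makes dd flush file data to disk before it reports
# the write rate; iflag=direct requests O_DIRECT reads that bypass
# the page cache (may fail on filesystems without O_DIRECT support).
benchmark() {
    size_mb=$1
    echo "Write to disk"
    dd if=/dev/zero of=test1 bs=1M count="$size_mb" conv=fdatasync
    echo "Read from disk"
    dd if=test1 of=/dev/null bs=1M iflag=direct
    rm -f test1
}

benchmark 512   # 512 MB here; raise this well past RAM size for real tests
```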
Dave Drager
  • 8,375
  • 29
  • 45
    Sorry for the -1, but with a test file of only 2 GB it's a virtual certainty that caching is wildly distorting the numbers. – Chris S Dec 20 '12 at 20:59
  • @Dave would you say this also applies to Amazon AWS instances? – yby Dec 20 '12 at 21:03
  • @yby - that would be EC2 instances, not AWS instances. And yes, unless you tell amazon to give you dedicated hardware, and pay accordingly, you're sharing the machine with others. – EEAA Dec 21 '12 at 01:24
  • I've modified the script to write an 8GB file (twice the size of RAM in that VPS) and the results make more sense now.. Thanks for all the help @Dave and all others! – yby Dec 22 '12 at 08:13
2

Yes, it's normal. Your file is only around 2 GB and fits completely into the cache, so it's never actually read from the disk, only from the cache. Make the file at least ten times bigger to get any meaningful results, or even more depending on your RAM size (2x the RAM is a good starting point).

I'd really like to have a disk with 2.7 GB/s of read speed :)

Sven
  • 98,649
  • 14
  • 180
  • 226
  • Where do you see that? Looks like his tests are with 2 GB data, not 200 MB data. – HopelessN00b Dec 20 '12 at 20:21
  • @HopelessN00b: Yes, you are right, missed a digit when doing the calc. 2GB is small though in most cases. Having just 2GB of RAM on a mostly unused system would still mean most of the file is cached, affecting the read speed. – Sven Dec 20 '12 at 20:26
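Following the "2x the RAM" rule of thumb, one way to size the test file is to read MemTotal from /proc/meminfo and double it. This is a sketch (the dd line is commented out so you can sanity-check the numbers first):

```shell
#!/bin/sh
# Sketch: size the test file at roughly twice physical RAM so the
# page cache cannot hold the whole file. MemTotal is reported in kB.
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
size_mb=$(( mem_kb * 2 / 1024 ))
echo "RAM: $(( mem_kb / 1024 )) MB -> test file: ${size_mb} MB"
# dd if=/dev/zero of=test1 bs=1M count="$size_mb" conv=fdatasync
```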
1

One problem with your testing technique is that it takes advantage of Linux's internal system buffering, which greatly skews your results.

In general, disk writes are of course slower than disk reads. At the logical file level, writes can be much slower still, since they involve (1) disk allocation and (2) updating directory information, etc. A file-level write therefore involves more operations; it is not a simple atomic operation easily comparable with a file-level read.

In your benchmark, you need to clear out the buffer cache between each dd run, or reboot your machine between each step :-). BTW, there is a simple way to do this by writing to the appropriate /proc entry.

EDIT: The cache clearing process:

 sync && echo 3 > /proc/sys/vm/drop_caches

You should do this (as root) BEFORE each dd command.
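Put together, the testing loop might look like this sketch. Writing to the /proc entry requires root, so this version falls back to a warning when run unprivileged; the 64 MB size is a small placeholder, not a recommendation:

```shell
#!/bin/sh
# Sketch: drop the page cache (plus dentries and inodes) before each
# timed dd, via /proc/sys/vm/drop_caches. Writing there needs root.
drop_caches() {
    sync
    if [ "$(id -u)" -eq 0 ]; then
        echo 3 > /proc/sys/vm/drop_caches
    else
        echo "warning: not root, caches not dropped" >&2
    fi
}

drop_caches
echo "Write to disk"
dd if=/dev/zero of=test1 bs=1M count=64    # small placeholder size
drop_caches
echo "Read from disk"
dd if=test1 of=/dev/null bs=1M
rm -f test1
```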

mdpc
  • 11,856
  • 28
  • 53
  • 67