
I have two 3 TB disks in a software RAID 1 array, and the host OS is 64-bit Debian wheezy.

Issuing:

dd if=/dev/zero of=test bs=64k count=3k oflag=direct && rm test

Yields:

201326592 bytes (201 MB) copied, 1.423 s, 141 MB/s

If I alter the dd command to use synchronized IO (changing the oflag switch from "direct" to "sync"), write performance drops through the floor:

201326592 bytes (201 MB) copied, 76.0286 s, 2.6 MB/s

Obviously, synced IO incurs a performance hit, but I was expecting write throughput to drop to perhaps half or, at worst, a third of the direct equivalent. 2.6 MB/s seems extreme and makes me think there's a problem somewhere.
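For what it's worth, if the slowdown is purely a fixed per-block latency, a larger block size should amortise it. One way to check (same 192 MiB total, written as 1 MiB sync blocks instead of 64k ones):

dd if=/dev/zero of=test bs=1M count=192 oflag=sync && rm test

If throughput scales up roughly with block size here, the cost is per-write overhead rather than raw disk bandwidth.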

corford

1 Answer


Synchronised IO blocks until each write has hit the disk and been confirmed by the controller, so you end up waiting for at least one seek time between blocks. You're getting about 40 64k blocks per second, or one every 25 milliseconds. That's consistent with spending one 10ms seek writing the data and another one updating the metadata in the inode, plus a bit of OS overhead.

This is why you don't want to use synchronised IO unless you really need strong consistency.
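One way to test the metadata theory is to repeat the run with oflag=dsync, which uses O_DSYNC instead of O_SYNC: each write still waits for the data itself to reach the disk, but not for non-essential metadata such as inode timestamps. If the second seek really is going to the inode, this should land well above 2.6 MB/s:

dd if=/dev/zero of=test bs=64k count=3k oflag=dsync && rm test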

pjc50
    "This is why you don't want to use synchronised IO..." This is why you want to use hardware RAID with a BBU – JamesRyan May 09 '13 at 11:32