
I wanted to measure my disk throughput using the following command:

dd if=/dev/zero of=/mydir/junkfile bs=4k count=125000

If junkfile already exists, the measured throughput is about 6 times lower than when junkfile does not exist. I have repeated this many times and the results hold. Does anybody know why?
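A minimal sketch of the comparison being described, using a small file under /tmp for illustration (the path and size are placeholders, not the original test setup) — run dd once against a fresh file and once against an existing one, and compare the throughput dd reports:

```shell
# Hypothetical repro sketch: same write, first to a fresh file,
# then to an already-existing file.
target=/tmp/junkfile_demo1

# Case 1: target does not exist yet.
rm -f "$target"
dd if=/dev/zero of="$target" bs=4k count=256 2>/dev/null

# Case 2: target already exists and is overwritten in place.
dd if=/dev/zero of="$target" bs=4k count=256 2>/dev/null

# 256 blocks of 4 kB = 1 MiB written in each run.
echo "size: $(wc -c < "$target")"
```

Without `2>/dev/null`, each dd run prints its own throughput line to stderr, which is the number being compared in the question.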

Thanks,

Amir.

Amir

1 Answer


In order to minimize disk caching, you need to copy an amount significantly larger than the amount of memory in your system. 2X the amount of RAM in your server is a useful amount.

from http://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm
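The advice above can be sketched as follows — a hypothetical Linux-only script that sizes the write to roughly 2x RAM (read from /proc/meminfo) and adds `conv=fdatasync` so dd's reported rate includes the final flush to disk; the `/mydir/junkfile` path is taken from the question, and the 1 MB demonstration write at the end is just to keep the sketch cheap to run:

```shell
# Read total RAM in kB from /proc/meminfo (Linux-specific).
ram_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)

# Express 2x RAM as a count of 4 kB blocks.
count=$(( ram_kb * 2 / 4 ))

# The full-size test would be (not run here, it writes 2x RAM):
echo "dd if=/dev/zero of=/mydir/junkfile bs=4k count=$count conv=fdatasync"

# Small 1 MB demonstration with the same flushing behavior:
dd if=/dev/zero of=/tmp/junkfile_demo2 bs=4k count=256 conv=fdatasync 2>/dev/null
```

`conv=fdatasync` makes dd call fdatasync(2) before reporting, so the timing covers data actually reaching the disk rather than just the page cache.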

Eric Fortis
  • I think my question then is: if caching is going on, why is the second or third run slower? It should be faster, since the file might already be cached. – Amir May 21 '12 at 20:50
  • I cannot reproduce this behavior; after the first run it's a bit slower, but not even 1 MB/s of difference. Try bonnie++. – Eric Fortis May 21 '12 at 21:18