
Host system:

Ubuntu Server x64 12.04 
mdadm raid 1 (/dev/sda /dev/sdb)
no lvm
dd bs=1M count=256 if=/dev/zero of=filename conv=fdatasync
average ~ 40 MB/s

NCQ is disabled on the disks
Write cache is disabled

Guest system:

Ubuntu Server i386 12.04
with LVM2: 10 GB, 200 GB and 200 GB disks, all in lv-root (LV)
  --- Physical volume ---
  PV Name               /dev/vda5
  VG Name               root-vg
  PV Size               9.76 GiB / not usable 2.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              2498
  Free PE               0
  Allocated PE          2498

  --- Physical volume ---
  PV Name               /dev/vdb
  VG Name               root-vg
  PV Size               195.31 GiB / not usable 4.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              49999
  Free PE               0
  Allocated PE          49999

  --- Physical volume ---
  PV Name               /dev/vdc
  VG Name               root-vg
  PV Size               195.31 GiB / not usable 4.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              49999
  Free PE               0
  Allocated PE          49999

 dd bs=1M count=256 if=/dev/zero of=filename conv=fdatasync
    average ~ 30 MB/s
All guest disks are raw format / virtio bus / no cache / IO mode = native

After some time the write speed in the guest falls to about 1 MB/s, but the host is not under load: the same dd test on the host still shows 30-40 MB/s, and CPU usage is around 10%. Rebooting the guest helps for a while. There are no errors or faults, and no mdadm rebuild or resync is in progress.
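For reference, this is roughly what I look at when the slowdown hits (a minimal sketch; iostat assumes the sysstat package is installed):

    # on the host: confirm the md array is healthy and not resyncing
    cat /proc/mdstat

    # on the guest: watch dirty/writeback pages piling up in the page cache
    grep -E '^(Dirty|Writeback):' /proc/meminfo

    # on the guest: per-device utilization and await times, every 5 seconds
    iostat -x 5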

I have no idea where the problem is or where to dig.


It looks like this helps on the guest: sync && echo 3 > /proc/sys/vm/drop_caches
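As a sketch of the workaround, plus the writeback knobs that look related (the sysctl values below are only an example, not tested settings):

    # flush dirty pages, then drop the page cache, dentries and inodes
    sync && echo 3 > /proc/sys/vm/drop_caches

    # possibly related: make background writeback start earlier, so the
    # dirty cache never grows this large (example values, not tuned)
    sysctl -w vm.dirty_background_ratio=5
    sysctl -w vm.dirty_ratio=10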


Similar problem: on a system with 64 GB of RAM, the Linux buffer cache runs full while copying with dd to /dev/null, and I/O stops until a manual drop_caches.

MealstroM
  • I am not sure whether dd is a useful tool to benchmark disk performance. A better tool to use is bonnie++. Note that when running this test the amount of data you're working with should be larger than the amount of RAM because otherwise things like caching can mess up the results. – aseq May 15 '15 at 19:57
  • First you're using Ubuntu. Second you turned off NCQ (or you just don't have it because your disks are very low quality). – Michael Hampton May 15 '15 at 20:00
  • @Michael Hampton, I do not understand why using Ubuntu is making a difference? – Mircea Vutcovici May 15 '15 at 20:03
  • Host system uses software RAID (md, RAID level 1): NCQ should be off, write cache should be off. About "benchmark disk performance" -- I am not benchmarking it; a write speed of 20-30 MB/s is OK for me. But some time later it drops to a very low level, and it is not possible to work with a server whose maximum write speed is 1 MB/s. Why this happens, I don't know. – MealstroM May 15 '15 at 20:58

1 Answer


I think what happens is that the initial performance of 30-40 MB/s is due to the Linux kernel's caching (and any other caching that may be going on at the hardware level). Once that cache has been "used up", actual disk access starts to kick in and performance drops.
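If you want to take the page cache out of the measurement entirely, a quick sketch (filename is a placeholder, and the filesystem must support direct I/O):

    # O_DIRECT bypasses the page cache, so the number reflects the disk itself
    dd bs=1M count=256 if=/dev/zero of=filename oflag=direct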

In addition, to get better performance out of dd, set the bs= argument to a reasonably large size. Personally I like to set it to about 1/3 to 1/2 of the available RAM. Your setting of 1M is suboptimal and is the main reason for the low performance numbers. But even with an optimal bs= setting you would see a performance drop at some point, as explained above.
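As a rough sketch of the difference (testfile is a placeholder path; note that dd has to allocate the whole bs= buffer in memory at once, which limits how large you can go):

    # small blocks: many write() calls, more per-call overhead
    dd bs=1M count=4096 if=/dev/zero of=testfile conv=fdatasync

    # larger blocks, same 4 GiB total: fewer, bigger writes
    dd bs=512M count=8 if=/dev/zero of=testfile conv=fdatasync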

aseq
  • dd bs=1024M count=1 if=/dev/zero of=filename conv=fdatasync ~ 20 MB/s # dd bs=128M count=8 if=/dev/zero of=filename conv=fdatasync ~ 22 MB/s # average RAM used: 2 of 16 GB total ## 10 MB/s is slow, but I am getting 1 MB/s after some time. That looks like a bug, maybe; the system isn't busy. – MealstroM May 15 '15 at 20:51
  • Looks like this helps: sync && echo 3 > /proc/sys/vm/drop_caches – MealstroM May 17 '15 at 11:53