I've been looking at this for a while now and things aren't lining up with my expectations, but I don't know if it's because something is off, or if my expectations are wrong.
So, I've got a system with over 100 GB of RAM, and I've set dirty_background_bytes to 9663676416 (9 GiB) and dirty_bytes to twice that (19327352832, or 18 GiB).
In my mind, this should let me write up to 9 GiB into a file and have it just sit in memory, never needing to hit disk. My dirty_expire_centisecs is the default of 3000 (30 seconds).
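For reference, a quick sanity check that the byte values above really correspond to the GiB figures I quoted (pure shell arithmetic, nothing system-specific):

```shell
# Values from my sysctl settings
bg=9663676416        # dirty_background_bytes
hard=$((bg * 2))     # dirty_bytes
# 1 GiB = 1073741824 bytes
echo "background threshold: $((bg / 1073741824)) GiB"    # prints 9
echo "hard threshold:       $((hard / 1073741824)) GiB"  # prints 18
```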
So when I run:
# dd if=/dev/zero of=/data/disk_test bs=1M count=2000
and, in another terminal, run:
# while sleep 5; do egrep 'Dirty|Writeback' /proc/meminfo | awk '{print $2;}' | xargs; done
(printing the Dirty, Writeback, and WritebackTmp values from /proc/meminfo, in kB, every 5 seconds)
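In case it helps, here's a variant of that snapshot that keeps the field labels so the columns are self-describing (this is one snapshot per invocation; wrap it in the same while sleep 5 loop to poll):

```shell
# Print Dirty/Writeback/WritebackTmp from /proc/meminfo on one labeled line
awk '/^(Dirty|Writeback|WritebackTmp):/ {printf "%s %s kB  ", $1, $2} END {print ""}' /proc/meminfo
```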
I would have expected it to dump 2 GB into the page cache, sit there for 30 seconds (dirty_expire_centisecs), and then start writing the data out to disk, since it never went anywhere near the 9 GiB background threshold.
Instead what I saw was:
3716 0 0
4948 0 0
3536 0 0
1801912 18492 0
558664 31860 0
7244 0 0
8404 0 0
As soon as the dirty count jumped, writeback was already underway, and it continued until we were back down to where we started.
What I'm actually trying to do is work out whether my process is bottlenecked on disk IO or on something else, but along the way I got confused by this behaviour. My thinking is that as long as the process stays inside the buffer zone, disk write performance shouldn't really matter, since everything should just be landing in memory.
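To illustrate what I mean by separating cache speed from disk speed, the comparison I had in mind is something like the following (assuming GNU dd, since conv=fdatasync is a GNU extension):

```shell
# Plain buffered write: if the data just lands in the page cache,
# this should report a rate close to memory speed
dd if=/dev/zero of=/data/disk_test bs=1M count=2000
# Same write, but dd calls fdatasync() before exiting, so the
# reported rate should reflect actual disk throughput
dd if=/dev/zero of=/data/disk_test bs=1M count=2000 conv=fdatasync
```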
So, am I misunderstanding the way these features are supposed to work, or is something strange going on?