
How can I monitor the write speed to a tmpfs partition? It does not appear in the output of vmstat or iostat.

I am running several Python processes that are writing heavily to tmpfs and contributing to load. The load is high, but CPU, memory, disk IO, etc. are all nominal. The load seems to be taking this heavy tmpfs IO into account indirectly somehow. I'd love to know the write speed so I have an idea of the upper limits per host. I'm running blind; any help would be appreciated.

CarpeNoctem
  • It's not really IO, since it is all in RAM and, as Janne correctly said, no block layer seems to be involved. It seems to me that it's more an allocator/paging or locking problem. Check the percentage values in the "Cpu(s)" line of top (3rd line); one of these percentages should have a high value (a quick way to watch them per CPU is sketched after these comments). – AndreasM Mar 06 '12 at 12:32
  • @AndreasM the software interrupts on one CPU are at ~35%. IOwait and all the other values are quite low. Is that level of soft ints enough to cause this load? – CarpeNoctem Mar 08 '12 at 05:46
  • Yes, I think so. I'm not a kernel developer, but I guess the same interrupt handler is (probably) difficult to execute several times in parallel; that would require very fine-grained locking IMHO. I suspect your Python processes are communicating/sharing data via tmpfs and are hitting a limit. Maybe you can contact the developer(s) of that application about finding other ways (shared memory, some sort of message queue, etc.). – AndreasM Mar 08 '12 at 08:22
  • Actually the softirq handling in Linux has very sophisticated locking capabilities and multicore friendly handling, see http://www.wil.cx/matthew/lca2003/paper.pdf But it could still be that there is some sort of contention here. – AndreasM Mar 08 '12 at 17:13
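
As a practical aside on the comments above, here is a hedged sketch of how the per-CPU soft-interrupt load can be watched outside of top (assuming the sysstat package is installed for mpstat):

mpstat -P ALL 1                    # the %soft column shows soft-interrupt time per CPU
watch -d -n1 cat /proc/softirqs    # raw softirq counters, with changes highlighted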

2 Answers


tmpfs is not a block device, so ordinary block-level I/O monitoring tools such as vmstat and iostat are no good for you.

One way to monitor write speed is to use the pv (Pipe Viewer) command. pv shows throughput statistics for data flowing through a pipe in situations where you would otherwise be flying blind, such as compressing a huge log file or creating a tarball.

Typical uses of pv look like this:

pv /path/to/your/log | gzip >/logarchivedir/log.gz
tar cvfz - /your/directory | pv >/outputdir/yourdir.tar.gz
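
If all you need is a rough feel for how fast your tmpfs mount can accept writes, you could also push a fixed amount of data through pv into it. This is only a sketch: the path under /dev/shm is a placeholder, and it measures a synthetic copy rather than your existing Python writers:

dd if=/dev/zero bs=1M count=1024 | pv > /dev/shm/pv-test.bin   # write 1 GiB of zeros; pv reports the rate
rm /dev/shm/pv-test.bin                                        # clean up the test file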

I hope this helps; you didn't give us much detail to work with.

Janne Pikkarainen
  • I can't really modify the program or pipe the output in any way, so I'm not sure this would be useful. This is a good tool to keep in mind though, so thanks! – CarpeNoctem Mar 06 '12 at 10:22
  • Running PV adds its own bottlenecks, usually invisible at disk speeds, but they will be significant at RAM speeds. Beware. – Marcin Mar 06 '12 at 13:59
  • Why would tools such as `iotop` not work with non-block devices? This does indeed seem to be correct: I mounted `tmpfs` on a folder and created a big file with `dd`, but `iotop` did not register the write speed at all, a rate of about 2 GB/sec. – Adama Jun 27 '17 at 06:49

I had the same monitoring desire. It occurs to me that a file on /dev/shm could be attached to a loop device (i.e. /dev/loop0), and that loop device can be monitored [1]. The problem is that the loop device needs a static backing file formatted with its own filesystem, and that defeats the point of tmpfs's speed.
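
Roughly, such a setup might look like the following (a sketch only; device names, sizes and mount points are placeholders). Writes would then show up for loop0 in /proc/diskstats, although, per [1], iostat itself may not report them:

truncate -s 1G /dev/shm/loop-backing.img       # static backing file living in tmpfs
losetup /dev/loop0 /dev/shm/loop-backing.img   # attach it to a loop device
mkfs.ext4 /dev/loop0                           # the extra filesystem layer mentioned above
mkdir -p /mnt/shm-loop && mount /dev/loop0 /mnt/shm-loop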

But note that while tmpfs cannot easily be monitored, /dev/ram0 and the other ramdisk block devices can be, by default.
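
For example (a sketch, assuming the brd ramdisk module is available; on some systems /dev/ram0 already exists, and the size below is a placeholder), a classic ramdisk block device can be watched with the usual tools:

modprobe brd rd_nr=1 rd_size=1048576           # one ramdisk of 1 GiB, size given in KiB
mkfs.ext4 /dev/ram0
mkdir -p /mnt/ramdisk && mount /dev/ram0 /mnt/ramdisk
iostat -x /dev/ram0 1                          # per-second throughput for the ramdisk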

Perhaps a Linux expert can comment on whether tmpfs can be mounted on a loop device directly; I am uncertain.

[1] iostat does not seem to report statistics for loop devices (tested on Linux 3.14.27 / Fedora 19).

Juan
ck_