I prefer to use

    cd /dev; iostat -xk 3 sd? fio?

to watch disk I/O. (Changing into /dev first lets the shell expand the sd? and fio? globs into the device names iostat expects.) Take a look at this sample excerpt:
    avg-cpu:  %user   %nice %system %iowait  %steal   %idle
               1.20    0.00    4.58    0.00    0.00   94.22

    Device: rrqm/s wrqm/s    r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz  await  svctm  %util
    sdg       0.00   0.00   6.67  238.00  3413.33 39774.67   353.04     0.25   1.02   0.37   9.17
    sda       0.00   0.00   5.33 3570.67  2730.67 42230.50    25.15     0.44   0.12   0.07  25.20
    sdc       0.00   0.00  10.33  795.00  3089.33 44510.00   118.21     0.40   0.47   0.16  12.83
    sdf       0.00   0.00   6.67  254.67  3413.33 40318.67   334.68     0.24   0.93   0.35   9.07
    sdh       0.00   0.00  14.33  338.00  3444.00 43286.67   265.26     0.27   0.78   0.29  10.23
    sdi       0.00   0.00   8.67  906.33  4437.33 44533.17   107.04     0.36   0.40   0.15  14.17
    sdb       0.00   0.00   4.67 2355.33  2389.33 44427.50    39.68     0.51   0.21   0.08  18.87
    sdd       0.00   0.00   7.00  256.00  3414.67 40434.67   333.46     0.32   1.22   0.37   9.60
    sde       0.00   0.00   0.00    0.00     0.00     0.00     0.00     0.00   0.00   0.00   0.00
    fioa      0.00   0.00   0.00    0.00     0.00     0.00     0.00     0.00   0.00   0.00   0.00
Your average IOPS for the interval is the sum of r/s and w/s, and avgrq-sz (the average request size, in 512-byte sectors) gives you an idea of whether the workload is random or sequential.
Take a look at sdg vs. sda in the example above. Both are writing around 40 MB/s to disk, but the average request size is much smaller for sda (a random workload), which is why its IOPS figure is so much higher.
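If you want to do this arithmetic quickly, a small awk sketch over the sample above works; the field positions ($4 = r/s, $5 = w/s, $8 = avgrq-sz) assume the -x column layout shown, and the /2 converts 512-byte sectors to KB:

```shell
# Derive IOPS and average request size (KB) for each sd device
# from two rows of the iostat -xk sample above.
awk '$1 ~ /^sd/ {
    iops = $4 + $5            # total IOPS = reads/s + writes/s
    printf "%s: %.0f IOPS, %.1f KB/request\n", $1, iops, $8 / 2
}' <<'EOF'
sdg 0.00 0.00 6.67 238.00 3413.33 39774.67 353.04 0.25 1.02 0.37 9.17
sda 0.00 0.00 5.33 3570.67 2730.67 42230.50 25.15 0.44 0.12 0.07 25.20
EOF
```

This confirms the point: sdg pushes ~245 large (~176 KB) requests per second, while sda needs ~3576 small (~12.5 KB) requests to move roughly the same bandwidth.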
If you want to track IOPS (and other performance metrics) over an extended period of time, I strongly suggest using nmon to collect the data and generate pretty graphs.
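As a starting point, something like the following kicks off an unattended capture (-f writes to a timestamped file, -s sets the sample interval in seconds, -c the number of samples; the exact values here are just an illustration of a 24-hour run at 30-second intervals):

    nmon -f -s 30 -c 2880

The resulting .nmon file can then be fed to a post-processing tool such as nmonchart to produce the graphs.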