
When copying data from hard disk sdc to sda, I noticed that the number of read requests completed per second was unusually low:

$ iostat -x 1 1
Linux 3.13.0-32-generic (melancholy)    2014-08-15      _x86_64_        (4 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           3.15    0.00    0.94    1.91    0.00   94.00

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.15     6.43    0.37    4.85    14.17  2154.46   829.68     1.35  258.80   26.50  276.74   2.89   1.51
sdb               0.02     1.08    0.63    1.91    10.48    86.95    76.56     0.13   50.08    4.89   65.06   2.98   0.76
sdc               0.35     1.10   29.15    0.18  2140.15     5.11   146.32     0.29    9.98    9.39  107.21   2.12   6.22

Digging further, it seems that each time I start iostat, the first report shows a very low r/s:

$ sudo iostat -x 1 3
Linux 3.13.0-32-generic (melancholy)    2014-08-15      _x86_64_        (4 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           3.15    0.00    0.94    1.95    0.00   93.96

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.15     6.43    0.38    4.98    14.29  2219.15   832.86     1.39  259.93   25.96  277.93   2.89   1.55
sdb               0.02     1.08    0.63    1.91    10.47    86.84    76.55     0.13   50.06    4.89   65.02   2.98   0.76
sdc               0.35     1.10   29.91    0.18  2206.09     5.11   146.98     0.30   10.00    9.43  107.21   2.12   6.37

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.00    0.00    2.01   24.56    0.00   72.43

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00     5.00    0.00    2.00     0.00    28.00    28.00     0.03   14.00    0.00   14.00  14.00   2.80
sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sdc               0.00     0.00  498.00    0.00 42496.00     0.00   170.67     5.35   10.74   10.74    0.00   2.01 100.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.50    0.00    2.75   40.75    0.00   56.00

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00     0.00    0.00  274.00     0.00 139648.00  1019.33   114.82  304.88    0.00  304.88   2.93  80.40
sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sdc               0.00     0.00  500.00    0.00 42632.00     0.00   170.53     5.34   10.70   10.70    0.00   2.00 100.00

$ sudo iostat -x 1 3
Linux 3.13.0-32-generic (melancholy)    2014-08-15      _x86_64_        (4 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           3.15    0.00    0.94    1.95    0.00   93.96

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.15     6.43    0.38    4.99    14.29  2223.10   833.13     1.40  260.11   26.00  278.09   2.89   1.55
sdb               0.02     1.08    0.63    1.91    10.46    86.84    76.55     0.13   50.06    4.89   65.02   2.98   0.76
sdc               0.35     1.10   29.94    0.18  2208.18     5.11   147.00     0.30   10.00    9.43  107.21   2.12   6.37

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.00    0.00    2.00   24.50    0.00   72.50

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00     7.00    0.00    2.00     0.00    36.00    36.00     0.02   10.00    0.00   10.00  10.00   2.00
sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sdc               0.00     0.00  496.00    0.00 42360.00     0.00   170.81     5.27   10.60   10.60    0.00   2.02 100.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.50    0.00    2.26   23.81    0.00   73.43

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sdc               0.00     0.00  499.00    0.00 42616.00     0.00   170.81     5.29   10.61   10.61    0.00   2.00 100.00

$ sudo iostat -x 1 3
Linux 3.13.0-32-generic (melancholy)    2014-08-15      _x86_64_        (4 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           3.15    0.00    0.94    1.96    0.00   93.96

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.15     6.43    0.38    4.99    14.29  2223.99   833.18     1.40  260.12   26.00  278.10   2.89   1.55
sdb               0.02     1.08    0.63    1.91    10.46    86.83    76.55     0.13   50.06    4.89   65.02   2.98   0.76
sdc               0.35     1.10   29.97    0.18  2210.82     5.11   147.03     0.30   10.00    9.43  107.21   2.12   6.38

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.25    0.00    2.01   26.07    0.00   70.68

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00     5.00    0.00    3.00     0.00    32.00    21.33     0.10   34.67    0.00   34.67  34.67  10.40
sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sdc               0.00     0.00  499.00    0.00 42616.00     0.00   170.81     5.38   10.77   10.77    0.00   2.00 100.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.50    0.00    2.49   25.44    0.00   71.57

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00     0.00    0.00  175.00     0.00 86040.00   983.31    40.06  228.89    0.00  228.89   2.77  48.40
sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sdc               0.00     0.00  500.00    0.00 42632.00     0.00   170.53     5.41   10.82   10.82    0.00   2.00 100.00

This is problematic because it means that I cannot automate monitoring with iostat, as the initial reading will be wrong. I could work around it by having the monitoring scripts run iostat -x 1 2 and ignore the first output, but I would really like to understand why this is necessary. Why is the first report for each run showing such low activity?

To clarify, I am referring to the r/s value for sdc, when data (hundreds of GiB) is being copied off that disk onto sda. In each run of iostat, the first value of r/s for sdc is ~29, whereas each subsequent value of r/s for sdc is closer to 500. Why is that?

dotancohen

2 Answers


Citing the iostat manpage:

The first report generated by the iostat command provides statistics concerning the time since the system was booted, unless the -y option is used, when this first report is omitted. Each subsequent report covers the time since the previous report.

This means that the first report shows a low value because it averages activity over the entire time since boot, which dilutes the current burst of reads. Using -y suppresses this initial report.
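Concretely, a monitoring script can either pass -y or take two reports and discard the first. A minimal sketch of the discard approach, filtering on the second avg-cpu header (the exact output layout can vary between sysstat versions, so the filter is demonstrated on stand-in text rather than live iostat output):

```shell
#!/bin/sh
# Two ways to avoid the misleading since-boot report.
#
# 1) Let iostat skip the since-boot report itself:
#      iostat -y -x 1 1
# 2) On versions without -y, take two reports and keep only the
#    second. The awk filter below drops everything before the
#    second "avg-cpu" header:
#      iostat -x 1 2 | awk '/^avg-cpu/{n++} n >= 2'
#
# Demonstrated here on a stand-in for iostat's output:
sample='Linux 3.13.0-32-generic (melancholy)

avg-cpu: since-boot averages
Device: sda (misleading first report)

avg-cpu: one-second interval
Device: sda (real interval data)'
printf '%s\n' "$sample" | awk '/^avg-cpu/{n++} n >= 2'
```

The same pattern works for any fixed number of reports: count the headers and start printing once the since-boot block has passed.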

Sven

Try using cat /proc/diskstats, as it is more predictable. If you automate monitoring with Zabbix or Nagios, you can always calculate the difference between the previous value and the new one.

If I remember correctly, iostat also uses /proc/diskstats.
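A monitoring check built directly on /proc/diskstats could sample the cumulative reads-completed counter (field 4 of each line) twice and take the difference, which is essentially what iostat does between its interval reports. A rough sketch (the device name sdc is an assumption; this falls back to the first listed device so it runs anywhere):

```shell
#!/bin/sh
# Sketch: compute reads/s for one device from /proc/diskstats,
# the same delta a Zabbix/Nagios item would track. Field 4 of
# each line is the cumulative reads-completed count since boot.
dev=sdc
grep -qw "$dev" /proc/diskstats || dev=$(awk 'NR==1 {print $3}' /proc/diskstats)

r1=$(awk -v d="$dev" '$3 == d {print $4}' /proc/diskstats)
sleep 1
r2=$(awk -v d="$dev" '$3 == d {print $4}' /proc/diskstats)
rps=$((r2 - r1))
echo "$dev reads/s: $rps"
```

Because the counters are cumulative since boot, taking deltas between samples sidesteps the since-boot-average problem entirely: the first sample is just a baseline, never a reported value.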

Navern
  • Thank you, but I don't see how `diskstats` is relevant. By the way, it wasn't me who downvoted. – dotancohen Aug 15 '14 at 21:44
  • No problem, it's my mistake that I answered a different question than the one asked. /proc/diskstats is relevant because iostat takes its disk information from there. Quote from the iostat manpage: FILES /proc/stat contains system statistics. /proc/partitions contains disk statistics (for pre 2.5 kernels that have been patched). /proc/diskstats contains disk statistics (for post 2.5 kernels). /sys contains statistics for block devices (post 2.5 kernels). http://linuxcommand.org/man_pages/iostat1.html – Navern Aug 18 '14 at 11:32
  • Thanks. I've since read the manpage, but it's good to reiterate. – dotancohen Aug 18 '14 at 11:57