So I know there is no standardized way of calculating IOPS for an HDD, but from everything I have read, one of the most accurate formulas appears to be the following:
{disk service time} = {seek time} + {rotational latency} + ({block size} / {data transfer rate})

This gives milliseconds per IO, which the book I've been reading calls "Disk Service Time". Rotational latency is calculated as half of one rotation, in milliseconds.
This was taken from the EMC book "Information Storage and Management", arguably a pretty reliable source, right or wrong?
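To make the formula concrete, here is a minimal Python sketch of it as I read it (the function and variable names are my own, not from the book):

```python
def disk_service_time_ms(seek_ms: float, rpm: float,
                         block_mb: float, rate_mb_per_s: float) -> float:
    """Disk service time per IO, in milliseconds."""
    rotational_latency_ms = 0.5 / (rpm / 60) * 1000   # half of one rotation, in ms
    transfer_ms = block_mb / rate_mb_per_s * 1000     # block size / data transfer rate
    return seek_ms + rotational_latency_ms + transfer_ms

def iops(service_time_ms: float) -> float:
    """Convert milliseconds per IO into IOs per second."""
    return 1000 / service_time_ms
```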
Putting this formula into practice, consider this Seagate data sheet. I am going to calculate IOPS for the ST3000DM001 model with a block size of 4 KB, using these values from the sheet:
- Seek Average (Write) = 9.5 ms (I'll be measuring IOPS for writes)
- Spindle speed = 7200 rpm
- Average Data Rate = 156 MB/s
So my variables are:
- Seek Time = 9.5 ms
- Rotational latency = 0.5 / (7200 rpm / 60) ≈ 0.004 s = 4 ms
- Data Rate = 156 MB/s, so 0.156 MB/ms / 0.004 MB = 39
9.5 ms + 4 ms + 39 = 52.5 ms per IO
1 / (52.5 * 0.001) = 19 IOPS
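In Python, my whole calculation is just this (reproducing the arithmetic above exactly, with my rounding):

```python
seek_ms = 9.5                    # Seek Average (Write) from the data sheet
rotational_latency_ms = 4        # 0.5 / (7200 rpm / 60) ≈ 0.004 s, rounded
transfer_term = 0.156 / 0.004    # 0.156 MB/ms divided by 0.004 MB = 39

service_time = seek_ms + rotational_latency_ms + transfer_term   # 52.5
print(round(1 / (service_time * 0.001)))                         # 19 IOPS
```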
19 IOPS for this drive clearly is not right, so what am I doing wrong?