
Seek Time: The amount of time required to move the read/write head from its current position to the desired track.

I am looking for the formula for average seek time used in disk scheduling algorithms.

  • Possible duplicate of [How is Average Seek Time Calculated?](https://stackoverflow.com/questions/41767414/how-is-average-seek-time-calculated) – Matias Barrios Nov 23 '18 at 18:53
  • Here I am looking for the formula we use in disk scheduling algorithms; this is not a duplicate question. – Naresh kumar Nov 23 '18 at 19:25

1 Answer


How to find average seek time in disk scheduling algorithms? I am looking for the formula for average seek time used in disk scheduling algorithms.

The first step is to determine the geometry of the device itself. This is difficult. Modern hard disks cannot be described by the old "cylinders, heads, sectors" triplet: the number of sectors per track differs from track to track (more sectors on outer tracks where the circumference is greater, fewer sectors on inner tracks where the circumference is smaller), and all of the information you can get about the drive (from the device itself, or from any firmware or OS API) is a lie told to keep legacy software happy.

To work around that you need to resort to "benchmarking tactics". Specifically: read LBA sector 0 then LBA sector 1 and measure the time it took (to establish a baseline for "time taken when both sectors are in the same track"); then read LBA sector 0 then LBA sector N in a loop (with N starting at 2 and increasing), measuring the time each pair takes and comparing it to the previous value, looking for a larger jump in time taken that indicates you've found the boundary between "track 0" and "track 1". Then repeat this (starting with the first sector in "track 1") to find the boundary between "track 1" and "track 2", and keep repeating it to build an array of "how many sectors on each track".

Note that it is not this simple - there are various pitfalls (e.g. physical sectors larger than logical sectors, sectors that are interleaved on the track, bad block replacement, internal caches built into the disk drive, etc.) that need to be taken into account. Of course this will be excessively time consuming (e.g. you don't want to do it for every disk every time an OS boots), so you'll want to obtain the hard disk's identification (manufacturer and model number) and store the auto-detected geometry somewhere, so that you can skip the auto-detection if the geometry for that model of disk was previously stored.
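To make that probe concrete, here is a minimal Linux-only sketch of the timing loop (not from the original answer): it assumes raw read access to a hypothetical /dev/sdX, 512-byte logical sectors, and a crude "double the baseline" jump detector. Note that O_DIRECT only bypasses the OS page cache, not the drive's internal cache - one of the pitfalls mentioned above:

```c
/* Sketch of the track-boundary probe. Assumptions (mine, not the answer's):
 * 512-byte logical sectors, raw access to /dev/sdX, and a "2x the baseline"
 * threshold for spotting the extra cost of crossing into the next track. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define SECTOR 512

/* Read lba_a to position the head, then time a read of lba_b. */
static double time_pair(int fd, void *buf, uint64_t lba_a, uint64_t lba_b)
{
    struct timespec t0, t1;
    pread(fd, buf, SECTOR, (off_t)(lba_a * SECTOR));
    clock_gettime(CLOCK_MONOTONIC, &t0);
    pread(fd, buf, SECTOR, (off_t)(lba_b * SECTOR));
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
    int fd = open("/dev/sdX", O_RDONLY | O_DIRECT);  /* hypothetical device */
    if (fd < 0) { perror("open"); return 1; }

    void *buf;                     /* O_DIRECT requires an aligned buffer */
    if (posix_memalign(&buf, SECTOR, SECTOR)) return 1;

    double same_track = time_pair(fd, buf, 0, 1);    /* same-track baseline */
    for (uint64_t n = 2; n < 100000; n++) {
        double t = time_pair(fd, buf, 0, n);
        if (t > same_track * 2.0) {                  /* crude jump detector */
            printf("track 0 appears to end before LBA %llu\n",
                   (unsigned long long)n);
            break;
        }
    }
    free(buf);
    close(fd);
    return 0;
}
```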

The next step is to use the information about the real geometry (not the fake geometry) combined with more "benchmarking tactics" to determine performance characteristics. Ideally you'd be trying to find constants for a formula like expected_time = sector_read_time + rotational_latency + distance_between_tracks * head_travel_time + head_settle_time, which could be done like this (a sketch follows the list):

  • measure the time to read the first sector in the first track and then sector N in the first track, for every value of N (for every sector in the first track); find the minimum time this can take, divide it by 2, and call it sector_read_time (the minimum occurs when the second sector is immediately adjacent, so the pair costs roughly two bare sector reads).
  • using the same measurements, find the maximum time it can take, divide it by the number of sectors in the first track, and call it rotational_latency (the maximum occurs when the second sector has just passed under the head, costing almost a full rotation).
  • measure the time to read the first sector in track N and then the first sector in track N+1, with N ranging from 0 to max_track - 1; take the average and call it time0.
  • measure the time to read the first sector in track N and then the first sector in track N+2, with N ranging from 0 to max_track - 2; take the average and call it time1.
  • assume head_travel_time = time1 - time0
  • assume head_settle_time = time0 - head_travel_time - sector_read_time
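Here is a sketch of how those measurements combine into the four constants. The numbers plugged in below are invented placeholders, not real measurements; real values would come from a probe like the one sketched earlier:

```c
/* Sketch of deriving the constants of:
 *   expected_time = sector_read_time + rotational_latency
 *                 + distance_between_tracks * head_travel_time
 *                 + head_settle_time
 * All input values here are placeholders for illustration only. */
#include <stdio.h>

int main(void)
{
    double min_same_track = 0.00020;  /* fastest same-track pair read (s)   */
    double max_same_track = 0.00830;  /* slowest (almost a full rotation)   */
    int    sectors_track0 = 63;       /* sectors found on track 0           */
    double time0 = 0.00150;           /* avg: track N -> track N+1 (s)      */
    double time1 = 0.00180;           /* avg: track N -> track N+2 (s)      */

    double sector_read_time   = min_same_track / 2.0;
    double rotational_latency = max_same_track / sectors_track0;
    double head_travel_time   = time1 - time0;
    double head_settle_time   = time0 - head_travel_time - sector_read_time;

    printf("sector_read_time   = %.6f s\n", sector_read_time);
    printf("rotational_latency = %.6f s\n", rotational_latency);
    printf("head_travel_time   = %.6f s\n", head_travel_time);
    printf("head_settle_time   = %.6f s\n", head_settle_time);
    return 0;
}
```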

Note that there are various pitfalls with this too (same as before), and (if you work around them) the best you can hope for is a generic estimate (and not an accurate predictor).

Of course this will also be excessively time consuming, and if you're storing the auto-detected geometry somewhere it'd be a good idea to also store the auto-detected performance characteristics in the same place, so that you can skip all of the auto-detection if all of the information for that model of disk was previously stored.

Note that all of the above assumes a stand-alone rotating-platter hard disk with no caching and no hybrid/flash layer, and will be completely useless for a lot of cases. For some of the other cases (SSD, CD/DVD) you'd need different techniques to auto-detect their geometry and/or characteristics. Then there are things like RAID and virtualisation to complicate things more.

Mostly, it's far too much hassle to bother with in practice.

Instead, just assume that cost = abs(previous_LBA_sector_number - next_LBA_sector_number), and/or let the hard disk sort out the optimum order itself (e.g. using Native Command Queuing - see https://en.wikipedia.org/wiki/Native_Command_Queuing).
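For illustration, here is a sketch of using that simple cost model to greedily pick the next request - effectively shortest-seek-first over LBA numbers. The request queue and head position below are made up:

```c
/* Greedy scheduler over the simple cost model:
 *   cost = abs(previous_LBA - next_LBA)
 * The pending queue and starting position are invented for illustration. */
#include <stdint.h>
#include <stdio.h>

static uint64_t cost(uint64_t prev_lba, uint64_t next_lba)
{
    return prev_lba > next_lba ? prev_lba - next_lba : next_lba - prev_lba;
}

int main(void)
{
    uint64_t pending[] = { 98, 183, 37, 122, 14, 124, 65, 67 };
    size_t n = sizeof pending / sizeof pending[0];
    uint64_t current = 53;                    /* current head position (LBA) */

    while (n > 0) {
        size_t best = 0;                      /* find the cheapest request  */
        for (size_t i = 1; i < n; i++)
            if (cost(current, pending[i]) < cost(current, pending[best]))
                best = i;
        printf("service LBA %llu (cost %llu)\n",
               (unsigned long long)pending[best],
               (unsigned long long)cost(current, pending[best]));
        current = pending[best];
        pending[best] = pending[--n];         /* remove serviced request    */
    }
    return 0;
}
```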

Brendan