
Is there a theoretical estimate of the time a packet needs to travel from point A to point B, for example 100 ms for every 100 km? I want to estimate the difference in the time a device needs to send a request to a server from different locations.

user3235881
    The case of the [500 mile e-mail](http://www.ibiblio.org/harris/500milemail.html) is a bit of fun background to such questions. Also, normally what matters in latency is the roundtrip time, as in most cases you want to receive an acknowledgement as well. – HBruijn Sep 22 '16 at 13:48
  • @HBruijn hell, that is hilarious! – Broco Sep 22 '16 at 13:53

3 Answers


The latency depends on:

  • type of transport medium (glass, copper, air)
  • number and type of devices you have on the path
  • saturation of the network

You could calculate the minimum latency for a given distance over a specific medium, but it is difficult to predict saturation or to know the latency added by equipment you have no access to.
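As a rough illustration, here is a minimal sketch of that minimum-latency calculation in Python; the velocity factors are typical textbook values (assumed, not measured), and the result ignores queueing, serialization, and per-hop processing delays:

```python
SPEED_OF_LIGHT_KM_S = 300_000  # in vacuum, approximate

# Typical velocity factors (fraction of c) -- assumed textbook values
VELOCITY_FACTOR = {
    "fiber": 0.67,   # glass
    "copper": 0.66,  # varies widely by cable type
    "air": 1.00,     # radio / free-space links
}

def min_latency_ms(distance_km: float, medium: str = "fiber") -> float:
    """Lower bound on one-way propagation delay, in milliseconds."""
    speed_km_s = SPEED_OF_LIGHT_KM_S * VELOCITY_FACTOR[medium]
    return distance_km / speed_km_s * 1000

print(min_latency_ms(100))         # ~0.5 ms over 100 km of fiber
print(min_latency_ms(100, "air"))  # ~0.33 ms over a line-of-sight radio link
```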

Mircea Vutcovici

This is nearly impossible to estimate precisely, because many things influence the time a request takes.

  1. Physical boundaries (aka the medium): e.g. the propagation speed of an electromagnetic wave in copper is, in theory, close to the speed of light, but the signal can be degraded by electromagnetic interference, cable quality, distance, resistance, etc., which can result in packets having to be sent again
  2. Number of nodes in between: the more nodes there are between your source and your destination, the longer it takes, as every node has to process the packets (e.g. route them, decrement the TTL, etc.)
  3. Computing time of the nodes: you can't know how fast a node processes a packet; how quickly it can forward your packet also depends on its current load
  4. You can't determine which route your packets will take: you can't be certain that two packets sent over the Internet will take the same route, since, simply put, nodes always forward packets to the fastest available next hop. If a node on the path goes down, your packets are rerouted through another node.

You can, however, use tools like traceroute or ping to get a reasonable approximation of the time it takes: e.g. send a ping every 2 seconds and calculate an average over the last 10 measurements, as in the sketch below.
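A minimal sketch of that approach in Python, assuming a Unix-style `ping` whose output contains `time=...` (the output parsing and the target host are assumptions, not part of the original answer):

```python
import re
import subprocess
import time
from collections import deque

HOST = "example.com"       # placeholder target host
window = deque(maxlen=10)  # keep only the last 10 RTT samples

def ping_once(host: str):
    """Send a single ICMP echo via the system ping; return the RTT in ms, or None."""
    result = subprocess.run(["ping", "-c", "1", host],
                            capture_output=True, text=True)
    match = re.search(r"time=([\d.]+)", result.stdout)  # Unix-style output assumed
    return float(match.group(1)) if match else None

while True:
    rtt = ping_once(HOST)
    if rtt is not None:
        window.append(rtt)
        avg = sum(window) / len(window)
        print(f"rtt {rtt:.1f} ms, average of last {len(window)}: {avg:.1f} ms")
    time.sleep(2)
```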

Basically every time-critical service I know of uses plain pings for this (especially video games).

Broco
  • Thank you for the answer. I think the factors influencing the Internet and the routes are quite different if, for example, you are 300 km away from where you are supposed to be. I also found a very interesting paper; maybe I will find an answer there: https://sce.carleton.ca/~abdou/CPV_TDSC.pdf . Thanks anyway – user3235881 Sep 22 '16 at 17:21

Latency is bounded by the speed of light. Packets usually travel through optical fiber, where signals propagate at about two thirds of the speed of light.
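In concrete numbers: two thirds of c is roughly 200,000 km/s, so the one-way lower bound is about 0.5 ms per 100 km, i.e. roughly 1 ms of round-trip time per 100 km of fiber; real paths add routing detours and per-hop processing on top of that.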

tex