
I received the following data by "pinging" berkeley.edu while varying the packet size:

  • 100 bytes: 24 packets transmitted, 24 packets received, 0.0% packet loss; round-trip min/avg/max/stddev = 91.974/94.269/97.487/1.353 ms
  • 200 bytes: 26 packets transmitted, 26 packets received, 0.0% packet loss; round-trip min/avg/max/stddev = 92.730/97.980/119.909/6.525 ms
  • 300 bytes: 26 packets transmitted, 26 packets received, 0.0% packet loss; round-trip min/avg/max/stddev = 92.136/97.066/126.481/6.382 ms

I know that the formula for transmission delay is L/R, where L is the amount of data in bits and R is the link bandwidth. Since I varied the packet size, I was wondering whether I could estimate the transmission delay from the average round-trip times above.
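Concretely, my idea was to fit a line to the (packet size, average RTT) pairs and read the bandwidth off the slope. Here is a minimal sketch of that idea in Python; it assumes the extra bytes cross a single bottleneck link once in each direction (the ping reply is the same size as the request), and with only three noisy points the estimate is obviously very rough:

```python
import numpy as np

# Average RTTs (ms) reported by ping for each payload size (bytes)
sizes_bytes = np.array([100.0, 200.0, 300.0])
avg_rtt_ms = np.array([94.269, 97.980, 97.066])

# Fit RTT = slope * size + intercept: the slope (ms/byte) captures the
# size-dependent transmission delay, the intercept the size-independent
# propagation and processing delay.
slope_ms_per_byte, intercept_ms = np.polyfit(sizes_bytes, avg_rtt_ms, 1)
print(f"slope = {slope_ms_per_byte:.5f} ms/byte, intercept = {intercept_ms:.2f} ms")

# Each extra byte is transmitted twice (request and reply), so the one-way
# transmission delay per byte is roughly slope / 2.  From L/R = d_trans,
# R = 8 bits / (per-byte one-way delay in seconds):
if slope_ms_per_byte > 0:
    r_bps = 8 / ((slope_ms_per_byte / 2) / 1000.0)
    print(f"estimated bottleneck bandwidth ~= {r_bps / 1e6:.2f} Mbit/s")
else:
    print("non-positive slope: measurement noise dominates the size effect")
```

Is this a reasonable way to use the data, or does the variance in the RTTs make it meaningless?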

comsfollower
  • Ping is a tool to check network connectivity, and using it to measure latency is foolish. It will not give accurate measurements, since ICMP is low-priority and the most likely traffic to be queued and/or dropped in its journey from one end to the other. – Ron Maupin Feb 11 '16 at 15:40
  • But for a quick estimation, how would you use that data to find the delay? – comsfollower Feb 11 '16 at 16:20
  • Those data are not actually very useful for doing what you want, since they were derived using ping. You should use a better tool, such as IP SLA, which will give you much more accurate data. Ping times can vary widely on the same path from one test to the next, and ping was never designed for how you want to use it. Your minimum and maximum times are very different (> 20%, suggesting congestion in the path), and they will probably differ substantially from the times of actual data transferred using TCP or UDP. The real latency may only be something like 40 ms. – Ron Maupin Feb 11 '16 at 16:32

0 Answers