
I am using java.time.Instant to determine how quickly a web address responds to a simple request. My overall goal is to measure the latency between my home computer and a service that I set up on a remote machine 200 miles away. Here's a simplified version of the code, which otherwise runs fine:

...//instantiated all necessary variables here....
Instant starts = Instant.now();
...
//Get information from web address
url = new URL(webAddress);
inputstream = url.openStream();
br = new BufferedReader(new InputStreamReader(inputstream));

//Read the full response from the website
while ((line = br.readLine()) != null) 
{
     instanceOutput += line;
}
Instant ends = Instant.now();
...
//Output the duration 
return ("time required to get result: " + Duration.between(starts, ends).toMillis());

This tends to produce results of 0 milliseconds, or 14-16 milliseconds, or occasionally longer periods that are roughly a multiple of 15 milliseconds. But it's physically impossible for the latency to be 0 milliseconds, even if rounded down: the machines are 200 miles apart, so even at the speed of light the signal needs over a millisecond each way, and the round trip cannot come in under roughly 2 milliseconds.
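To check whether the clock itself only advances in those steps, a small probe like the one below could help (this is a sketch I put together for the question, separate from my program; the class name is mine). It samples Instant.now() repeatedly and prints how far the clock jumps each time its value changes:

import java.time.Duration;
import java.time.Instant;

// Sketch: observe how often Instant.now() actually changes value.
// If the underlying OS clock only ticks every ~15-16 ms, the printed
// steps will be in that range rather than around 1 ms.
public class ClockGranularityProbe {
    public static void main(String[] args) {
        Instant previous = Instant.now();
        int observed = 0;
        while (observed < 20) {
            Instant now = Instant.now();
            if (!now.equals(previous)) {
                // Print the size of each clock step in nanoseconds
                System.out.println("clock advanced by "
                        + Duration.between(previous, now).toNanos() + " ns");
                previous = now;
                observed++;
            }
        }
    }
}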

The only similar problem I've been able to find is this question from back in 2009: Timer accuracy in java. Its second answer suggests a reason (though for a different method), and the documentation link in it is broken. It also seems that Java may have updated its timekeeping since then, and the claim that the system timer isn't updated regularly seems extraordinary; I can't find it repeated anywhere else.

Is this a good enough way to calculate a duration to the nearest millisecond? From what I've recently read, the nanosecond timer would be more precise, but I don't need that much precision, and it would be bad form to fix a problem without trying to understand why it happened and learning from it. What am I doing wrong or misunderstanding?
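For reference, here is roughly what I understand the System.nanoTime() version would look like (a sketch only; the class name, URL, and variable names are placeholders, not my real setup):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

// Sketch: the same measurement, but timed with System.nanoTime(), which reads
// a monotonic high-resolution timer intended for measuring elapsed intervals.
public class LatencyProbe {
    public static void main(String[] args) throws Exception {
        String webAddress = "http://example.com/"; // placeholder address
        long startNanos = System.nanoTime();

        // Fetch and read the full response
        URL url = new URL(webAddress);
        StringBuilder response = new StringBuilder();
        try (BufferedReader br = new BufferedReader(new InputStreamReader(url.openStream()))) {
            String line;
            while ((line = br.readLine()) != null) {
                response.append(line);
            }
        }

        long elapsedNanos = System.nanoTime() - startNanos;
        System.out.println("time required to get result: " + (elapsedNanos / 1_000_000) + " ms");
    }
}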

user3685427
  • You can look into LocalTime class in java – Nitishkumar Singh Apr 10 '18 at 02:33
  • Are you running Windows? The clock resolution is 15.6ms (at least it was on 7). – teppic Apr 10 '18 at 02:51
  • For measuring time intervals, you should look into System.nanoTime(). It was built specifically (and solely) for that purpose. I'm not sure if it'll give you any better resolution on Windows, though. – yshavit Apr 10 '18 at 03:16
  • It's an interplay between the JVM and the OS and thus behaves differently in different constellations. You may get better precision (or is that accuracy?) with Java 9 than with Java 8. Can you measure several roundtrips rather than just one and then divide? – Ole V.V. Apr 10 '18 at 05:28
  • @teppic You mean Windows 7? Not Java 7?? A search on *Windows timer resolution* gave a couple of interesting hits. – Ole V.V. Apr 10 '18 at 05:55
  • @OleV.V. I meant Windows 7. I ran into the issue on Windows 7, I don't remember what version of Java (it may well have been 7 by coincidence). – teppic Apr 10 '18 at 06:08
  • This bug is still open for Win 7/Java 9: [Clock.systemUTC has low resolution on Windows](https://bugs.openjdk.java.net/browse/JDK-8180466) – teppic Apr 10 '18 at 06:14
  • I was using Java 9, with Windows Server 2008 (or possibly Windows Server 2012, I can't remember exactly and I can't spin it up again at the moment). I oversimplified my question a bit to keep it simple; I'm using an x-large Google Cloud VM to test a handful of AWS EC2 VMs. I wrote the testing program on my home computer running Windows 10, which would explain how I didn't notice the issue when doing little test runs, before moving it to the Google server for an 18-hour testing session. – user3685427 Apr 10 '18 at 08:56

0 Answers