TL;DR
Is there any edge case where it will behave incorrectly?
Yes, a couple.
- The two are equivalent only for timestamps in 1970 and later; for 1969 and earlier they can give different results.
- The result of the former depends on the time zone, while the result of the latter does not, which leads to differences in some cases.
The 1970 limit
Your current version, setting seconds and nanoseconds to 0, rounds down (towards the beginning of time). The optimized version with division and multiplication rounds towards zero. In this case “zero” is the epoch, the first moment of January 1, 1970 in UTC.
long exampleTimestamp = Instant.parse("1969-12-15T21:34:56.789Z").toEpochMilli();
long with0Seconds = Instant.ofEpochMilli(exampleTimestamp)
.atZone(ZoneId.systemDefault())
.withNano(0)
.withSecond(0)
.toInstant()
.toEpochMilli();
System.out.println("Set seconds to 0: " + with0Seconds);
long dividedAndMultiplied = exampleTimestamp / 1000 / 60 * 1000 * 60;
System.out.println("Divided and multiplied: " + dividedAndMultiplied);
Output from this snippet is (in my time zone and most time zones):
Set seconds to 0: -1391160000
Divided and multiplied: -1391100000
There is a difference of 60 000 milliseconds, a full minute, between the two outputs.
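If you want the optimized version to round down for negative timestamps too, Math.floorDiv does that instead of plain division; a minimal sketch using the example value from above (in a time zone with a whole-minute offset it then agrees with the set-seconds-to-0 result):
long millisPerMinute = 60_000;
long exampleTimestamp = Instant.parse("1969-12-15T21:34:56.789Z").toEpochMilli(); // -1391103211
// Plain division truncates towards zero: -1391100000
long towardZero = exampleTimestamp / millisPerMinute * millisPerMinute;
// Math.floorDiv rounds down (towards the beginning of time): -1391160000
long roundedDown = Math.floorDiv(exampleTimestamp, millisPerMinute) * millisPerMinute;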
Dependency on time zone
You may have an issue with the definition of what it means to remove the seconds: the second of minute has not always been the same in all time zones. For example:
ZoneId zone = ZoneId.of("Asia/Kuala_Lumpur");
ZonedDateTime exampleTime = ZonedDateTime.of(1905, 5, 15, 10, 34, 56, 789_000_000, zone);
// Truncation in time zone
long longTzTimestamp = exampleTime.truncatedTo(ChronoUnit.MINUTES)
.toInstant()
.toEpochMilli();
System.out.println("After truncation in " + zone + ": " + longTzTimestamp);
// Truncation in UTC
long longUtcTimestamp = exampleTime.toInstant()
.truncatedTo(ChronoUnit.MINUTES)
.toEpochMilli();
System.out.println("After truncation in UTC: " + longUtcTimestamp);
After truncation in Asia/Kuala_Lumpur: -2039631685000
After truncation in UTC: -2039631660000
There is a difference of 25 seconds (25 000 milliseconds) between the two timestamps. The only difference I made was the order of two operations: truncation to whole minutes and conversion to UTC. How come the result is different? Until June 1, 1905 Malaysia was at offset +06:55:25 from GMT. So when the second of minute was 56 point something in Malaysia, it was 31 point something in GMT. So we are not removing the same number of seconds in both cases.
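You can verify that historical offset from the zone rules; a small sketch (the exact value comes from the tz database that ships with your Java runtime):
ZoneOffset offsetIn1905 = ZoneId.of("Asia/Kuala_Lumpur")
        .getRules()
        .getOffset(LocalDateTime.of(1905, 5, 15, 10, 34, 56));
System.out.println(offsetIn1905); // +06:55:25 with the tz data I have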
Again, I don’t think this will be a problem for timestamps after 1973. Nowadays time zones tend to use offsets that are a whole number of minutes from UTC.
Edit:
(Does this ever happen after 1970?)
A bit. For example, Liberia was on offset -0:44:30 until January 6, 1972. And it’s anybody’s guess what politicians in some country will decide next year or the year after that.
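If you are curious, you can also scan the tz database for zones whose offset is not a whole number of minutes at a given instant; a sketch (the output depends on the tz data in your Java runtime):
Instant when = Instant.parse("1971-06-30T00:00:00Z");
ZoneId.getAvailableZoneIds().stream()
        .map(ZoneId::of)
        .filter(z -> z.getRules().getOffset(when).getTotalSeconds() % 60 != 0)
        .forEach(z -> System.out.println(z + "  " + z.getRules().getOffset(when)));
// With the tz data I have this prints Africa/Monrovia  -00:44:30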
Checking for edge cases
One way to check whether you are hitting one of the cases mentioned in the foregoing is using assert:
public static long toMinuteResolution(long timestamp) {
    // The optimization rounds towards zero, so it only matches rounding down for non-negative timestamps
    assert timestamp >= 0 : "This optimized method doesn’t work for negative timestamps.";
    // The optimization truncates in UTC, so the offset must be a whole number of minutes
    assert Duration.ofSeconds(Instant.ofEpochMilli(timestamp).atZone(ZoneId.systemDefault()).getOffset().getTotalSeconds())
                    .toSecondsPart() == 0
            : "This optimized method doesn’t work for an offset of "
                    + Instant.ofEpochMilli(timestamp).atZone(ZoneId.systemDefault()).getOffset();
    return TimeUnit.MINUTES.toMillis(TimeUnit.MILLISECONDS.toMinutes(timestamp));
}
Since you wanted to optimize, I expect these checks to be too expensive for your production environment. You know better than I whether enabling them (with the -ea VM option) in your test environments will give you some assurance.
Further suggestions
As Andreas said in the comments, the truncatedTo method makes the non-optimized version a bit simpler and clearer:
public static long toMinuteResolution(long timestamp) {
return Instant.ofEpochMilli(timestamp)
.atZone(ZoneId.systemDefault())
.truncatedTo(ChronoUnit.MINUTES)
.toInstant()
.toEpochMilli();
}
You can use truncatedTo directly on the Instant too if you want, as in Andreas’ comment.
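That could look like this, a sketch; truncation on the Instant happens in UTC, so the result does not depend on the system time zone:
public static long toMinuteResolution(long timestamp) {
    return Instant.ofEpochMilli(timestamp)
            .truncatedTo(ChronoUnit.MINUTES)
            .toEpochMilli();
}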
If you want to go with your optimization anyway, for slightly better readability my optimized version would be:
private static final long MILLIS_PER_MINUTE = TimeUnit.MINUTES.toMillis(1);
public static long toMinuteResolution(long timestamp) {
return timestamp / MILLIS_PER_MINUTE * MILLIS_PER_MINUTE;
}
I might even try the following and see whether it is efficient enough. I expect no noticeable difference.
public static long toMinuteResolution(long timestamp) {
return TimeUnit.MINUTES.toMillis(TimeUnit.MILLISECONDS.toMinutes(timestamp));
}
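Whichever variant you pick, a quick manual check might look like this (just a sketch; toMinuteResolution is any of the versions above):
long input = Instant.parse("2019-08-15T10:34:56.789Z").toEpochMilli();
long truncated = toMinuteResolution(input);
System.out.println(Instant.ofEpochMilli(truncated)); // 2019-08-15T10:34:00Z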
Link
Time Changes in Monrovia Over the Years