You cannot know the future
You fundamentally cannot predict what future optimisers might or might not do in any language.
Looking ahead, the best odds are for the OS itself to provide constant-time comparison functions; that way they can be properly tested and used in all environments.
This has been going on for quite some time already, e.g. the timingsafe_bcmp() function first appeared in OpenBSD 4.9's libc (released in May 2011).
Obviously programming environments need to pick these up and/or provide their own functions that they guarantee will not be optimised away.
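For Java specifically, the closest thing to such a platform-provided function is probably java.security.MessageDigest.isEqual(byte[], byte[]), whose implementation was changed (around JDK 6u17, as far as I recall) to compare all bytes instead of returning at the first mismatch. A minimal sketch, assuming you trust the implementation in the JDK you actually deploy on:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class ConstantTimeCompare {
    public static void main(String[] args) {
        // MessageDigest.isEqual() is the JDK's own comparison helper; in
        // current JDKs it compares every byte instead of bailing out at the
        // first difference (verify against your JDK's sources).
        byte[] expected = "secret-token".getBytes(StandardCharsets.UTF_8);
        byte[] provided = "guessed-token".getBytes(StandardCharsets.UTF_8);

        boolean match = MessageDigest.isEqual(expected, provided);
        System.out.println("match = " + match);
    }
}
```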
Inspect assembly code
There is some discussion of optimisers here. It is C (and C++) minded, but the point is language independent: you can only look at what current optimisers do, not at what future optimisers might do.
Anyway, they rightly recommend inspecting the assembly code to learn what your optimiser actually does.
For Java that is not necessarily as "easy" as for C or C++, given its JIT-compiled nature, but for specific security functions it should not be impossible to make that effort for current environments.
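On HotSpot, one way to make that effort is to warm a method up until the JIT compiles it and then dump the generated machine code with the diagnostic PrintAssembly/CompileCommand=print flags (which need the hsdis disassembler plugin installed). A rough sketch; the exact flag syntax and output depend on your JVM build:

```java
// A small harness to get the JIT to compile the comparison so its machine
// code can be inspected. Run it on HotSpot with hsdis installed, e.g.
// (hedged: exact flags depend on your JVM):
//   java -XX:+UnlockDiagnosticVMOptions \
//        -XX:CompileCommand=print,JitInspect::constantTimeEquals JitInspect
public class JitInspect {
    static boolean constantTimeEquals(byte[] a, byte[] b) {
        if (a.length != b.length) return false;
        int diff = 0;
        for (int i = 0; i < a.length; i++) {
            diff |= a[i] ^ b[i];          // no early exit on mismatch
        }
        return diff == 0;
    }

    public static void main(String[] args) {
        byte[] x = new byte[32];
        byte[] y = new byte[32];
        boolean sink = false;
        // Warm the method up so the JIT compiles it and prints its assembly.
        for (int i = 0; i < 1_000_000; i++) {
            y[i & 31] ^= 1;
            sink ^= constantTimeEquals(x, y);
        }
        System.out.println(sink);
    }
}
```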
Avoidance might be possible
You could try to avoid the timing attack.
E.g.:
Although intuitively adding a random delay might seem the thing to do, it won't work: the attacker is already using statistical analysis in timing attacks, so you merely add some more noise.
https://security.stackexchange.com/questions/96489/can-i-prevent-timing-attacks-with-random-delays
Still, that does not mean you cannot make a constant-time implementation if your application can afford to be slow enough, i.e. to wait long enough. E.g. you could do the comparison, wait for a timer to go off, and only then continue processing the result, avoiding the timing attack anyway.
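A sketch of that idea in Java: do the comparison, then sleep until a fixed deadline before returning, so callers see roughly the same elapsed time either way. The 50 ms budget and the method name are made up for illustration, and OS scheduling granularity still adds some jitter:

```java
import java.security.MessageDigest;
import java.util.concurrent.TimeUnit;

public class DeadlineCompare {
    /**
     * Compares the two values, then waits until a fixed deadline has passed
     * before returning, so callers observe (roughly) the same elapsed time
     * regardless of how the compare itself behaved. The 50 ms budget is an
     * arbitrary illustration; it must comfortably exceed the worst-case
     * comparison time on your hardware.
     */
    static boolean compareWithDeadline(byte[] expected, byte[] provided)
            throws InterruptedException {
        long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(50);
        boolean match = MessageDigest.isEqual(expected, provided);
        long remaining = deadline - System.nanoTime();
        if (remaining > 0) {
            TimeUnit.NANOSECONDS.sleep(remaining);
        }
        return match;
    }
}
```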
Detection
It should be possible to build detection of timing-attack vulnerability into applications that use a constant-time comparison implementation.
Either:
- some test that is run during initialisation
- the same test run regularly as part of normal operations.
Again, the optimiser is going to be tricky to deal with, as it can (and sometimes will) even change the order in which things are executed. But you could, for example, take inputs that the program does not have in its code (e.g. read them from an external file) and run the comparison twice with a normal compare: once with identical strings and once with completely different strings (XORed, for example), and then run the same two inputs again with a constant-time compare. You now have four timings: the two normal compares should not take the same time, while the two constant-time compares should be slower and take the same time as each other. If that check fails, warn the user/maintainer of the application that the constant-time code is likely broken in production usage; a rough sketch of such a self-test follows at the end of this section.
- A theoretical option is to collect actual timings yourself (recording fail/success as well) and statistically analyse them yourself. But that would be tricky in practice: you would be measuring just one comparison per request rather than looping it a few million times, your measurements would need to be extremely accurate, and you probably won't have the resolution to measure a single comparison accurately enough.
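A rough sketch of the four-timings self-test described above, in Java. The helper names and the 10% tolerance are made up for illustration, the inputs would come from an external file in a real test (they are hard-coded here only to keep the sketch short), and each compare is run in a batch because a single comparison is far below the clock's useful resolution:

```java
import java.security.MessageDigest;
import java.util.Arrays;
import java.util.function.BiPredicate;

public class TimingSelfTest {
    // Deliberately "leaky" compare, for contrast with the constant-time one.
    static boolean naiveEquals(byte[] a, byte[] b) {
        if (a.length != b.length) return false;
        for (int i = 0; i < a.length; i++) {
            if (a[i] != b[i]) return false;   // early exit leaks the mismatch position
        }
        return true;
    }

    // Times a batch of compares; one call is too short to measure on its own.
    static long timeBatch(BiPredicate<byte[], byte[]> cmp,
                          byte[] a, byte[] b, int iterations) {
        boolean sink = false;
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            sink ^= cmp.test(a, b);
        }
        long elapsed = System.nanoTime() - start;
        if (sink) System.out.print("");        // use the result so the loop is not removed
        return elapsed;
    }

    public static void main(String[] args) {
        // In a real self-test these inputs would come from an external file
        // so the compiler cannot see them.
        byte[] base = new byte[4096];
        Arrays.fill(base, (byte) 0x5a);
        byte[] same = base.clone();
        byte[] different = base.clone();
        for (int i = 0; i < different.length; i++) different[i] ^= (byte) 0xff;

        int n = 200_000;
        long naiveSame = timeBatch(TimingSelfTest::naiveEquals, base, same, n);
        long naiveDiff = timeBatch(TimingSelfTest::naiveEquals, base, different, n);
        long constSame = timeBatch(MessageDigest::isEqual, base, same, n);
        long constDiff = timeBatch(MessageDigest::isEqual, base, different, n);

        System.out.printf("naive:    same=%dns diff=%dns%n", naiveSame, naiveDiff);
        System.out.printf("constant: same=%dns diff=%dns%n", constSame, constDiff);

        // Arbitrary 10% tolerance: the two constant-time timings should be
        // close to each other; if they diverge, warn that the constant-time
        // comparison may be broken (e.g. optimised away).
        double ratio = (double) Math.max(constSame, constDiff)
                     / Math.min(constSame, constDiff);
        if (ratio > 1.10) {
            System.err.println("WARNING: constant-time compare timings diverge; "
                    + "the constant-time comparison is likely broken.");
        }
    }
}
```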