I'm trying to implement an algorithm for symbol timing recovery. These algorithms constantly adjust the sampling phase by a small amount: they don't change the sampling rate, they shift the sampling instants in time. For example, if we first sampled at times 1, 3, and 5, we would next sample with a shift of 1, at times 2, 4, and 6. So the rate stays the same, but the sampling has a shift, or phase. Is there a way to do this with audio data in Java?
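
To make clear what I mean by "shifting the samples in time", here is a rough sketch: resample the buffer at instants k*step + phase, where phase is the adjustable offset. I'm using plain linear interpolation only to keep the example short, and the class/method names are just placeholders, not an existing API.

```java
public final class PhaseShiftedSampler {

    /**
     * Picks samples from {@code input} at positions k*step + phase.
     * {@code step} is measured in input samples (the rate stays fixed);
     * {@code phase} may be fractional (the timing shift).
     */
    public static float[] sampleWithPhase(float[] input, double step, double phase) {
        int count = Math.max(0, (int) Math.floor((input.length - 1 - phase) / step) + 1);
        float[] out = new float[count];
        for (int k = 0; k < count; k++) {
            double t = phase + k * step;
            int i = (int) Math.floor(t);
            double frac = t - i;
            if (i + 1 >= input.length) {
                // Last point has no right neighbour; just take it as-is.
                out[k] = input[i];
            } else {
                // Linear interpolation between the two neighbouring samples.
                out[k] = (float) ((1.0 - frac) * input[i] + frac * input[i + 1]);
            }
        }
        return out;
    }
}
```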
An easy implementation would be to sample at a much higher rate than necessary and then use only every 5th sample or so (see the sketch below), but this does not yield the best results: the maximum usable sampling rate drops significantly, and I need a fairly high rate (>40 kHz). I'll test this method, but maybe there is a better one?
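
This is roughly the "easy" variant I have in mind: oversample by some factor (e.g. 5), then keep only every 5th sample, starting at an integer offset that the timing-recovery loop can nudge. Again only a sketch with made-up names:

```java
public final class OversampleDecimator {

    /**
     * Keeps every {@code factor}-th sample of {@code oversampled},
     * starting at {@code offset}; offset plays the role of the sampling phase.
     */
    public static float[] decimateWithOffset(float[] oversampled, int factor, int offset) {
        int count = Math.max(0, (oversampled.length - offset + factor - 1) / factor);
        float[] out = new float[count];
        for (int k = 0; k < count; k++) {
            // Phase resolution is limited to one sample at the oversampled rate.
            out[k] = oversampled[offset + k * factor];
        }
        return out;
    }
}
```

The obvious drawback, as mentioned above, is that the phase can only be adjusted in steps of one oversampled sample, and the oversampling eats into the rate I can actually process.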