I am implementing real-time linear interpolation of audio data stored in interleaved audio buffers. The audio files can be single- or multichannel. In the case of single-channel files, I interpolate as follows:
f_dex = offset + ((position / oldlength) * (newlength * b_channelcount)); // calculate fractional read index
i_dex = trunc(f_dex); // get truncated index
fraction = f_dex - i_dex; // calculate fraction value for interpolation
b_read = (b_sample[i_dex] + fraction * (b_sample[i_dex + b_channelcount] - b_sample[i_dex])); // interpolate between neighbouring samples
outsample_left += b_read;
outsample_right += b_read;
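Condensed into a self-contained helper (with hypothetical names: buf is the sample array, nsamples its length in samples; this assumes b_channelcount == 1), the mono read is essentially:

#include <math.h>

// Linearly interpolate a mono buffer at fractional position f_dex.
static double lerp_mono(const float *buf, long nsamples, double f_dex)
{
    long   i_dex    = (long)trunc(f_dex);     // truncated index
    double fraction = f_dex - (double)i_dex;  // fraction for interpolation

    if (i_dex < 0 || i_dex + 1 >= nsamples)   // stay inside the buffer
        return 0.0;

    return buf[i_dex] + fraction * (buf[i_dex + 1] - buf[i_dex]);
}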
This sounds wonderful and I'm not having any issues. However, when I want to read multichannel files, I have to correct the calculated read position so that it lands on the first sample of the corresponding frame, like so:
f_dex = offset + ((position / oldlength) * (newlength * b_channelcount)); // calculate fractional read index
if ((long)trunc(f_dex) % 2) { // odd index: step back to the frame's first (left) sample; stereo, so b_channelcount == 2
    f_dex -= 1.0;
}
i_dex = trunc(f_dex); // get truncated index
fraction = f_dex - i_dex; // calculate fraction value for interpolation
outsample_left += (b_sample[i_dex] + fraction * (b_sample[i_dex + b_channelcount] - b_sample[i_dex])) * w_read;
outsample_right += (b_sample[i_dex + 1] + fraction * (b_sample[(i_dex + 1) + b_channelcount] - b_sample[i_dex + 1])) * w_read;
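Condensed the same way (again with hypothetical names: buf is the interleaved buffer, nframes the number of stereo frames; this assumes b_channelcount == 2 and that w_read is a read weight/gain), the stereo path is essentially:

#include <math.h>

// Linearly interpolate an interleaved stereo buffer at fractional position f_dex,
// accumulating the weighted result into *out_l and *out_r.
static void lerp_stereo(const float *buf, long nframes, double f_dex, double w_read,
                        double *out_l, double *out_r)
{
    // Snap the read position back to the frame's left (first) sample.
    if ((long)trunc(f_dex) % 2)
        f_dex -= 1.0;

    long   i_dex    = (long)trunc(f_dex);       // index of the frame's left sample
    double fraction = f_dex - (double)i_dex;    // fraction for interpolation

    if (i_dex < 0 || i_dex + 3 >= nframes * 2)  // the next frame must also exist
        return;

    *out_l += (buf[i_dex]     + fraction * (buf[i_dex + 2] - buf[i_dex]))     * w_read;
    *out_r += (buf[i_dex + 1] + fraction * (buf[i_dex + 3] - buf[i_dex + 1])) * w_read;
}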
Now this introduces some digital noise, and I can't really explain why. Is there any other/better way to apply real-time linear interpolation to interleaved stereo files?