I'm trying to implement the simple delay/reverb effect described in this post https://stackoverflow.com/a/5319085/1562784 and I have a problem. On Windows, where I record 16-bit/16 kHz samples and get 8000 samples per recording callback, it works fine. But on Linux I get much smaller chunks from the sound card, around 150 samples per call. Because of that, I modified the delay/reverb code to buffer incoming samples:
#define REVERB_BUFFER_LEN 8000

static void reverb(int16_t* Buffer, int N)
{
    int i;
    float decay = 0.5f;
    static int16_t sampleBuffer[REVERB_BUFFER_LEN] = {0};

    //Make room at the end of the buffer to append new samples
    for (i = 0; i < REVERB_BUFFER_LEN - N; i++)
        sampleBuffer[i] = sampleBuffer[i + N];

    //Copy the new chunk of audio samples to the end of the buffer
    for (i = 0; i < N; i++)
        sampleBuffer[REVERB_BUFFER_LEN - N + i] = Buffer[i];

    //Perform the effect
    for (i = 0; i < REVERB_BUFFER_LEN - 1600; i++)
    {
        sampleBuffer[i + 1600] += (int16_t)((float)sampleBuffer[i] * decay);
    }

    //Copy the output samples back out
    for (i = 0; i < N; i++)
        Buffer[i] = sampleBuffer[REVERB_BUFFER_LEN - N + i];
}
This results in white noise on the output, so clearly I'm doing something wrong. On Linux I record at 16-bit/16 kHz, the same as on Windows, and I'm running Linux in VMware.
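In case it helps to reproduce this without the audio stack, here is a minimal test sketch that pushes an impulse through the function in Linux-sized chunks (the 150-sample chunk size and the impulse input are just assumptions for testing):

#include <stdint.h>
#include <stdio.h>

// reverb() as defined above

int main(void)
{
    // One second of 16 kHz audio: a single impulse, then silence,
    // processed in Linux-sized chunks of 150 samples
    enum { TOTAL = 16000, CHUNK = 150 };
    static int16_t signal[TOTAL] = {0};
    signal[0] = 16000; // impulse

    for (int off = 0; off + CHUNK <= TOTAL; off += CHUNK)
        reverb(signal + off, CHUNK);

    // With a 1600-sample delay and 0.5 decay, clean echoes should
    // appear at samples 1600, 3200, 4800, ... at halving amplitude
    for (int i = 0; i < TOTAL; i += 1600)
        printf("sample %5d: %d\n", i, signal[i]);

    return 0;
}

A correct implementation should print 16000, 8000, 4000, and so on.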
Thank you!
Update:
As indicated in the answer, I was 'reverbing' old samples over and over again on every call. A simple 'if', which applies the delayed signal only to the newly appended samples, solved the problem:
for (i = 0; i < REVERB_BUFFER_LEN - 1600; i++)
{
    if ((i + 1600) >= REVERB_BUFFER_LEN - N)
        sampleBuffer[i + 1600] += (int16_t)((float)sampleBuffer[i] * decay);
}