
I'm working on a DSP project on Android that requires low-latency audio I/O, which is why I'm using the Oboe library. The LiveEffect example demonstrates synchronous recording and playback. However, for acoustic feedback neutralization I need the other way around: first generate a white-noise signal through the built-in speaker, then record it with the mic. I tried to modify the LiveEffect example following this previously asked question, i.e. setting the recording stream as the master (callback) stream and using the non-blocking write method for the playback stream. But I get the following error when I run my code on a Pixel XL (Android 9.0):

D/AudioStreamInternalCapture_Client: processDataNow() wait for valid timestamps
D/AudioStreamInternalCapture_Client: advanceClientToMatchServerPosition() readN = 0, writeN = 384, offset = -384

    --------- beginning of crash
A/libc: Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x5800003f666c66 in tid 2852 (AAudio_1), pid 2796 (ac.oiinitialize) 

Here is my callback:

oboe::DataCallbackResult
AudioEngine::onAudioReady(oboe::AudioStream *oboeStream, void *audioData, int32_t numFrames) {

    assert(oboeStream == mRecordingStream);
    int32_t framesToWrite = mPlayStream->getFramesPerBurst();
    oscillator_->whiteNoise(framesToWrite); // write white noise into buffer;

    oboe::ResultWithValue<int32_t> result = mPlayStream->write(oscillator_->write(), framesToWrite, 0);
    // oscillator_->write() returns const void* buffer;
    if (result != oboe::Result::OK) {
        LOGE("output stream write error: %s", oboe::convertToText(result.error()));
        return oboe::DataCallbackResult::Stop;
    }

    // add Adaptive Feedback Neutralization Algorithm here....

    return oboe::DataCallbackResult::Continue;
}

Is my approach correct for generating a signal and then capturing it through a mic? If so, can anyone help me with this error? Thank you in advance.

Bek

1 Answer


However, for acoustic feedback neutralization, I need the other way around, that is to generate White Noise signal through a built-in speaker first, then record it using a mic

You can still do this using an output stream callback and a non-blocking read on the input stream. This is the more common (and tested) way of doing synchronous I/O. A Larsen effect will work fine this way.
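Here is a minimal sketch of that pattern, assuming mono float streams, an mInputBuffer member pre-allocated to hold at least one burst of mic data, and the oscillator_ interface from your question; it is not the exact LiveEffect code, just the shape of it:

oboe::DataCallbackResult
AudioEngine::onAudioReady(oboe::AudioStream *oboeStream, void *audioData, int32_t numFrames) {

    assert(oboeStream == mPlayStream); // the output stream now drives the callback

    // Generate white noise straight into the output buffer.
    // (memcpy needs <cstring>; assumes mono float format)
    oscillator_->whiteNoise(numFrames);
    memcpy(audioData, oscillator_->write(), numFrames * sizeof(float));

    // Non-blocking read (timeout 0) of whatever the mic has captured so far.
    oboe::ResultWithValue<int32_t> result = mRecordingStream->read(mInputBuffer.get(), numFrames, 0);
    if (result != oboe::Result::OK) {
        LOGE("input stream read error: %s", oboe::convertToText(result.error()));
        return oboe::DataCallbackResult::Stop;
    }

    // result.value() frames of mic data are now in mInputBuffer, ready for the
    // feedback neutralization processing.
    return oboe::DataCallbackResult::Continue;
}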

Your approach should still work; however, I'd stick to the LiveEffect way of setting up the streams since it is known to work.

As for your error: SIGSEGV usually means a null pointer dereference. Are you starting your input stream before the output stream? That could mean you're attempting to write to an output stream which hasn't yet been opened.
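If that's the cause, a hypothetical fix is to open both streams before starting either one, and to start the playback stream first so the recording callback never writes into a stream that doesn't exist yet (openPlaybackStream()/openRecordingStream() are assumed helper names modeled on the LiveEffect sample, not Oboe API):

void AudioEngine::startStreams() {
    // Open both streams up front so neither pointer is null when the callback fires.
    openPlaybackStream();   // fills mPlayStream
    openRecordingStream();  // fills mRecordingStream and installs the callback

    if (mPlayStream == nullptr || mRecordingStream == nullptr) {
        LOGE("Failed to open streams");
        return;
    }

    // Start the output stream first...
    oboe::Result result = mPlayStream->requestStart();
    if (result != oboe::Result::OK) {
        LOGE("Error starting playback stream: %s", oboe::convertToText(result));
        return;
    }

    // ...then the input stream whose callback writes into it.
    result = mRecordingStream->requestStart();
    if (result != oboe::Result::OK) {
        LOGE("Error starting recording stream: %s", oboe::convertToText(result));
    }
}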

donturner
  • Thank you, the error was due to the stream initialization. I modeled the feedback path using an output stream callback and a non-blocking read on the input stream, but it seems there is longer latency. – Bek Jan 16 '19 at 02:00
  • There should be no difference in latency as long as you: 1) set the following properties on the output stream `SharingMode::Exclusive`, `PerformanceMode::LowLatency` and bufferSizeInFrames=1*burstSize and 2) empty the input stream before starting to read from it, like this: https://github.com/google/oboe/blob/master/samples/LiveEffect/src/main/cpp/LiveEffectEngine.cpp#L311 – donturner Jan 16 '19 at 14:28
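A sketch of those two points, with the builder calls taken from the Oboe API and the surrounding member names assumed:

// 1) Low-latency output stream configuration.
oboe::AudioStreamBuilder builder;
builder.setDirection(oboe::Direction::Output)
        ->setSharingMode(oboe::SharingMode::Exclusive)
        ->setPerformanceMode(oboe::PerformanceMode::LowLatency)
        ->setFormat(oboe::AudioFormat::Float)
        ->setCallback(this);

oboe::Result result = builder.openStream(&mPlayStream);
if (result == oboe::Result::OK && mPlayStream != nullptr) {
    // Keep the output buffer at a single burst for minimum latency.
    mPlayStream->setBufferSizeInFrames(mPlayStream->getFramesPerBurst());
}

// 2) Drain stale frames from the input stream before reading from it in the
// callback (same idea as the LiveEffect code linked in the comment; needs <vector>).
std::vector<float> drainBuffer(mRecordingStream->getFramesPerBurst() * mRecordingStream->getChannelCount());
int32_t framesRead = 0;
do {
    auto r = mRecordingStream->read(drainBuffer.data(), mRecordingStream->getFramesPerBurst(), 0);
    if (r != oboe::Result::OK) break;
    framesRead = r.value();
} while (framesRead != 0);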