I have set up an iOS broadcast extension, and audio data is coming in through `processSampleBuffer` as a `CMSampleBuffer`.
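For context, the relevant part of my handler looks roughly like this (simplified sketch; `handleAudio` is just a placeholder for where I want the conversion to happen):

```swift
import ReplayKit
import CoreMedia

class SampleHandler: RPBroadcastSampleHandler {
    override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer,
                                      with sampleBufferType: RPSampleBufferType) {
        switch sampleBufferType {
        case .audioApp:
            // App audio arrives here; inspecting the format description reports
            // 2 channels at 44100 Hz in my case.
            if let desc = CMSampleBufferGetFormatDescription(sampleBuffer),
               let asbd = CMAudioFormatDescriptionGetStreamBasicDescription(desc)?.pointee {
                print("rate: \(asbd.mSampleRate), channels: \(asbd.mChannelsPerFrame)")
            }
            handleAudio(sampleBuffer) // placeholder: convert + send over the WebSocket
        default:
            break
        }
    }

    private func handleAudio(_ sampleBuffer: CMSampleBuffer) {
        // conversion / sending happens here
    }
}
```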
I am sending this data over a WebSocket connection to a Pion WebRTC sink, configured with `MimeType: webrtc.MimeTypeOpus`.
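On the iOS side the sending itself is straightforward; roughly like this (sketch only, `AudioSocket` is a placeholder name, and the Pion side consuming the messages is not shown):

```swift
import Foundation

// Minimal sketch: each encoded chunk is pushed to the server as one binary
// WebSocket message.
final class AudioSocket {
    private let task: URLSessionWebSocketTask

    init(url: URL) {
        task = URLSession(configuration: .default).webSocketTask(with: url)
        task.resume()
    }

    func send(_ payload: Data) {
        task.send(.data(payload)) { error in
            if let error = error {
                print("websocket send failed: \(error)")
            }
        }
    }
}
```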
From the looks of it, the generated audio (`RPSampleBufferType.audioApp`) has 2 channels at a 44100 Hz sample rate.
How do I convert this audio data so that it is suitable for playback through a WebRTC Opus audio stream?
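My current thinking is that Opus wants 48 kHz input, so I would first resample the incoming buffers with `AVAudioConverter` along these lines (untested sketch; `convertTo48k` is just an illustrative name) and then run the result through an Opus encoder before sending, but I am not sure this is the right approach:

```swift
import AVFoundation
import CoreMedia

// Untested sketch: copy the PCM out of the CMSampleBuffer, then resample
// 44100 Hz stereo to 48000 Hz mono signed 16-bit PCM, which an Opus encoder
// can consume.
func convertTo48k(_ sampleBuffer: CMSampleBuffer) -> AVAudioPCMBuffer? {
    guard let desc = CMSampleBufferGetFormatDescription(sampleBuffer),
          let asbdPointer = CMAudioFormatDescriptionGetStreamBasicDescription(desc),
          let srcFormat = AVAudioFormat(streamDescription: asbdPointer)
    else { return nil }

    let frames = CMSampleBufferGetNumSamples(sampleBuffer)
    guard let srcBuffer = AVAudioPCMBuffer(pcmFormat: srcFormat,
                                           frameCapacity: AVAudioFrameCount(frames))
    else { return nil }
    srcBuffer.frameLength = AVAudioFrameCount(frames)

    // Copy the samples from the CMSampleBuffer into the AVAudioPCMBuffer.
    CMSampleBufferCopyPCMDataIntoAudioBufferList(sampleBuffer,
                                                 at: 0,
                                                 frameCount: Int32(frames),
                                                 into: srcBuffer.mutableAudioBufferList)

    // Destination: 48 kHz mono signed 16-bit PCM.
    guard let dstFormat = AVAudioFormat(commonFormat: .pcmFormatInt16,
                                        sampleRate: 48000,
                                        channels: 1,
                                        interleaved: true),
          let converter = AVAudioConverter(from: srcFormat, to: dstFormat)
    else { return nil }

    let capacity = AVAudioFrameCount(Double(frames) * dstFormat.sampleRate / srcFormat.sampleRate) + 1
    guard let dstBuffer = AVAudioPCMBuffer(pcmFormat: dstFormat, frameCapacity: capacity)
    else { return nil }

    var consumed = false
    var conversionError: NSError?
    let status = converter.convert(to: dstBuffer, error: &conversionError) { _, outStatus in
        if consumed {
            outStatus.pointee = .noDataNow
            return nil
        }
        consumed = true
        outStatus.pointee = .haveData
        return srcBuffer
    }
    return (status == .haveData && conversionError == nil) ? dstBuffer : nil
}
```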
I had some luck converting it to a PCMA audio stream using `AudioConverterNew` with the following destination format, but the audio mostly sounds garbled:

```swift
AudioStreamBasicDescription(
    mSampleRate: 8000.00,
    mFormatID: kAudioFormatLinearPCM,
    mFormatFlags: 0,
    mBytesPerPacket: 1,
    mFramesPerPacket: 1,
    mBytesPerFrame: 1,
    mChannelsPerFrame: 1,
    mBitsPerChannel: 8,
    mReserved: 0
)
```
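My guess (unverified) is that the garbling comes from that format description: for an actual PCMA (A-law) destination I believe the format ID should be `kAudioFormatALaw` rather than `kAudioFormatLinearPCM`, and a linear-PCM description normally needs explicit flags such as `kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked`. Something like this is what I would try next for the PCMA path:

```swift
import AudioToolbox

// Unverified guess: an A-law (PCMA) destination format for AudioConverterNew.
var pcmaFormat = AudioStreamBasicDescription(
    mSampleRate: 8000.0,
    mFormatID: kAudioFormatALaw,   // PCMA is A-law, not linear PCM
    mFormatFlags: 0,
    mBytesPerPacket: 1,
    mFramesPerPacket: 1,
    mBytesPerFrame: 1,
    mChannelsPerFrame: 1,
    mBitsPerChannel: 8,
    mReserved: 0
)
```

That said, since the Pion track is configured for Opus, the real question remains how to get from the 44.1 kHz stereo buffers to an Opus payload.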