
I have this EZAudio delegate method in my Swift project for capturing audio from the microphone:

func microphone(microphone: EZMicrophone!, hasAudioReceived bufferList: UnsafeMutablePointer<UnsafeMutablePointer<Float>>, withBufferSize bufferSize: UInt32, withNumberOfChannels numberOfChannels: UInt32) {

}

But what I really need is to have that "bufferList" parameter coming in as an AudioBufferList type, in order to send those audio packets through a socket, just like I did in Objective C:

// Objective-C pseudocode:
for (int i = 0; i < bufferList.mNumberBuffers; ++i) {
    AudioBuffer buffer = bufferList.mBuffers[i];
    audio = ["audio": NSData(bytes: buffer.mData, length: Int(buffer.mDataByteSize))];
    socket.emit("message", audio);
}

How can I convert that UnsafeMutablePointer<UnsafeMutablePointer<Float>> parameter into an AudioBufferList?

Josh
  • Do you have any new update on this? I am trying to achieve the same thing; I need to send NSData through a socket. I am using CocoaAsyncSocket and EZAudio. – Muhammad Faizan Khatri Apr 21 '17 at 08:00
  • Hi @MuhammadFaizanKhatri, I got this working some time ago; the code is posted below as an answer. Upvote if it works! – Josh Apr 25 '17 at 07:19

2 Answers


I was able to stream audio from the microphone into a socket like this:

func microphone(microphone: EZMicrophone!, hasBufferList bufferList: UnsafeMutablePointer<AudioBufferList>, withBufferSize bufferSize: UInt32, withNumberOfChannels numberOfChannels: UInt32) {
        let blist: AudioBufferList = bufferList[0]
        let buffer: AudioBuffer = blist.mBuffers
        let audio = ["audio": NSData(bytes: buffer.mData, length: Int(buffer.mDataByteSize))]
        socket.emit("message", audio) // socket is a Socket.IO-style client; emit is not a Foundation API
    }
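
Note that the mBuffers field of AudioBufferList is declared as a single AudioBuffer, so the code above only reads the first buffer, which is fine for mono input. If you need every channel, here is a sketch using UnsafeMutableAudioBufferListPointer from the CoreAudio Swift overlay (Swift 3+ syntax; the same socket is assumed):

    // Sketch: iterate every buffer in the list, not just the first one.
    // UnsafeMutableAudioBufferListPointer wraps the variable-length
    // AudioBufferList so it can be walked as a Swift collection.
    let buffers = UnsafeMutableAudioBufferListPointer(bufferList)
    for buffer in buffers {
        guard let mData = buffer.mData else { continue }
        let audio = ["audio": NSData(bytes: mData, length: Int(buffer.mDataByteSize))]
        socket.emit("message", audio)
    }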

This general AudioStreamBasicDescription setup worked for me; you might have to tweak it for your own needs or omit some parts, like the waveform animation:

func initializeStreaming() {
        var streamDescription: AudioStreamBasicDescription = AudioStreamBasicDescription()
        streamDescription.mSampleRate       = 16000.0
        streamDescription.mFormatID         = kAudioFormatLinearPCM
        streamDescription.mFramesPerPacket  = 1
        streamDescription.mChannelsPerFrame = 1
        streamDescription.mBytesPerFrame    = streamDescription.mChannelsPerFrame * 2
        streamDescription.mBytesPerPacket   = streamDescription.mFramesPerPacket * streamDescription.mBytesPerFrame
        streamDescription.mBitsPerChannel   = 16
        streamDescription.mFormatFlags      = kAudioFormatFlagIsSignedInteger
        microphone = EZMicrophone(microphoneDelegate: self, withAudioStreamBasicDescription: streamDescription, startsImmediately: false)
        waveview?.plotType = EZPlotType.Buffer
        waveview?.shouldFill = false
        waveview?.shouldMirror = false
    }
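
With that description, a frame is a single 16-bit mono sample, so mBytesPerFrame and mBytesPerPacket both work out to 2. Since startsImmediately is false, you still have to start fetching yourself; a minimal sketch, assuming an iOS app where record permission has not yet been granted (startFetchingAudio is the standard EZMicrophone call):

    import AVFoundation

    // Sketch: ask for microphone access, then start the EZMicrophone
    // configured in initializeStreaming(). The delegate callback above
    // begins firing once fetching starts.
    AVAudioSession.sharedInstance().requestRecordPermission { granted in
        guard granted else { return }
        DispatchQueue.main.async {
            self.microphone.startFetchingAudio()
        }
    }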

It was complicated to get this thing running, good luck!

Josh
  • I tried the kAudioFormatLinearPCM setup and it works fine, but I need to convert the format to µLaw, and that configuration is giving me a tough time. Do you have an initialization config for it? – Muhammad Faizan Khatri Apr 25 '17 at 07:36
  • @MuhammadFaizanKhatri Have you tried using kAudioFormatULaw instead (see the sketch after these comments)? In theory there are many audio formats you can use there, but in reality I have found that most format combinations crash or don't work properly on iOS. I used this configuration because all the others I tried crashed. – Josh Apr 25 '17 at 09:13
  • @MuhammadFaizanKhatri Some people are trying to use µLaw here: http://stackoverflow.com/questions/8426967/ios-audio-unit-playback-with-constant-noise – Josh Apr 25 '17 at 09:14
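
For reference, a hedged sketch of what a µLaw stream description usually looks like (not from this thread and untested here; µLaw is conventionally 8 kHz with 8 bits per channel, so one byte per frame and per packet):

    // Sketch: a conventional single-channel µLaw description.
    var ulawDescription = AudioStreamBasicDescription()
    ulawDescription.mSampleRate       = 8000.0
    ulawDescription.mFormatID         = kAudioFormatULaw
    ulawDescription.mFramesPerPacket  = 1
    ulawDescription.mChannelsPerFrame = 1
    ulawDescription.mBitsPerChannel   = 8
    ulawDescription.mBytesPerFrame    = 1
    ulawDescription.mBytesPerPacket   = 1
    ulawDescription.mFormatFlags      = 0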

I believe you would create an AudioBufferList pointer and read its memory property:

let audioBufferList = UnsafePointer<AudioBufferList>(bufferList).memory // Swift 2: reinterpret the pointer, then dereference
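
That is Swift 2 syntax; in Swift 3 and later, .memory was renamed .pointee and the pointer-cast initializer was removed, so the equivalent is something like this sketch:

    // Swift 3+ sketch of the same cast: view the pointer as raw memory,
    // bind it to AudioBufferList, and dereference it.
    let audioBufferList = UnsafeRawPointer(bufferList)
        .assumingMemoryBound(to: AudioBufferList.self)
        .pointee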
Casey Fleser