When I play LPCM data decoded by ffmpeg through AudioQueue, the elapsed time returned by AudioQueueGetCurrentTime
exceeds the duration of the media. However, when I decode the same media with the AVFoundation framework, the elapsed time equals the duration of the media. Likewise, when I read the media with ffmpeg without decoding it and send the compressed data to AudioQueue, the elapsed time also equals the duration. The AudioStreamBasicDescription is set as follows:
asbd.mSampleRate = 44100;
asbd.mFormatID = kAudioFormatLinearPCM;
asbd.mFormatFlags = kAudioFormatFlagsCanonical;
asbd.mBytesPerPacket = 4;
asbd.mFramesPerPacket = 1;
asbd.mBytesPerFrame = 4;
asbd.mChannelsPerFrame = 2;
asbd.mBitsPerChannel = 16;
asbd.mReserved = 0;
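
For reference, the decoded data reaches the queue roughly like this (a simplified sketch, not my exact code: the buffer pool, output callback, and error handling are omitted, and FillBuffer is just an illustrative helper name):

#include <AudioToolbox/AudioToolbox.h>
#include <string.h>

// Copy one chunk of decoded, interleaved 16-bit PCM from ffmpeg into an
// AudioQueue buffer and enqueue it. pcmBytes is the number of valid bytes
// produced by the decoder for this chunk (assumed to fit in the buffer).
static void FillBuffer(AudioQueueRef queue, AudioQueueBufferRef buffer,
                       const void *pcm, UInt32 pcmBytes) {
    memcpy(buffer->mAudioData, pcm, pcmBytes);
    buffer->mAudioDataByteSize = pcmBytes;
    // LPCM needs no packet descriptions, so the last two arguments are 0/NULL.
    AudioQueueEnqueueBuffer(queue, buffer, 0, NULL);
}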
When playing the data decoded by AVFoundation, the AudioStreamBasicDescription settings are the same as above. In my tests I found that the AudioTimeStamp.mSampleTime returned by AudioQueueGetCurrentTime
differs between the ffmpeg and AVFoundation cases: the ffmpeg value is greater than the AVFoundation one. How does this happen, and how can I fix it?
Thanks!