
I'm trying to encode an audio stream to a file. I'm receiving audio buffers and using avcodec_fill_audio_frame to create an AVFrame, which I then send to avcodec_encode_audio2 (with some other things in between; I'm using ffmpeg's muxing.c as an example).

The problem is that each audio buffer I receive contains 480 samples, while the codec I'm using (MP2) has a frame_size of 1152. The result is that the output audio file sounds "chopped": it's as if every audio frame ends with some silent samples.

How can I fix this? Thanks!!

1 Answer


You must fill the AVFrame with frame_size samples (1152 in your case) before you pass the frame to avcodec_encode_audio2.

It sounds like your frame is too small. How are you setting it up? Can you show some code?

Rhythmic Fistman
  • I'm receiving it from javascript WebRTC and using Google's Native Client to encode it. In the C++ side, I use pp::MediaStreamAudioTrack ([link](https://developer.chrome.com/native-client/pepper_dev/cpp/classpp_1_1_media_stream_audio_track)) to handle the stream. Then I can get the buffer and configure it. These are the fields I can configure: PP_MEDIASTREAMAUDIOTRACK_ATTRIB_BUFFERS, PP_MEDIASTREAMAUDIOTRACK_ATTRIB_SAMPLE_RATE, PP_MEDIASTREAMAUDIOTRACK_ATTRIB_SAMPLE_SIZE, PP_MEDIASTREAMAUDIOTRACK_ATTRIB_CHANNELS, PP_MEDIASTREAMAUDIOTRACK_ATTRIB_DURATION. Do you need more information? – Victor Canezin de Oliveira Nov 24 '15 at 12:36
  • Should I create an intermediary buffer to fill it until it's 1152 and, only then, send it to the encoder? If so, how can I create and fill this intermediary buffer? Thanks :) – Victor Canezin de Oliveira Nov 24 '15 at 15:11