I'm decoding whatever codec the camera provides and then always re-encoding to H264, more specifically the QSV encoder (h264_qsv) if it is supported. I currently have two cameras to test with: one delivers H264 and the other rawvideo. The problem comes with the rawvideo one: its pixel format is BGR24 and I scale/convert it to NV12. I'll simplify the code, because it follows the usual examples.
avcodec_send_packet()
// in a while loop:
avcodec_receive_frame()
// if the result is not EAGAIN, convert BGR24 to NV12
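Spelled out, the decode loop is roughly this (a simplified sketch; _pDecoderContext, pPacket, ConvertToNv12 and EncodeAndWrite are placeholder names for my own fields and methods):

int ret = ffmpeg.avcodec_send_packet(_pDecoderContext, pPacket);
while (ret >= 0)
{
    ret = ffmpeg.avcodec_receive_frame(_pDecoderContext, sourceFrame);
    if (ret == ffmpeg.AVERROR(ffmpeg.EAGAIN) || ret == ffmpeg.AVERROR_EOF)
        break;                                       // decoder needs more input or is drained
    AVFrame* nv12Frame = ConvertToNv12(sourceFrame); // BGR24 -> NV12, shown below
    EncodeAndWrite(nv12Frame);                       // h264_qsv encode + write to file
    ffmpeg.av_frame_free(&nv12Frame);
}

The conversion inside ConvertToNv12 is: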
if (_pConvertContext == null)
{
    // lazily create the BGR24 -> NV12 conversion context
    _pConvertContext = CreateContext(sourcePixFmt, targetPixFmt);
}
if (_convertedFrameBufferPtr == IntPtr.Zero)
{
    // allocate one reusable destination buffer and point _dstData/_dstLinesize into it
    int buffSize = ffmpeg.av_image_get_buffer_size(targetPixFmt, sourceFrame->width, sourceFrame->height, 1);
    _convertedFrameBufferPtr = Marshal.AllocHGlobal(buffSize);
    ffmpeg.av_image_fill_arrays(ref _dstData, ref _dstLinesize, (byte*)_convertedFrameBufferPtr, targetPixFmt, sourceFrame->width, sourceFrame->height, 1);
}
return ScaleImage(_pConvertContext, sourceFrame, targetPixFmt, _dstData, _dstLinesize);
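CreateContext just builds the SwsContext; a minimal sketch of what it amounts to (SWS_BILINEAR and the _width/_height fields holding the camera resolution are placeholders, not necessarily my exact code):

private SwsContext* CreateContext(AVPixelFormat sourcePixFmt, AVPixelFormat targetPixFmt)
{
    // _width/_height: placeholder fields for the camera resolution
    SwsContext* ctx = ffmpeg.sws_getContext(
        _width, _height, sourcePixFmt,   // source: BGR24 at the camera resolution
        _width, _height, targetPixFmt,   // destination: NV12 at the same resolution
        ffmpeg.SWS_BILINEAR, null, null, null);
    if (ctx == null)
        throw new ApplicationException("Could not initialize the conversion context.");
    return ctx;
}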
And the ScaleImage method:
// convert the BGR24 source into the preallocated NV12 buffer
ffmpeg.sws_scale(ctx, sourceFrame->data, sourceFrame->linesize, 0, sourceFrame->height, dstData, dstLinesize);

// wrap the converted data in a new AVFrame; data/linesize just point into
// _convertedFrameBufferPtr, the frame itself does not own that memory
AVFrame* f = ffmpeg.av_frame_alloc();
var data = new byte_ptrArray8();
data.UpdateFrom(dstData);
var linesize = new int_array8();
linesize.UpdateFrom(dstLinesize);
f->data = data;
f->linesize = linesize;
f->width = sourceFrame->width;
f->height = sourceFrame->height;
f->format = (int)targetPixelFormat;
return f;
After that I send the scaled frame to the encoder, receive the encoded packets and write them to the output file, and then call av_frame_free(&frame) on the frame returned from ScaleImage. But when I set a breakpoint I can see that the frame address stays the same even though av_frame_alloc() is called and the frame is freed on every iteration, and I think this is the reason for the memory leak. If I do a deep clone of f before returning it (roughly as sketched at the end), everything is fine. Why does that happen, while the same logic works fine with the other camera?
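For reference, the deep clone I mean is essentially this (a simplified sketch, DeepClone being just a placeholder name): the cloned frame owns its own buffers and the pixel data is copied into them.

private static AVFrame* DeepClone(AVFrame* f)
{
    AVFrame* clone = ffmpeg.av_frame_alloc();
    clone->width = f->width;
    clone->height = f->height;
    clone->format = f->format;
    // av_frame_get_buffer allocates buffers owned by the frame itself,
    // so av_frame_free(&clone) later releases them too
    if (ffmpeg.av_frame_get_buffer(clone, 0) < 0)
        throw new ApplicationException("Could not allocate frame buffers.");
    ffmpeg.av_frame_copy(clone, f);        // copy the pixel planes
    ffmpeg.av_frame_copy_props(clone, f);  // copy pts and other metadata
    return clone;
}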