We are reading frames from FFmpeg and drawing them directly onto a surface window in native Android. If we scale to the exact size we get from the camera and only convert YUV420P to RGBA, sws_scale takes ~1 ms per frame (the destination buffers are set up with av_image_fill_arrays). But if we scale to the surface size, the same frame takes 25 to 30 ms to scale.
Low latency: [~1 ms in sws_scale]
swsContext = sws_getContext(videoCodecContext->width,
                            videoCodecContext->height,
                            videoCodecContext->pix_fmt,
                            videoCodecContext->width,   // same output size:
                            videoCodecContext->height,  // pixel format conversion only
                            AV_PIX_FMT_RGB0,
                            SWS_FAST_BILINEAR, NULL, NULL, NULL);
// Per frame (formatContext, packet, frameFinished, buffer, window and
// windowBuffer are declared elsewhere in our code):
av_image_fill_arrays(pictureFrame->data, pictureFrame->linesize,
                     buffer, AV_PIX_FMT_RGB0,
                     videoCodecContext->width, videoCodecContext->height, 1);
av_read_frame(formatContext, &packet);
avcodec_decode_video2(videoCodecContext, videoFrame, &frameFinished, &packet);
sws_scale(swsContext,
          (const uint8_t *const *) videoFrame->data,
          videoFrame->linesize,
          0,
          videoCodecContext->height,
          pictureFrame->data,
          pictureFrame->linesize);
ANativeWindow_lock(window, &windowBuffer, NULL);
// write all pictureFrame bytes into windowBuffer.bits (see the sketch below)
ANativeWindow_unlockAndPost(window);
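The "write all buffer bytes" step has to copy row by row, because the window stride is usually wider than the frame width. A minimal sketch of that copy, reusing the names above (error handling omitted):

#include <stdint.h>
#include <string.h>                  // memcpy
#include <android/native_window.h>

ANativeWindow_Buffer windowBuffer;
if (ANativeWindow_lock(window, &windowBuffer, NULL) == 0) {
    uint8_t *dst = (uint8_t *) windowBuffer.bits;
    const uint8_t *src = pictureFrame->data[0];
    int rowBytes = videoCodecContext->width * 4;    // RGB0 = 4 bytes per pixel
    for (int y = 0; y < videoCodecContext->height; y++) {
        // windowBuffer.stride is in pixels, linesize[0] is in bytes
        memcpy(dst + (size_t) y * windowBuffer.stride * 4,
               src + (size_t) y * pictureFrame->linesize[0],
               rowBytes);
    }
    ANativeWindow_unlockAndPost(window);
}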
High latency: [~30 ms in sws_scale]
[videoCodecContext width: 848, height: 608]
swsContext = sws_getContext(videoCodecContext->width,   // 848
                            videoCodecContext->height,  // 608
                            videoCodecContext->pix_fmt,
                            1080,                       // surface width: real scaling now
                            608,
                            AV_PIX_FMT_RGB0,
                            SWS_FAST_BILINEAR, NULL, NULL, NULL);
NOTE: we are getting YUV420P as the source pixel format.
Whenever the destination width and height in the context differ from the videoCodecContext dimensions, sws_scale takes more than 30 ms per frame, so the video lags.
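For comparison, scaling can also be left to the window itself: if the buffer geometry is set to the decoded size, the compositor scales it to the surface in hardware. A minimal sketch, assuming a window obtained via ANativeWindow_fromSurface (env and surface come from the JNI entry point):

#include <android/native_window.h>
#include <android/native_window_jni.h>   // ANativeWindow_fromSurface

// Producer buffers stay at the decoded 848x608; the window scales them
// to the on-screen surface size when compositing.
ANativeWindow *window = ANativeWindow_fromSurface(env, surface);
ANativeWindow_setBuffersGeometry(window,
                                 videoCodecContext->width,   // 848
                                 videoCodecContext->height,  // 608
                                 WINDOW_FORMAT_RGBX_8888);   // matches AV_PIX_FMT_RGB0

With this, the ~1 ms format-only sws_scale path would be enough and no CPU scaling pass would be needed.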
TRY 1: Pass the buffer from JNI to Java and create a Bitmap there to scale it later, but Bitmap.createBitmap alone takes ~500 ms, which is not good enough.
TRY 2: Direct YUV420P to RGB conversion (see the sketch after this list), but it was still no faster than sws_scale.
TRY 3: Write the YUV bytes directly to the window buffer, but it renders without color (if anyone has a solution here, it would be helpful).
TRY 4: libyuv, but still not worth it.
TRY 5: Different pixel formats and flags in swsContext.
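For reference on TRY 2, this is the kind of per-pixel conversion we mean; a minimal sketch of a fixed-point BT.601 YUV420P-to-RGBA pass (illustrative only, not our exact code):

#include <stdint.h>

static inline uint8_t clamp8(int v) {
    if (v < 0) return 0;
    if (v > 255) return 255;
    return (uint8_t) v;
}

// Limited-range BT.601 YUV420P -> RGBA, integer math only.
// rgba must hold width * height * 4 bytes.
void yuv420p_to_rgba(const uint8_t *y, const uint8_t *u, const uint8_t *v,
                     int yStride, int uvStride,
                     uint8_t *rgba, int width, int height) {
    for (int j = 0; j < height; j++) {
        for (int i = 0; i < width; i++) {
            int Y = (y[j * yStride + i] - 16) * 298;
            int U = u[(j / 2) * uvStride + i / 2] - 128;  // chroma is subsampled 2x2
            int V = v[(j / 2) * uvStride + i / 2] - 128;
            uint8_t *p = rgba + (j * width + i) * 4;
            p[0] = clamp8((Y + 409 * V + 128) >> 8);            // R
            p[1] = clamp8((Y - 100 * U - 208 * V + 128) >> 8);  // G
            p[2] = clamp8((Y + 516 * U + 128) >> 8);            // B
            p[3] = 255;                                         // A
        }
    }
}

This is still a full-frame CPU pass, which is why it did not buy us anything over sws_scale.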
Any help would be appreciated.