I'm trying to create a Camera2 `CameraCaptureSession` that is capable of four outputs:

- On-screen preview (`SurfaceView`, up to 1080p)
- Photo capture (`ImageReader`, up to 8k photos)
- Video capture (`MediaRecorder`/`MediaCodec`, up to 4k videos)
- Frame processing (`ImageReader`, up to 4k video frames)
Unfortunately Camera2 does not support attaching all four of those outputs (Surfaces) at the same time, so I'm going to have to make a compromise. The compromise that seemed most logical to me was to combine the two video pipelines into one, so that the Frame Processing output (#4, `ImageReader`) redirects the frames into the Video Capture output (#3, `MediaRecorder`).
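In other words, I only ever attach three Surfaces to the session. Roughly what I have in mind (names like `previewSurface`, `photoReader`, `videoReader` and `cameraHandler` are placeholders for my existing setup):

```kotlin
// Three outputs instead of four: the video ImageReader serves both
// frame processing and (via some redirection) recording.
val outputs = listOf(
    previewSurface,       // SurfaceView's Surface, up to 1080p
    photoReader.surface,  // ImageReader for photo capture, up to 8k
    videoReader.surface   // ImageReader for video frames, up to 4k
)
device.createCaptureSession(outputs, object : CameraCaptureSession.StateCallback() {
    override fun onConfigured(session: CameraCaptureSession) { /* start repeating request */ }
    override fun onConfigureFailed(session: CameraCaptureSession) { /* handle error */ }
}, cameraHandler)
```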
How do I write the `Image`s from the `ImageReader`:

```kotlin
val imageReader = ImageReader.newInstance(4000, 2256, ImageFormat.YUV_420_888, 3)
imageReader.setOnImageAvailableListener({ reader ->
    val image = reader.acquireNextImage() ?: return@setOnImageAvailableListener
    callback.onVideoFrameCaptured(image)
}, queue.handler)

val captureSession = device.createCaptureSession(.., imageReader.surface)
```
..into the `Surface` from the `MediaRecorder`?

```kotlin
val surface = MediaCodec.createPersistentInputSurface()
val recorder = MediaRecorder(context)
..
recorder.setInputSurface(surface)
```
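My rough understanding is that once I have a GL context, I could wrap that persistent input Surface in an EGL window surface and render each frame into it - something like this (untested, and it assumes `eglDisplay`/`eglConfig`/`eglContext` are already initialized):

```kotlin
// Wrap the recorder's persistent input Surface in an EGL window surface.
val eglSurface = EGL14.eglCreateWindowSurface(
    eglDisplay, eglConfig, surface, intArrayOf(EGL14.EGL_NONE), 0
)
EGL14.eglMakeCurrent(eglDisplay, eglSurface, eglSurface, eglContext)

// After drawing the pass-through quad for one frame, stamp it with the
// Image's timestamp (nanoseconds) and push it to the encoder:
EGLExt.eglPresentationTimeANDROID(eglDisplay, eglSurface, image.timestamp)
EGL14.eglSwapBuffers(eglDisplay, eglSurface)
```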
I'm thinking that I might need an OpenGL pipeline here with a pass-through shader - but I don't know how I get from the `ImageReader`'s `Image` to an OpenGL texture, so any help here would be appreciated.
What I tried: I looked into the `HardwareBuffer` APIs, specifically:

```cpp
auto clientBuffer = eglGetNativeClientBufferANDROID(hardwareBuffer);
...
auto image = eglCreateImageKHR(display,
                               EGL_NO_CONTEXT,
                               EGL_NATIVE_BUFFER_ANDROID,
                               clientBuffer,
                               attribs);
...
glEGLImageTargetTexture2DOES(GL_TEXTURE_EXTERNAL_OES, image);
```
And I think this might work, but it requires API level 28. So I still need a solution for API level 23 and above. The `image.getPlanes()` function returns three `ByteBuffer`s with the YUV data, but I'm not sure how to create an OpenGL texture from those.
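The only idea I've come up with for the API 23 path is to upload each plane as a separate single-channel (`GL_LUMINANCE`) texture and do the YUV-to-RGB conversion in the fragment shader myself - something like this (untested, and it ignores the common case where `pixelStride == 2`, i.e. interleaved U/V, which would need repacking first):

```kotlin
// Upload one Image plane as a single-channel texture.
// Assumes pixelStride == 1 and rowStride == width; real YUV_420_888
// buffers often have row padding that must be stripped before upload.
fun uploadPlane(texId: Int, plane: Image.Plane, width: Int, height: Int) {
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texId)
    GLES20.glPixelStorei(GLES20.GL_UNPACK_ALIGNMENT, 1)
    GLES20.glTexImage2D(
        GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE,
        width, height, 0,
        GLES20.GL_LUMINANCE, GLES20.GL_UNSIGNED_BYTE, plane.buffer
    )
}

// Fragment shader side (BT.601 conversion), sampling the three textures:
// vec3 yuv = vec3(texture2D(yTex, uv).r,
//                 texture2D(uTex, uv).r - 0.5,
//                 texture2D(vTex, uv).r - 0.5);
// gl_FragColor = vec4(yuv.x + 1.402 * yuv.z,
//                     yuv.x - 0.344 * yuv.y - 0.714 * yuv.z,
//                     yuv.x + 1.772 * yuv.y, 1.0);
```

But I have no idea whether this is correct, or fast enough for 4k frames - is this the right direction at all?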