I'm trying to understand graphics memory usage/flow in Android, specifically with respect to encoding frames from the camera using `MediaCodec`. In order to do that I'm having to understand a bunch of graphics, OpenGL, and Android terminology/concepts that are unclear to me. I've read the Android graphics architecture material, a bunch of SO questions, and a bunch of source code, but I'm still confused, primarily because it seems that terms have different meanings in different contexts.
I've looked at CameraToMpegTest from fadden's site here. My specific question is how `MediaCodec::createInputSurface()` works in conjunction with `Camera::setPreviewTexture()`. It seems that an OpenGL texture is created and then used to create an Android `SurfaceTexture`, which can then be passed to `setPreviewTexture()`. My specific questions:
- What does calling `setPreviewTexture()` actually do, in terms of which memory buffer the frames from the camera go to?
- From my understanding, an OpenGL texture is a chunk of memory that is accessible by the GPU. On Android this has to be allocated using gralloc with the correct usage flags. The Android description of `SurfaceTexture` mentions that it allows you to "stream images to a given OpenGL texture": https://developer.android.com/reference/android/graphics/SurfaceTexture.html#SurfaceTexture(int). What does a `SurfaceTexture` do on top of an OpenGL texture?
- `MediaCodec::createInputSurface()` returns an Android `Surface`. As I understand it, an Android `Surface` represents the producer side of a buffer queue, so it may be backed by multiple buffers. The API reference mentions that "the Surface must be rendered with a hardware-accelerated API, such as OpenGL ES". How do the frames captured by the camera get from the `SurfaceTexture` to this `Surface` that is input to the encoder? I see that CameraToMpegTest creates an `EGLSurface` from this `Surface` somehow, but not knowing much about EGL I don't get this part.
- Can someone clarify the usage of "render"? I see things such as "render to a surface" and "render to the screen", among other usages that seem to mean different things.
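For concreteness, this is the call sequence I'm asking about, pared down from CameraToMpegTest (my paraphrase, not the actual test code; `eglDisplay`/`eglConfig` stand in for EGL setup I've omitted, and error checking is dropped):

```java
// 1. Create a plain GL texture name and wrap it in a SurfaceTexture,
//    which becomes the camera's preview target.
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);
SurfaceTexture surfaceTexture = new SurfaceTexture(tex[0]);
camera.setPreviewTexture(surfaceTexture);

// 2. The encoder exposes its input as a Surface; wrapping that Surface
//    in an EGLSurface is what lets GL draw calls feed the encoder.
Surface encoderInput = encoder.createInputSurface();
EGLSurface eglSurface = EGL14.eglCreateWindowSurface(
        eglDisplay, eglConfig, encoderInput, new int[] { EGL14.EGL_NONE }, 0);
```

It's step 2 in particular, the `Surface` → `EGLSurface` wrapping and what it means for where the pixels go, that I don't follow.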
Edit: Follow-up to mstorsjo's responses:
- I dug into the code for `SurfaceTexture` and `CameraClient::setPreviewTarget()` in `CameraService` some more to try to understand the inner workings of `Camera::setPreviewTexture()` better, and I have some more questions. On my original question about memory allocation: it seems that `SurfaceTexture` creates a `BufferQueue`, and `CameraService` passes the associated `IGraphicBufferProducer` to the platform camera HAL implementation. The camera HAL can then set the gralloc usage flags appropriately (e.g. `GRALLOC_USAGE_SW_READ_RARELY | GRALLOC_USAGE_SW_WRITE_NEVER | GRALLOC_USAGE_HW_TEXTURE`) and also dequeue buffers from this `BufferQueue`. So the buffers the camera captures frames into are gralloc-allocated buffers with special usage flags such as `GRALLOC_USAGE_HW_TEXTURE`. I work on ARM platforms with unified memory architectures, where the GPU and CPU can access the same memory, so what kind of impact does the `GRALLOC_USAGE_HW_TEXTURE` flag have on how the buffer is allocated?
- The OpenGL (ES) part of `SurfaceTexture` seems to be implemented mainly in `GLConsumer`, and the magic seems to be in `updateTexImage()`. Are additional buffers allocated for the OpenGL (ES) texture, or can the same gralloc buffer that was filled by the camera be used? Does some memory copying have to happen here to get the camera pixel data from the gralloc buffer into the OpenGL (ES) texture? I guess I don't understand what calling `updateTexImage()` does.