
I managed to write a demo that displays a 3D model on a TextureView, and the model moves according to the phone's sensors. The 3D engine is written in C++, and what I need to do is give the SurfaceTexture of the TextureView to the 3D engine. The engine calls ANativeWindow_fromSurface() to retrieve a native window and draws the 3D model on it. The 3D engine itself is not the key point of this question.
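To make the setup concrete, the Java side currently looks roughly like this (engineSetSurface() is just an illustrative name for my JNI entry point into the engine, not a real API):

```java
import android.graphics.SurfaceTexture;
import android.view.Surface;
import android.view.TextureView;

public class EngineHost implements TextureView.SurfaceTextureListener {

    // Hypothetical native method; the engine calls ANativeWindow_fromSurface()
    // on the Surface it receives and renders the 3D model into that window.
    private native void engineSetSurface(Surface surface);

    @Override
    public void onSurfaceTextureAvailable(SurfaceTexture st, int width, int height) {
        // Wrap the TextureView's SurfaceTexture in a Surface and hand it to the engine.
        engineSetSurface(new Surface(st));
    }

    @Override
    public void onSurfaceTextureSizeChanged(SurfaceTexture st, int width, int height) { }

    @Override
    public boolean onSurfaceTextureDestroyed(SurfaceTexture st) {
        engineSetSurface(null);   // tell the engine to drop its ANativeWindow
        return true;
    }

    @Override
    public void onSurfaceTextureUpdated(SurfaceTexture st) { }
}
```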

Now I want to record the moving 3D model to a video. One way is to use a GL_TEXTURE_EXTERNAL_OES texture, just like Grafika does: make the 3D engine draw frames into the OES texture and draw the texture contents to the screen after every call of updateTexImage(). But due to some restrictions, I am not allowed to use this approach.
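For reference, the Grafika-style approach I am not allowed to use would look roughly like this on my GL thread (drawExternalTexture() is a hypothetical helper, not a real API):

```java
import android.graphics.SurfaceTexture;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;
import android.view.Surface;

public class OesCaptureSketch {
    private SurfaceTexture engineInput;
    private int oesTexId;

    // Called on the GL thread, with an EGL context already current.
    public Surface createEngineSurface() {
        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        oesTexId = tex[0];
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, oesTexId);

        engineInput = new SurfaceTexture(oesTexId);
        return new Surface(engineInput);   // this is what the 3D engine would draw into
    }

    // Called on the GL thread after onFrameAvailable() signals a new frame.
    public void drawLatestFrame() {
        engineInput.updateTexImage();      // latch the engine's newest frame into the OES texture
        // drawExternalTexture(oesTexId);  // hypothetical helper: render a full-screen quad
        //                                 // sampling the OES texture, once for the screen
        //                                 // and once for the encoder's input surface
    }
}
```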

I plan to use the SurfaceTexture of the TextureView directly. I think functions such as attachToGLContext() and detachFromGLContext() will be useful for my work.
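What I have in mind is roughly the following sketch (unverified; it assumes the SurfaceTexture has already been detached from the TextureView's own context, which is exactly the part I am unsure about):

```java
import android.graphics.SurfaceTexture;
import android.opengl.GLES20;

// Rough sketch of the attach/detach pattern I have in mind; the TextureView's
// render thread normally owns this SurfaceTexture, so this is not verified.
public final class AttachDetachSketch {

    // Runs on my recording thread, with the encoder's EGL context current.
    // Assumes st is currently detached from any GL context.
    public static int latchFrame(SurfaceTexture st) {
        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);

        st.attachToGLContext(tex[0]);  // expose the stream as an OES texture in this context
        st.updateTexImage();           // latch the engine's latest frame
        // ... render tex[0] into the encoder's input surface here ...
        st.detachFromGLContext();      // hand the SurfaceTexture back
        return tex[0];
    }
}
```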

Could anyone give me some advice?

dragonfly
  • @fadden Could you give me some help? – dragonfly Feb 28 '17 at 12:23
  • The attach/detach calls change the EGL context that the SurfaceTexture output is available in. It's generally easier to use a single EGL context for the screen and the video encoder (e.g. Grafika's "continuous capture"; roughly the pattern sketched after these comments). The "record GL app" code shows three different ways to render for screen and video; do these not work? – fadden Feb 28 '17 at 18:12
  • @fadden In "record GL app", the shapes are produced by your Java code, so you can draw them to both the screen and the encoder. But in my case the shapes are produced by the 3D engine, and I am not allowed to control them directly. I can only work at the Java layer, and the frame is already written into the SurfaceTexture of the TextureView. – dragonfly Mar 01 '17 at 08:02
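A minimal sketch of the single-context pattern mentioned above (one EGLContext, two EGLSurfaces; drawScene() is a placeholder for the real draw calls, and the two surfaces would come from the TextureView's SurfaceTexture and from MediaCodec.createInputSurface()):

```java
import android.opengl.EGL14;
import android.opengl.EGLContext;
import android.opengl.EGLDisplay;
import android.opengl.EGLSurface;

// One EGL context shared by the display surface and the encoder's input
// surface: draw the same frame once per surface, then swap each of them.
public final class DualSurfaceSketch {
    public static void renderFrame(EGLDisplay display, EGLContext context,
                                   EGLSurface displaySurface, EGLSurface encoderSurface) {
        EGL14.eglMakeCurrent(display, displaySurface, displaySurface, context);
        // drawScene();   // hypothetical: issue the GL draw calls for the screen
        EGL14.eglSwapBuffers(display, displaySurface);

        EGL14.eglMakeCurrent(display, encoderSurface, encoderSurface, context);
        // drawScene();   // same draw calls again for the video frame
        EGL14.eglSwapBuffers(display, encoderSurface);
    }
}
```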

1 Answer


Grafika's "record GL app" has three different modes of operation:

  1. Draw everything twice.
  2. Render to an offscreen pbuffer, then blit that twice.
  3. Draw once, then copy between framebuffers (requires GLES 3).
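
In rough outline, #3 goes like this (sketch only; drawScene() stands in for the actual draw calls, and the encoder surface would come from MediaCodec's input surface):

```java
import android.opengl.EGL14;
import android.opengl.EGLContext;
import android.opengl.EGLDisplay;
import android.opengl.EGLSurface;
import android.opengl.GLES30;

// Render once into the display surface, then blit its default framebuffer
// into the encoder surface before swapping each of them. Requires a GLES 3
// context; width/height are the surface dimensions.
public final class BlitSketch {
    public static void renderAndRecord(EGLDisplay display, EGLContext context,
                                       EGLSurface displaySurface, EGLSurface encoderSurface,
                                       int width, int height) {
        EGL14.eglMakeCurrent(display, displaySurface, displaySurface, context);
        // drawScene();   // hypothetical: draw the frame once, into the display surface

        // Make the encoder surface the DRAW surface while keeping the display
        // surface as the READ surface, then copy the pixels across.
        EGL14.eglMakeCurrent(display, encoderSurface, displaySurface, context);
        GLES30.glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                GLES30.GL_COLOR_BUFFER_BIT, GLES30.GL_NEAREST);
        EGL14.eglSwapBuffers(display, encoderSurface);

        // Back to the display surface; the pixels reach the Surface on this swap.
        EGL14.eglMakeCurrent(display, displaySurface, displaySurface, context);
        EGL14.eglSwapBuffers(display, displaySurface);
    }
}
```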

If you can configure the EGL surface that is rendered to, approaches 2 and 3 will work. For approach #3, bear in mind that the pixels don't go to the Surface (that's the Android Surface, not the EGL surface) until you call eglSwapBuffers().

If the engine code is managing the EGL surface and calling eglSwapBuffers() for you, then things are a bit more annoying. The SurfaceTexture attach/detach calls will let you access the GLES texture with the output from a different EGL context, but the render thread needs it attached when it renders the View UI. I'm not entirely sure how that's going to work out.

fadden
  • I think approach 2 will be a good choice. But for me, a better way may be this: I create an OES SurfaceTexture and give it to the 3D engine (which goes against my original plan). Fortunately, I can modify the platform code of the 3D engine; I will certainly have to modify it to make the native window an off-screen one. Could you have a look at the platform code and give me some advice? – dragonfly Mar 02 '17 at 02:55
  • The process is as follows: step 1, create a window with ANativeWindow_fromSurface (code: http://www.paste.org/83858); step 2, use the window to create an EGLSurface (code: http://www.paste.org/83856); step 3, enter the message loop and swap buffers in the loop (code: http://www.paste.org/83859). – dragonfly Mar 02 '17 at 03:36
  • I think the EGLSurface may not match the OES texture. Maybe I should change it to eglCreatePbufferSurface. Please give me some help, thanks! – dragonfly Mar 02 '17 at 03:45
  • My engine platform code is modified from this: https://github.com/gameplay3d/GamePlay/blob/master/gameplay/src/PlatformAndroid.cpp The difference is that the window I use for eglCreateWindowSurface() is passed from the Java layer (a SurfaceTexture), whereas the window in the GitHub code comes from the android_app struct, which applies when using an NDK native activity. – dragonfly Mar 02 '17 at 06:34
  • I created an OES SurfaceTexture and gave it to the 3D engine, and it works fine! I did not need to modify the C++ platform code. But there is a multithreading problem with the SurfaceTextures (one texture per render thread), because I have to merge the camera frame and the 3D-model frame and then encode the merged frame (roughly the composition sketched below). – dragonfly Mar 02 '17 at 13:39
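
What I mean by merging is roughly this sketch (both SurfaceTextures are attached to textures in the encoder thread's GL context; drawExternalTexture() is a hypothetical helper, not a real API):

```java
import android.graphics.SurfaceTexture;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;

// Rough sketch of the composition step on the encoder's GL thread. Both
// SurfaceTextures (camera preview and the engine's output) are backed by
// OES textures created in this thread's GL context.
public final class MergeSketch {
    public static int createOesTexture() {
        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);
        return tex[0];
    }

    // Called once per frame on the encoder thread, with its EGL surface current.
    public static void composeFrame(SurfaceTexture cameraSt, int cameraTex,
                                    SurfaceTexture engineSt, int engineTex) {
        cameraSt.updateTexImage();   // latch the newest camera frame
        engineSt.updateTexImage();   // latch the newest 3D-model frame
        // drawExternalTexture(cameraTex);   // camera as the background layer
        // drawExternalTexture(engineTex);   // 3D model blended on top
        // ... then eglSwapBuffers() on the encoder's input surface ...
    }
}
```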