
For every incoming tweet, I was asked to put a point (corresponding to the tweet's location) on a world map (an OpenGL scene). Something like this. I tried to learn OpenGL. I cried. With time, OpenGL started feeling like my friend. He was on the way to becoming my best friend. Now, I used GLSurfaceView on Android and assumed I could simply draw points one by one, one after another, without ever calling glClear(). That way I would only have to keep the current tweet's coordinates in memory. I told myself: you are a genius. Then I suffered like the others who have explained their pains here, here and here. Now comes a master, who taught me the basics and told me: kiddo, you have to call glClear no matter what.

I cannot keep adding/storing the coordinates of all points in memory and calling glDrawArrays, since this has to keep running forever on a mobile device. So the choices now are:

  1. Draw n points. After that, try something like 'render to texture' and use that texture as the background when drawing the next n points.
  2. Use EGL_BUFFER_PRESERVED as in the EGL Preserve sample in the Mali SDK. This needs the NDK, since Android's EGL14 cannot be used before Android API 17, and Khronos EGL11 is not yet implemented on Android. People closest to my new friend tell me he doesn't like this at all.

Do I have any other choices, or am I solving the wrong problem? I would like to know OpenGL's approach to updating a scene forever.
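As an aside on why only the current tweet's coordinates are needed per frame: assuming the world map is drawn as an equirectangular image filling clip space, a hypothetical helper (not from the question) could map a tweet's (longitude, latitude) in degrees straight to OpenGL normalized device coordinates:

```java
// Hypothetical helper: maps a tweet's location to NDC in [-1, 1],
// assuming an equirectangular world map filling the whole viewport.
final class TweetMap {
    // Returns {x, y}; only the current tweet's pair needs to stay in memory.
    static float[] lonLatToNdc(double longitudeDeg, double latitudeDeg) {
        float x = (float) (longitudeDeg / 180.0); // -180..180 -> -1..1
        float y = (float) (latitudeDeg / 90.0);   //  -90..90  -> -1..1
        return new float[] { x, y };
    }
}
```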

kiranpradeep

2 Answers


I believe EGL_BUFFER_PRESERVED is not supported on all devices. So even if you could use API level 17, that might not be your solution.

You could do the primary rendering to a texture, using a FBO. You would then clear that texture once while you initialize the whole thing, after you finished setting up the FBO. Then, every time you get a new point, you draw it to the texture, and then copy the texture to the primary framebuffer.

Roughly, you have the following steps on startup:

  1. Create texture, and set it up with the desired format/size, parameters, etc.
  2. Create FBO, and set the texture as color attachment.
  3. Prepare rendering to FBO, calling glViewport(), etc.
  4. Call glClear().
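The startup steps above could be sketched roughly as follows, assuming GLES20 on Android; `width`/`height` are the desired texture size, and the texture/FBO ids would be kept as fields for the per-frame path:

```java
// Sketch of the startup steps (GLES20 assumed; names are illustrative).
int[] ids = new int[1];

// 1. Create the texture and set it up with format/size and parameters.
GLES20.glGenTextures(1, ids, 0);
int textureId = ids[0];
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA,
        width, height, 0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
        GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
        GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);

// 2. Create the FBO and set the texture as its color attachment.
GLES20.glGenFramebuffers(1, ids, 0);
int fboId = ids[0];
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fboId);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, textureId, 0);

// 3. Prepare rendering to the FBO.
GLES20.glViewport(0, 0, width, height);

// 4. Clear the texture once. It is never cleared again.
GLES20.glClearColor(0f, 0f, 0f, 1f);
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
```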

Then each time a point is added:

  1. Pass the new point into your Renderer implementation, and trigger a redraw.
  2. In onDrawFrame(), bind FBO for rendering.
  3. Render the point.
  4. Bind default framebuffer for rendering.
  5. Bind shader program for simple texturing.
  6. Bind texture for sampling.
  7. Draw screen filling quad.
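The per-frame steps might look like the sketch below, again assuming GLES20. `fboId`, `textureId`, the two shader programs, the sizes, and the helpers `drawPoint()`/`drawFullScreenQuad()` are all illustrative placeholders set up elsewhere, not real API:

```java
// Sketch of the per-frame steps inside a GLSurfaceView.Renderer.
@Override
public void onDrawFrame(GL10 unused) {
    // 2. Bind the FBO; its contents from previous frames are preserved.
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fboId);
    GLES20.glViewport(0, 0, texWidth, texHeight);

    // 3. Render only the newest point. Note: no glClear() here.
    GLES20.glUseProgram(pointProgram);
    drawPoint(latestX, latestY);

    // 4. Bind the default (window) framebuffer.
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
    GLES20.glViewport(0, 0, viewWidth, viewHeight);

    // 5.-7. Bind the texturing shader, sample the accumulated texture,
    // and draw it onto a screen-filling quad.
    GLES20.glUseProgram(quadProgram);
    GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
    drawFullScreenQuad();
}
```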
Reto Koradi
  • Thanks for the detailed list and the EGL_BUFFER_PRESERVED part. Will the TextureView class help with steps 4 to 7? I mean, to skip the second shader part. – kiranpradeep Aug 06 '14 at 12:00
  • I haven't used `TextureView` myself. From the documentation, it sounds like it could be used that way, so it certainly seems worth trying. – Reto Koradi Aug 06 '14 at 14:40
  • TextureView vs. SurfaceView vs. GLSurfaceView shouldn't matter -- they're all just providing a destination for EGL. – fadden Aug 06 '14 at 14:48
  • @fadden: The way I understand `TextureView`, wouldn't it save one copy of the data? With my proposed solution, you render to the texture, and then copy it to the surface created by `GLSurfaceView`. It seems like with `TextureView`, you could render directly to the view surface? – Reto Koradi Aug 06 '14 at 15:01
  • A TextureView is a SurfaceTexture (a/k/a GL consumer) plus some rendering code. Rendering to a SurfaceTexture doesn't really help, because you're still going to see double-buffered behavior when you call eglSwapBuffers() -- and if you don't call it, your SurfaceTexture texture won't be updated. What the OP really wants is an FBO, as you described. (cf. https://source.android.com/devices/graphics/architecture.html) – fadden Aug 07 '14 at 13:57

Now comes a master, who taught me the basics and told me, kiddo, you have to call glClear no matter what.

It's important to know when to bend, or even break, this rule. Or how to overcome it. The main problem, the reason why you should start each frame from a clean slate, is the possibility of main framebuffer corruption. The main framebuffer does not really "belong" to OpenGL; it's just lent to it by the operating system. And the OS can do with it whatever it wants.

Now there's an easy way around this: Draw into a framebuffer, backed by an image store you have full control over. In OpenGL(-ES) that would be a framebuffer object.

datenwolf
  • Thanks. Didn't know about the framebuffer NOT belonging to OpenGL. – kiranpradeep Aug 06 '14 at 11:56
  • @Kiran: Specifically the main framebuffer, which you bind with `glBindFramebuffer(0)`. This special framebuffer is managed by the operating system's graphics system (X11, GDI, EGL, etc.). This goes so far that you can use the graphics system's drawing methods to draw onto that framebuffer as well (doesn't work reliably for double buffered pixelformats and clashes hilariously with the depth buffer). But it can be done. BTDT. – datenwolf Aug 06 '14 at 12:09