
Hello world, and thanks for taking some time to read this!

I am writing a program in GTK2/3 + OpenGL, and I have two versions of it running:

  • (a) GTK+2 + the GtkGLExt extension -> works great!
  • (b) GTK+3 + libX11 -> works just fine!

Everything looks fine, except that the rendering in (a) is significantly faster than the rendering in (b) ... and I have no clue why. Here are examples of the code used to create the OpenGL contexts:

  • (a)

    // To create the context, and the associated GtkWidget 
    
    GdkGLConfig * glconfig = gdk_gl_config_new_by_mode (GDK_GL_MODE_RGBA | GDK_GL_MODE_DEPTH | GDK_GL_MODE_DOUBLE);
    GtkWidget * drawing_area = gtk_drawing_area_new ();
    gtk_widget_set_gl_capability (drawing_area, glconfig, NULL, TRUE, GDK_GL_RGBA_TYPE);
    g_signal_connect (G_OBJECT (drawing_area), "expose-event", G_CALLBACK (on_expose), data);
    
    // And later on to draw using the OpenGL context: 
    
    gboolean on_expose (GtkWidget * widg, GdkEvent * event, gpointer data)
    {
      GdkGLContext * glcontext  = gtk_widget_get_gl_context (widg);
      GdkGLDrawable * gldrawable = gtk_widget_get_gl_drawable (widg);
      if (gdk_gl_drawable_gl_begin (gldrawable, glcontext))
      {
        // OpenGL instructions to draw here !
        gdk_gl_drawable_swap_buffers (gldrawable);
        gdk_gl_drawable_gl_end (gldrawable);
      }
      return TRUE;
    }
    
  • (b)

    // To create the GtkWidget 
    
    GtkWidget * drawing_area = gtk_drawing_area_new ();
    // Next line is required to avoid background flickering
    gtk_widget_set_double_buffered (drawing_area, FALSE);
    g_signal_connect (G_OBJECT (drawing_area), "realize", G_CALLBACK (on_realize), data);
    g_signal_connect (G_OBJECT (drawing_area), "draw", G_CALLBACK (on_expose), data);
    
    // To create the OpenGL context
    
    GLXContext glcontext;
    
    G_MODULE_EXPORT void on_realize (GtkWidget * widg, gpointer data)
    {
      GdkWindow * xwin = gtk_widget_get_window (widg);
      GLint attr_list[] = {GLX_DOUBLEBUFFER,
                           GLX_RGBA,
                           GLX_DEPTH_SIZE, 16,
                           GLX_RED_SIZE,   8,
                           GLX_GREEN_SIZE, 8,
                           GLX_BLUE_SIZE,  8,
                           None};
      XVisualInfo * visualinfo = glXChooseVisual (GDK_WINDOW_XDISPLAY (xwin), gdk_screen_get_number (gdk_window_get_screen (xwin)), attr_list);
      glcontext = glXCreateContext (GDK_WINDOW_XDISPLAY (xwin), visualinfo, NULL, TRUE);
      XFree (visualinfo);
    }
    
    // To draw using the OpenGL context
    
    G_MODULE_EXPORT gboolean on_expose (GtkWidget * widg, cairo_t * cr, gpointer data)
    {
      GdkWindow * win = gtk_widget_get_window (widg);
      if (glXMakeCurrent (GDK_WINDOW_XDISPLAY (win), GDK_WINDOW_XID (win), glcontext))
      {
        // OpenGL instructions to draw here !
        glXSwapBuffers (GDK_WINDOW_XDISPLAY (win), GDK_WINDOW_XID (win));
      }
      return TRUE;
    }
    

Trying to understand why (a) was faster than (b), I downloaded the sources of the GtkGLExt library and read them, and found out that the commands were exactly the same, down to the X11 calls. Now my thoughts are: either the following line in (b)

gtk_widget_set_double_buffered (drawing_area, FALSE);

is messing with the rendering, and then there is nothing I can do ... or there are differences between the OpenGL contexts that might explain the behavior I noticed. If I follow up in this direction, I need to compare both contexts in as much detail as possible ... so far I picked what seems to be the most usual way to get some information:

OpenGL Version                  : 3.0 Mesa 12.0.3
OpenGL Vendor                   : nouveau
OpenGL Renderer                 : Gallium 0.4 on NVCF
OpenGL Shading Version          : 1.30

Color Bits (R,G,B,A)            : 8, 8, 8, 0
Depth Bits                      : 24
Stencil Bits                    : 0
Max. Lights Allowed             : 8
Max. Texture Size               : 16384
Max. Clipping Planes            : 8
Max. Modelview Matrix Stacks    : 32
Max. Projection Matrix Stacks   : 32
Max. Attribute Stacks           : 16
Max. Texture Stacks             : 10

Total number of OpenGL Extensions   : 227
Extensions list:
     N°1    :   GL_AMD_conservative_depth
     N°2    :   GL_AMD_draw_buffers_blend
 ...

But both contexts give back exactly the same information ...
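For reference, the dump above comes from the usual entry points; roughly, my helper looks like this (a sketch, not my exact code; `dump_context_info` is just an illustrative name, and glXIsDirect() covers the one GLX-side property that glGetString() cannot report):

#include <stdio.h>
#include <GL/gl.h>
#include <GL/glx.h>

// Sketch: the context must be made current before calling this
void dump_context_info (Display * dpy, GLXContext ctx)
{
  GLint bits[4], depth, stencil;
  printf ("OpenGL Version         : %s\n", (const char *) glGetString (GL_VERSION));
  printf ("OpenGL Vendor          : %s\n", (const char *) glGetString (GL_VENDOR));
  printf ("OpenGL Renderer        : %s\n", (const char *) glGetString (GL_RENDERER));
  printf ("OpenGL Shading Version : %s\n", (const char *) glGetString (GL_SHADING_LANGUAGE_VERSION));
  glGetIntegerv (GL_RED_BITS,     & bits[0]);
  glGetIntegerv (GL_GREEN_BITS,   & bits[1]);
  glGetIntegerv (GL_BLUE_BITS,    & bits[2]);
  glGetIntegerv (GL_ALPHA_BITS,   & bits[3]);
  glGetIntegerv (GL_DEPTH_BITS,   & depth);
  glGetIntegerv (GL_STENCIL_BITS, & stencil);
  printf ("Color Bits (R,G,B,A)   : %d, %d, %d, %d\n", bits[0], bits[1], bits[2], bits[3]);
  printf ("Depth Bits             : %d\n", depth);
  printf ("Stencil Bits           : %d\n", stencil);
  // Direct vs. indirect rendering does not show up in the glGetString
  // output, but it can make a huge difference in rendering speed.
  printf ("Direct rendering       : %s\n", glXIsDirect (dpy, ctx) ? "yes" : "no");
}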

Thanks for reading this far ... now my question is:

Is there a way to output as much information as possible about an OpenGL context, and how?

I also welcome any other suggestions on what I am doing!

S.

PS: I am working on using the GtkGLArea Widget for GTK3, but as stated here I am not there yet.

[EDIT] Some of the OpenGL instructions:

// OpenGL instructions to draw here !

glLoadIdentity (); 
glPushMatrix ();
// d is the depth ... calculated somewhere else
glTranslated (0.0, 0.0, -d); 
// Skipping the rotation part for clarity, I am using a quaternion
rotate_camera (); 
// r, g, b and a are GLfloat values
glClearColor (r, g, b, a);
glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT); 
glDisable (GL_LIGHTING);
int i;
// nbds is the number of chemical bonds 
GLfloat * lineVertices;
// This is "roughly" what I do to draw chemical bonds, to give you an idea
for (i=0; i<nbds; i++)
{
   // get_bonds (i) gives back a 6-float array
   lineVertices = get_bonds (i);
   glPushMatrix(); 
   glLineWidth (1.0); 
   glEnableClientState (GL_VERTEX_ARRAY); 
   glVertexPointer (3, GL_FLOAT, 0, lineVertices); 
   glDrawArrays (GL_LINES, 0, 2); 
   glDisableClientState (GL_VERTEX_ARRAY); 
   glPopMatrix();
}
glEnable (GL_LIGHTING);
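
As an aside, since every bond is only two vertices, the same data could be drawn with a single call by packing all the bonds into one array first (a sketch, needs <stdlib.h> and <string.h>; `all_vertices` is hypothetical, and whether this actually helps is driver-dependent):

// Hypothetical batched variant: one draw call for all the bonds
GLfloat * all_vertices = malloc (nbds * 6 * sizeof (GLfloat));
for (i=0; i<nbds; i++)
{
   // Pack the 6 floats of bond i into the big array
   memcpy (& all_vertices[6*i], get_bonds (i), 6 * sizeof (GLfloat));
}
glLineWidth (1.0);
glEnableClientState (GL_VERTEX_ARRAY);
glVertexPointer (3, GL_FLOAT, 0, all_vertices);
glDrawArrays (GL_LINES, 0, 2 * nbds);
glDisableClientState (GL_VERTEX_ARRAY);
free (all_vertices);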

[/EDIT]

  • As far as I understand it, double buffering on the GTK end is merely to render everything that influences a GTK widget's buffer off-screen and then presenting it afterwards. I don't see why that would interfere with GLX in any way or why it should impose a severe penalty. Are you sure you obtain a *direct* rendering context in the second case? Can you please run the first application with `LIBGL_ALWAYS_INDIRECT=1` and see if the performance is similarly bad then? – thokra Sep 30 '16 at 11:22
  • Can you post some of the actual OpenGL code (what you replaced with `// OpenGL instructions to draw here !`)? – andlabs Sep 30 '16 at 11:48
  • Run your application into [apitrace](https://github.com/apitrace/apitrace) and find the differences :-) – peppe Sep 30 '16 at 12:18
  • Hello people, first I have to apologize not to answer sooner, I left for the week-end after writing down my question :-P – Sébastien Le Roux Oct 02 '16 at 18:50
  • Then to answer your question in the order they appear, @thokra, no way to run my program, either of the two versions, using the LIBGL_ALWAYS_INDIRECT=1 option ... I receive this kind of message 'received an X Window System error': with a) `The error was 'BadValue (integer parameter out of range for operation)'. (Details: serial 12479 error_code 2 request_code 154 (GLX) minor_code 3)` and with b): `The error was 'GLXBadContext'. (Details: serial 8563 error_code 170 request_code 154 minor_code 6)` – Sébastien Le Roux Oct 02 '16 at 19:01
  • @andlabs no way to do this, the code is way too big, I can provide some piece of it if you can be more specific. – Sébastien Le Roux Oct 02 '16 at 19:04
  • @peppe will check this, I did not know about it. – Sébastien Le Roux Oct 02 '16 at 19:05
  • @SébastienLeRoux the code to draw one primitive will suffice. Or are you using shaders and vertex arrays? – andlabs Oct 03 '16 at 00:21
  • @SébastienLeRoux: If you want to amend the question, then simply edit the question. Don't put it in the comment section. BTW, ignore my remark regarding indirect contexts - I was wrong there. – thokra Oct 03 '16 at 10:00
  • @andlabs, sorry but I am new to Stackoverflow, it is hard to make it easy to read: `glLoadIdentity (); glPushMatrix (); // GL push 0 glTranslated (0.0, 0.0, -d); rotate_camera (); glClearColor (r,g,b,a); glClear (GL_COLOR_B | GL_DEPTH_B | GL_STENCIL_B); for (i=0; i – Sébastien Le Roux Oct 03 '16 at 10:03
  • @thokra : thank you ! – Sébastien Le Roux Oct 03 '16 at 12:10
  • @andlabs, as suggested by thokra I edited the question to simplify the reading. – Sébastien Le Roux Oct 03 '16 at 12:10
  • @SébastienLeRoux could you also post that in the other question? GtkGLArea is very picky about what features of OpenGL you use, which is why I ask. – andlabs Oct 03 '16 at 12:50
  • @andlabs, yes I will thanks. I read about that, and tried for tests to use rather simple instructions ... not that I will be able to use advance ones anyway :-) – Sébastien Le Roux Oct 03 '16 at 14:01

1 Answer


Thanks for your suggestions, the "ApiTrace" idea was amazing: not only did I discover a great tool, but it helped me get some clues about my problem. Using ApiTrace:

  1. I checked that both versions (a) and (b) of my program were using exactly the same OpenGL contexts ... in great detail and with ease, I must add ... and therefore that the error was not coming from the context initialization.
  2. I found out that in version (b) the rendering was done 5 times more often than in version (a) ... meaning 5 renderings of the same frame!

The only logical conclusion I can draw points to the difference in GTK+ signals between versions 2 and 3: in version (a) of my program I use the "expose-event" signal, while in version (b) I use the "draw" signal (new for the GtkDrawingArea) ... obviously there are some differences in the behavior of the GTK+ library between versions 2 and 3 at this point ... I am working on finding a way around it, and I will edit this answer to provide further information.
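
To see the extra redraws without ApiTrace, a simple counter in the handler is enough (a sketch; the static `frame` counter is only there for checking):

G_MODULE_EXPORT gboolean on_expose (GtkWidget * widg, cairo_t * cr, gpointer data)
{
  static guint frame = 0;
  // Print how often GTK+ actually invokes the handler
  g_print ("draw #%u\n", ++frame);
  // ... OpenGL rendering as before ...
  return TRUE;
}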

[EDIT] Hello world, answering my own question, hopefully to help someone else avoid the same mistake I made. To re-draw my OpenGL window I was using:

void update (GtkWidget * plot)
{
  gtk_widget_hide (plot);
  gtk_widget_show (plot);
}

Instead, I should have been using:

gtk_widget_queue_draw (plot);
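
The difference: gtk_widget_hide() / gtk_widget_show() unmaps and remaps the widget's window, and, as ApiTrace showed, that triggers several redraws of the same frame, while gtk_widget_queue_draw() merely invalidates the widget so that a single "draw" is emitted on the next frame. The helper therefore shrinks to:

void update (GtkWidget * plot)
{
  // Just invalidate the widget: GTK+ emits a single "draw"
  gtk_widget_queue_draw (plot);
}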

All problems solved! [/EDIT]