
I'm using OpenGL and SDL to scale a software-rendered framebuffer of a fixed size (320x240) to the client window size while maintaining the aspect ratio, adding black bars on two sides when necessary (e.g. in a 1000x600 window the image occupies the central 800x600, with 100-pixel bars on the left and right).

When I use single buffering, rendering stays smooth while I resize the window. However, enabling double buffering causes a lot of flickering, and the scaling lags behind the changing window size.

In practice I'd prefer double buffering, since the window isn't resized all the time and double buffering otherwise gives smoother animation, but I'd still like to know what causes the lag and flickering when resizing the window with double buffering enabled.

The code below is stripped down to the parts relevant to this question.


Update: This seems to be a problem with SDL, or at least with its Linux backend. The same program with SDL swapped out for freeglut handles double buffering smoothly when resizing the window (pastebin).
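
For reference, here's a minimal sketch of what the freeglut variant might have looked like (the actual pastebin isn't preserved, so this is a hypothetical reconstruction); it reuses initGl, resize, and draw from the SDL version below and lets glut's own event handling coalesce resizes:

// Hypothetical freeglut equivalent of the SDL program below.
// Assumes initGl(), resize(), draw(), WindowW, and WindowH are in scope.
#include <GL/freeglut.h>

static void display(void) {
  draw();
  glutSwapBuffers();
}

int main(int argc, char **argv) {
  glutInit(&argc, argv);
  glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE); // double-buffered context
  glutInitWindowSize(WindowW, WindowH);
  glutCreateWindow("test");
  initGl();
  glutDisplayFunc(display);
  glutReshapeFunc(resize); // glut coalesces resize events before calling this
  glutMainLoop();
}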


Update 2: SDL wasn't at fault here; see the comments below.

#include <stdio.h>
#include <stdint.h>
#include <assert.h>
#include <math.h>
#include <SDL2/SDL.h>
#include <GL/gl.h>

#define WindowW 800
#define WindowH 600
#define FbW 320
#define FbH 240
#define FbTexW 0x200
#define FbTexH 0x100
static const float VertexCoord[] = {0, 0, FbTexW, 0, 0, FbTexH, FbTexW, FbTexH};

static void initGl() {
  glEnable(GL_TEXTURE_2D);
  glEnableClientState(GL_VERTEX_ARRAY);
  glClearColor(0, 0, 0, 0); // the black bars come from the clear color
  glShadeModel(GL_FLAT);
  glOrtho(0, FbW, FbH, 0, 1, -1); // top-left origin, y grows downward
  glVertexPointer(2, GL_FLOAT, 0, VertexCoord);
}

static void resize(int w, int h) {
  // Compare the window and framebuffer aspect ratios by cross-multiplying
  // w/h against FbW/FbH, avoiding a floating-point division.
  int p = w * FbH;
  int q = h * FbW;
  if (p > q) {
    // Window is wider than 4:3: center horizontally (bars left and right).
    float w_ = (float)q / FbH;
    glViewport((int)roundf((w - w_) * 0.5f), 0, (int)roundf(w_), h);
  } else {
    // Window is taller than 4:3: center vertically (bars top and bottom).
    float h_ = (float)p / FbW;
    glViewport(0, (int)roundf((h - h_) * 0.5f), w, (int)roundf(h_));
  }
}

static void draw() {
  glClear(GL_COLOR_BUFFER_BIT);
  glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}

int main(int argc, char **argv) {
  (void)argc, (void)argv;
  assert(!SDL_Init(SDL_INIT_VIDEO));
  SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1); // 1 -> double, 0 -> single
  SDL_Window *win = SDL_CreateWindow("test", SDL_WINDOWPOS_UNDEFINED,
    SDL_WINDOWPOS_UNDEFINED, WindowW, WindowH,
    SDL_WINDOW_RESIZABLE | SDL_WINDOW_OPENGL);
  assert(win);
  SDL_GLContext gl = SDL_GL_CreateContext(win);
  assert(gl);
  initGl();
  for (;;) {
    SDL_Event event;
    assert(SDL_WaitEvent(&event));
    switch (event.type) {
    case SDL_QUIT:
      SDL_GL_DeleteContext(gl);
      SDL_DestroyWindow(win);
      SDL_Quit();
      return 0;
    case SDL_WINDOWEVENT:
      if (event.window.event == SDL_WINDOWEVENT_SIZE_CHANGED) {
        int w, h;
        SDL_GL_GetDrawableSize(win, &w, &h);
        resize(w, h);
      }
    }
    draw();
    SDL_GL_SwapWindow(win);
  }
}
xiver77

  • You draw once for *every* event you get. Even for mouse movement you get a lot of events, and with double buffering you block on vsync after each processed event; those events get queued, and the faster input arrives, the greater the distance between your 'processed' and 'last queued' events. When you resize the window, you get multiple events for each resize tick (mouse movement, resize, expose, maybe more). You need to process the entire event queue, and only when the queue is drained do you render once [see the sketch after these comments]. – keltar Jul 05 '22 at 15:59
  • @keltar "with double buffering you block on vsync after each processed event" - yeah, I also noticed that. It wasn't possible to do immediate updates with double buffering, and I just assumed it was some feature of the graphics driver or something on top of it. I fixed the event loop of the SDL version as you suggested, and it's very smooth! – xiver77 Jul 05 '22 at 16:12
  • @keltar Is it possible to do double buffering without vsync? I did some googling and haven't found anything relevant so far. A lot of pages seem to assume vsync with double buffering. – xiver77 Jul 05 '22 at 16:21
  • You could use `SDL_GL_SetSwapInterval`. Why would you want to disable vsync? Flushing the event queue before each render *is* the correct approach regardless. – keltar Jul 05 '22 at 16:25
  • `SDL_GL_SetSwapInterval` with `0` (immediate updates) fails with double buffering on my computer. I'm just curious why that is, and why most information about double buffering assumes vsync. – xiver77 Jul 05 '22 at 16:28
  • You mean it returns an error? What does SDL_GetError have to say about it? What OS do you use? Double buffering is not exclusive to vsync, but on PC things are not as straightforward anymore, as most graphics systems use some kind of compositor for desktop effects, and that thing often has vsync/buffering of its own. – keltar Jul 05 '22 at 16:37
  • @keltar The problem was that I called `SDL_GL_SetSwapInterval` right after `SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1)`, before creating the GL context. By calling it after the context is created, yes, immediate updates with double buffering are possible. Thanks for pointing out several errors. – xiver77 Jul 05 '22 at 16:45
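
Putting the comment thread together, here is a minimal sketch of the fixed main loop, assuming the same initGl, resize, and draw as in the question: it drains the whole event queue before drawing once, and calls SDL_GL_SetSwapInterval only after the GL context exists.

int main(int argc, char **argv) {
  (void)argc, (void)argv;
  assert(!SDL_Init(SDL_INIT_VIDEO));
  SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
  SDL_Window *win = SDL_CreateWindow("test", SDL_WINDOWPOS_UNDEFINED,
    SDL_WINDOWPOS_UNDEFINED, WindowW, WindowH,
    SDL_WINDOW_RESIZABLE | SDL_WINDOW_OPENGL);
  assert(win);
  SDL_GLContext gl = SDL_GL_CreateContext(win);
  assert(gl);
  // SDL_GL_SetSwapInterval needs a current GL context, so it must come
  // after SDL_GL_CreateContext (the mistake discussed in the comments).
  SDL_GL_SetSwapInterval(1); // 1 -> vsync, 0 -> immediate updates
  initGl();
  for (;;) {
    // Block for the first event, then drain the rest of the queue, so a
    // burst of resize/motion events doesn't cost one buffer swap each.
    SDL_Event event;
    assert(SDL_WaitEvent(&event));
    do {
      switch (event.type) {
      case SDL_QUIT:
        SDL_GL_DeleteContext(gl);
        SDL_DestroyWindow(win);
        SDL_Quit();
        return 0;
      case SDL_WINDOWEVENT:
        if (event.window.event == SDL_WINDOWEVENT_SIZE_CHANGED) {
          int w, h;
          SDL_GL_GetDrawableSize(win, &w, &h);
          resize(w, h);
        }
      }
    } while (SDL_PollEvent(&event));
    draw(); // render exactly once per drained queue
    SDL_GL_SwapWindow(win);
  }
}

With the queue drained before each draw, resizing stays smooth even with vsync on; per the last comment, swap interval 0 only works once the context exists, and is only needed if you explicitly want unthrottled updates.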
