
I have been using one OpenGL context in one thread (very simplified) like this:

int main()
{
    // Initialize OpenGL (GLFW / GLEW)
    Compile_Shaders();
    while (glfwWindowShouldClose(WindowHandle) == 0)
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glfwPollEvents();
        
        Calculate_Something(); // Compute Shader
        glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);
        GLfloat* mapped = (GLfloat*)(glMapNamedBuffer(bufferResult, GL_READ_ONLY));
        memcpy(Result, mapped, sizeof(GLfloat) * ResX * ResY);
        glUnmapBuffer(GL_SHADER_STORAGE_BUFFER);

        Render(Result);
        ImGui_Stuff();

        glfwSwapBuffers(WindowHandle);
    }
}

This works well until the calculations in the compute shader take longer; then they stall the main loop. I have been trying to use glFenceSync, but glfwSwapBuffers always has to wait until the compute shader is done.
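For reference, the non-stalling way to use glFenceSync is to poll it with a zero timeout once per frame instead of waiting on it. A rough sketch of that pattern (GL fragment only, not a complete program; the names mirror the code above):

```cpp
// Sketch: insert a fence right after the compute dispatch...
GLsync computeFence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);

// ...then, once per frame, check it without blocking (timeout = 0):
GLenum state = glClientWaitSync(computeFence, GL_SYNC_FLUSH_COMMANDS_BIT, 0);
if (state == GL_ALREADY_SIGNALED || state == GL_CONDITION_SATISFIED)
{
    // Results are ready: map, copy, unmap, delete the fence,
    // and kick off the next dispatch.
    GLfloat* mapped = (GLfloat*)glMapNamedBuffer(bufferResult, GL_READ_ONLY);
    memcpy(Result, mapped, sizeof(GLfloat) * ResX * ResY);
    glUnmapNamedBuffer(bufferResult);
    glDeleteSync(computeFence);
}
// Otherwise render the previous Result and poll again next frame.
```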

Now I tried another approach: creating a separate OpenGL context in another thread for the compute shader, like this:

void ComputeThreadFunc()
{
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 5);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

    GLFWwindow* WindowHandleCompute = glfwCreateWindow(50, 5, "Something", NULL, NULL);
    if (WindowHandleCompute == NULL)
    {
        std::cout << "Failed to open GLFW window." << std::endl;
        return;
    }

    GLuint Framebuffer;
    glfwMakeContextCurrent(WindowHandleCompute);
    glGenFramebuffers(1, &Framebuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, Framebuffer);

    // Compile compute shader

    while (true)
    {
        Calculate_Something();
        glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);
        GLfloat* mapped = (GLfloat*)(glMapNamedBuffer(bufferResult, GL_READ_ONLY));
        memcpy(Result, mapped, sizeof(GLfloat) * ResX * ResY);
        glUnmapBuffer(GL_SHADER_STORAGE_BUFFER);
        
        Sleep(100); // Tried different values here to make sure the GPU isn't too saturated
    }
}

I changed the main function to:

int main()
{
    // Initialize OpenGL (GLFW / GLEW)
    std::thread ComputeThread = std::thread(&ComputeThreadFunc);
    while (glfwWindowShouldClose(WindowHandle) == 0)
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glfwPollEvents();

        Render(Result);
        ImGui_Stuff();

        glfwSwapBuffers(WindowHandle);
    }
}

Now what I see always seems to alternate between two images (maybe the first two after startup). I think the compute shader/thread gives correct results (I can't really check, because the main loop doesn't display them).

What am I missing here? The two threads don't use shared resources/buffers (that I know of). I generated a separate framebuffer for the compute thread. Do I have to generate additional buffers (all the buffers the compute shader needs are of course generated) or synchronize somehow (the result is stored in a C++ array, so the OpenGL buffers can be completely separate)?

Should this approach work in general? And if so, are there general considerations that I did not take into account? If additional code is needed, please let me know.

Edit:

So, I just played around with Sleep(5000) to see when exactly the above error occurs. When I place this call before glMapNamedBuffer, the main window seems to work for 5 seconds. Placed after this call, it immediately breaks. Is there anything special about this call I have to consider with multiple OpenGL contexts?

genpfault
Paul Aner
  • Could you clarify what the main method does? Does it still create its own window? Is `WindowHandle` in the main the same as `WindowHandle` in the thread function? Also note that `glfwCreateWindow` needs to be called from the main thread only. See the ["Thread safety" section in the docs](https://www.glfw.org/docs/3.3/group__window.html#ga3555a418df92ad53f917597fe2f64aeb) – BDL Mar 06 '23 at 12:30
  • Edited my post. The main thread/function creates one window, the compute thread a separate one (which I will later, if it works, set to invisible). Calling `glfwCreateWindow` in the second thread seems to work (it is showing)... – Paul Aner Mar 06 '23 at 12:37
  • did the compute thread forget to unmap the buffer? – user253751 Mar 06 '23 at 12:40
  • No, it did not. I forgot to write the line above ;) – Paul Aner Mar 06 '23 at 12:42
  • @PaulAner: Just because the window is created doesn't mean that it works. There are event loops and mapping tables behind those windows, so screwing them up by calling the method from a different thread might very well cause problems later on. It's never a good idea to go against a restriction stated by a library. – BDL Mar 06 '23 at 12:44
  • I changed the call to `glfwCreateWindow` so it's in the main function now. It does not make a difference... – Paul Aner Mar 06 '23 at 12:50

2 Answers


Window creation with GLFW is only possible in the main thread, as stated in the "Thread Safety" section of the GLFW docs.

Some other functions, like glfwMakeContextCurrent, may also be called from secondary threads, so what you have to do is create all windows from the main thread, but then use one of the windows in the calculation thread.

Basic structure:

int main()
{
  //Create Window 1
  auto window1 = ...  

  //Create Window 2
  auto window2 = ...

  //Start thread
  std::thread ComputeThread = std::thread(&ComputeThreadFunc, window2);

  //Render onto window1
  glfwMakeContextCurrent(window1);

  while (glfwWindowShouldClose(window1) == 0)
  {
      glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
      glfwPollEvents();

      Render(Result);
      ImGui_Stuff();

      glfwSwapBuffers(window1);
  }
}
void ComputeThreadFunc(GLFWwindow* window2)
{
    GLuint Framebuffer;
    glfwMakeContextCurrent(window2);
    glGenFramebuffers(1, &Framebuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, Framebuffer);

    // Compile compute shader

    while (true)
    {
        Calculate_Something();
        glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);
        GLfloat* mapped = (GLfloat*)(glMapNamedBuffer(bufferResult, GL_READ_ONLY));
        memcpy(Result, mapped, sizeof(GLfloat) * ResX * ResY);
        Sleep(100); // Tried different values here to make sure the GPU isn't too saturated
    }
}

Also note that the buffer mapping is currently never unmapped, so you should call glUnmapNamedBuffer after the memcpy line. Or you may use persistently mapped buffers.
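A persistently mapped buffer could look roughly like this (sketch only, assuming GL 4.4+ / ARB_buffer_storage and a DSA-style setup; `bufferResult`, `ResX`, `ResY` as in the question):

```cpp
// Sketch: give the result buffer an immutable store that can stay
// mapped for the buffer's whole lifetime, map it once, and keep the
// pointer around instead of mapping/unmapping every iteration.
glNamedBufferStorage(bufferResult,
                     sizeof(GLfloat) * ResX * ResY,
                     nullptr,
                     GL_MAP_READ_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT);

GLfloat* mapped = (GLfloat*)glMapNamedBufferRange(
    bufferResult, 0, sizeof(GLfloat) * ResX * ResY,
    GL_MAP_READ_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT);

// After each dispatch: wait on a fence (or glFinish), then read directly.
memcpy(Result, mapped, sizeof(GLfloat) * ResX * ResY);
// No glUnmapNamedBuffer needed until the buffer is destroyed.
```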

BDL
  • I already changed the code. This seems to make no difference at all (before I saw both windows, so it seemed to work). The call `glUnmapBuffer` is in the code, I just forgot it above... – Paul Aner Mar 06 '23 at 12:50
  • Did you read this manual? https://www.khronos.org/opengl/wiki/OpenGL_and_multithreading – KungPhoo Mar 06 '23 at 14:49
  • @KungPhoo: Could you explain what you are referring to? You are correct that the OpenGL context may be created in any thread, but (unfortunately) GLFW needs certain functions to be called from the main thread, mostly those that create or manage a window itself (glfwCreateWindow, glfwWindowHint, ...). – BDL Mar 06 '23 at 14:53
  • My hope was that the example code might help. I think wglMakeCurrent or glfwMakeContextCurrent is the key to the problem. Can you log to stdout what calls are made? Maybe one thread "steals" the context "focus" that was set by the other thread? – KungPhoo Mar 06 '23 at 20:51
  • Well, I am pretty sure GLFW is not the problem. I called `glfwCreateWindow` in the main thread, called `glfwMakeContextCurrent` in the second thread, there is no context switching at all, and the compute shader in the second thread works. It is just that any call to `glMapNamedBuffer` breaks the rendering in the main thread… – Paul Aner Mar 07 '23 at 03:58
  • @PaulAner: Are you by any chance accessing the `Result` pointer in the main thread? Are you synchronizing the memcpy action with the access from the main thread? Also: Your pointer unmapping looks fishy. If you map with `glMapNamedBuffer` you should unmap with `glUnmapNamedBuffer`. And I guess we need to see ALL the relevant code, currently we are guessing around, because too much code is missing. Is `bufferResult` even bound to the `GL_SHADER_STORAGE_BUFFER` binding point? – BDL Mar 07 '23 at 10:54
  • I do a memcpy in the compute thread and access the C++ array from the main thread. I am right now testing doing the `glMapNamedBuffer` in the main thread. That seems to work. I don't get any data yet, but there surely is some mistake somewhere. At least it does not break the main thread (so far). I will report back if this works. – Paul Aner Mar 07 '23 at 11:01

OK, I finally got this to work.

As mentioned in the edit above, I could trace the problem to the glMapNamedBuffer call (glGetNamedBufferSubData produced the same error) in the compute thread. Without those calls the main thread worked fine, but of course with the undesired side effect that I did not get the results from the compute shader.

I now placed this call in the main thread. For that to work, one must first unbind the buffer with a call to glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0) in the compute thread and then bind it in the main thread. This has to be done after the compute shader is done, so I put glMemoryBarrier before the unbind call - which did not work. Only after I put glFinish there did it work.

Two questions remain and if anybody could give me an answer, it would be greatly appreciated:

Why does a call to glMapNamedBuffer in the compute thread break the main thread?

Why does glFinish work while glMemoryBarrier does not? Both should wait until the compute shader is done, shouldn't they?
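On the second question: glMemoryBarrier only orders memory accesses between subsequent GL commands in the same context; it returns immediately on the CPU and does not wait for the dispatch to complete. glFinish, by contrast, blocks the calling thread until every submitted command has finished, which is why it works here. A lighter-weight alternative to glFinish would be a fence, which waits only for the commands issued before it (sketch, issued in the compute thread right after the dispatch):

```cpp
// Sketch: a fence waits only for commands issued before it, instead of
// draining the whole command queue like glFinish does.
GLsync done = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);

// Blocking wait with a 1-second timeout; GL_SYNC_FLUSH_COMMANDS_BIT
// flushes so the fence is actually submitted to the GPU.
GLenum r = glClientWaitSync(done, GL_SYNC_FLUSH_COMMANDS_BIT, 1000000000);
if (r == GL_ALREADY_SIGNALED || r == GL_CONDITION_SATISFIED)
{
    // Safe to read the buffer contents now.
}
glDeleteSync(done);
```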

Paul Aner