
I'm using the ARB_sparse_texture OpenGL extension in a visualisation project. I'm getting random exceptions on the glTexturePageCommitmentEXT call, and it's causing the computer to reboot when I run my application in NVidia Mosaic mode.

I'm calling glTexturePageCommitmentEXT like this:

glTexturePageCommitmentEXT( textureId, level, 0, 0, layer, width, height, 1, false );

Where textureId points to a sparse texture generated with:

// Pick one of the virtual page sizes the driver supports for this format
const PageSizeT & pageSize = pageSizes( internalFormat );
pageSizeX   = pageSize.x[pageSizeIndex_];
pageSizeY   = pageSize.y[pageSizeIndex_];
pageSizeZ   = pageSize.z[pageSizeIndex_];

glGenTextures( 1, &id );
glBindTexture( GL_TEXTURE_2D_ARRAY, id );

// Sparse and page-size parameters must be set before allocating storage
glTexParameteri( GL_TEXTURE_2D_ARRAY, GL_TEXTURE_SPARSE_ARB, GL_TRUE );
glTexParameteri( GL_TEXTURE_2D_ARRAY, GL_VIRTUAL_PAGE_SIZE_INDEX_ARB, pageSizeIndex_ );

glTexParameteri( GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR );
glTexParameteri( GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR );

glTexParameteri( GL_TEXTURE_2D_ARRAY, GL_TEXTURE_BASE_LEVEL, 0 );

// Immutable storage; with GL_TEXTURE_SPARSE_ARB this is virtual-only,
// physical pages are added later via glTexturePageCommitmentEXT
glTexStorage3D( GL_TEXTURE_2D_ARRAY, levels, internalFormat, width, height, layers );

// Bindless handle (ARB_bindless_texture) so shaders can sample it
handle = glGetTextureHandleARB( id );

glMakeTextureHandleResidentARB( handle );

glBindTexture( GL_TEXTURE_2D_ARRAY, 0 );
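
For context, pageSizes() wraps the page-size queries the extension exposes through glGetInternalformativ. Roughly like this (the PageSizeT layout with vector members is an assumption, not my exact code):

// Sketch of what pageSizes( internalFormat ) does: query the virtual page
// sizes the driver supports for this internal format.
GLint numPageSizes = 0;
glGetInternalformativ( GL_TEXTURE_2D_ARRAY, internalFormat,
                       GL_NUM_VIRTUAL_PAGE_SIZES_ARB, 1, &numPageSizes );

PageSizeT pageSize;                 // assumed to hold three std::vector<GLint>
pageSize.x.resize( numPageSizes );
pageSize.y.resize( numPageSizes );
pageSize.z.resize( numPageSizes );
glGetInternalformativ( GL_TEXTURE_2D_ARRAY, internalFormat,
                       GL_VIRTUAL_PAGE_SIZE_X_ARB, numPageSizes, pageSize.x.data() );
glGetInternalformativ( GL_TEXTURE_2D_ARRAY, internalFormat,
                       GL_VIRTUAL_PAGE_SIZE_Y_ARB, numPageSizes, pageSize.y.data() );
glGetInternalformativ( GL_TEXTURE_2D_ARRAY, internalFormat,
                       GL_VIRTUAL_PAGE_SIZE_Z_ARB, numPageSizes, pageSize.z.data() );
// pageSizeIndex_ must be < numPageSizes for the
// GL_VIRTUAL_PAGE_SIZE_INDEX_ARB parameter above to be valid.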

level is the mipmap level, 0 and 0 are the x and y offsets, layer is the index of a slice in the GL_TEXTURE_2D_ARRAY, and width and height are the texture size at the given mipmap level

std::max( (int)textureWidth_ >> level_, 1 )

and 1 is the depth. The last parameter (false) is where I ask the extension to uncommit the given texture area. To commit a specific area of the texture I make a similar call with the last parameter set to true:

glTexturePageCommitmentEXT( textureId, level, 0, 0, layer, width, height, 1, true );
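
My understanding of the extension is that the committed region's width and height must be integer multiples of the virtual page size unless the region reaches the edge of that mip level; since I always commit whole levels at offset 0, 0, that edge rule should apply. As a sketch (commitLayerLevel and textureHeight_ are illustrative names, not my exact code):

// Sketch: commit or uncommit one whole mip level of one array layer.
void commitLayerLevel( GLuint textureId, int level, int layer, bool commit )
{
    // Full-level extents; a region reaching the level's edge does not
    // need to be a multiple of the virtual page size.
    const int levelW = std::max( (int)textureWidth_  >> level, 1 );
    const int levelH = std::max( (int)textureHeight_ >> level, 1 );

    glTexturePageCommitmentEXT( textureId, level,
                                0, 0, layer,       // xoffset, yoffset, zoffset (layer)
                                levelW, levelH, 1, // whole level, one layer deep
                                commit ? GL_TRUE : GL_FALSE );
}
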
  • I have 12 screens running in NVidia Mosaic mode.
  • I have 3 NVidia Quadro M6000 cards installed in my computer.
  • I have a fragment shader that tells the application which portion/mipmap level of this texture is visible (the per-frame readback of this buffer is sketched after this list):

    layout( std430, binding = 4 ) buffer TexUsage
    {
        uint texUsage[]; // [ layer{ level, level, level, ... } ... ]
    };

    atomicAdd( texUsage[layer * levelCount + int( textureQueryLod( tex, texCoord.xy ).y )], 1 );

  • Each frame, when I find that a mipmap level of a layer is visible, I commit it with the steps above and create a framebuffer (see the sketch after this list for how this ties together):

    glGenFramebuffers( 1, &id );

    glBindFramebuffer( GL_FRAMEBUFFER, id );
    glFramebufferTextureLayer( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, textureId, level, layer );

    // Do some drawing

    glBindFramebuffer( GL_FRAMEBUFFER, 0 );

  • Or, if I find that a mipmap level of a layer is no longer visible, I uncommit it with the code above.

  • After running into these random crashes I tried:

    • Using smaller textures (256x256x100) => crashed.
    • Not using sparse textures => no crashes.
    • Isolating the problem by removing mipmap levels (setting levels to 1) => didn't help.
    • Disabling Mosaic mode and forcing one GPU to do the work instead of three => somewhat better; at least I got an exception for the glTexturePageCommitmentEXT(..., false) call. But I still get random crashes.
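
To make the frame flow concrete, here is a sketch of the loop that ties the shader counters, the commitment calls, and the FBO drawing together. texUsageBuffer_, visible_ and renderIntoLayerLevel are illustrative names (commitLayerLevel is the helper sketched above), and the synchronous glMapBufferRange readback is a simplification:

// Per-frame feedback loop (illustrative names, see note above).
// Make the shader's atomicAdd writes visible to the CPU mapping below.
glMemoryBarrier( GL_BUFFER_UPDATE_BARRIER_BIT );

glBindBuffer( GL_SHADER_STORAGE_BUFFER, texUsageBuffer_ );
GLuint * usage = (GLuint *)glMapBufferRange( GL_SHADER_STORAGE_BUFFER, 0,
    layers * levels * sizeof( GLuint ), GL_MAP_READ_BIT | GL_MAP_WRITE_BIT );

for( int layer = 0; layer < layers; ++layer )
{
    for( int level = 0; level < levels; ++level )
    {
        // Same layout as texUsage in the shader: layer * levelCount + level
        const bool wasVisible = visible_[layer][level];
        const bool isVisible  = usage[layer * levels + level] > 0;
        usage[layer * levels + level] = 0; // reset the counter for the next frame

        if( isVisible && !wasVisible )
        {
            commitLayerLevel( textureId, level, layer, true );  // commit first...
            renderIntoLayerLevel( textureId, level, layer );    // ...then the FBO drawing above
        }
        else if( !isVisible && wasVisible )
        {
            commitLayerLevel( textureId, level, layer, false ); // no longer visible
        }

        visible_[layer][level] = isVisible;
    }
}

glUnmapBuffer( GL_SHADER_STORAGE_BUFFER );
glBindBuffer( GL_SHADER_STORAGE_BUFFER, 0 );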

My questions:

  • How can I resolve this issue?
  • What is happening?
  • Why don't I get any GL error?
  • Why does it throw an exception only when one GPU is active (Mosaic disabled)?
  • How can this cause the computer to reboot, and how can I prevent it?
  • I see the NVidia driver has an option to "Optimize Sparse Texturing" and it's turned on. What does this setting do? Any experience with it?
  • Any suggestions to improve this whole thing?
    The only explanation for the sudden computer restart is a really, really bad crash in the GPU driver, taking down the system with it. This is not supposed to happen and clearly a driver bug. There's nothing *you* can do to fix it, other than reporting it to NVidia and providing a test case program triggering the bug. Of course you can look for some workaround that avoids the crash, but this will not be by looking at documentation but will involve a lot of testing and crashing until you identified the exact circumstances that trigger the bug. – datenwolf Jul 05 '15 at 11:37
  • Keep in mind that NVidia's OpenGL drivers employ huge amounts of heuristics in their inner workings. Something you could try out immediately is disabling multithreaded driver support (multithreading in the driver settings). Multithreading in the OpenGL driver gives only very little performance gains in practice and if so then only with programs which the NVidia driver engineers did exhaustive testing with. So you can probably safely disable it without losing any significant performance. – datenwolf Jul 05 '15 at 11:39
  • Thanks @datenwolf. Just tried disabling multithreaded driver support, still crashing. – Ali Nakipoğlu Jul 05 '15 at 14:45

0 Answers