My graphics card is a GTX 1080 Ti, and I want to use an OpenGL 3D texture. The pixel (voxel) format is GL_R32F. OpenGL did not report any errors when I initialized the texture or rendered with it.

When the 3D texture was small (512x512x512), my program ran fast (~500FPS).

However, when I increased the size to 1024x1024x1024 (4GB), the FPS dropped dramatically to less than 1. Monitoring the GPU memory usage showed that it never exceeded 3GB, even though the texture alone is 4GB and the card has 11GB in total.

When I changed the pixel format to GL_R16F, it worked again: the FPS went back to ~500, and GPU memory consumption was about 6.2GB.

My hypothesis is that the 4GB 3D texture does not actually reside in GPU memory but in CPU memory instead, and that every frame the driver transfers this data from CPU memory to GPU memory again and again. As a result, performance collapses.

My first question is whether my hypothesis is correct. If it is, why does this happen even though I have plenty of GPU memory? And how do I force OpenGL data to reside in GPU memory?

user3677630
  • I believe your hypothesis is correct and seems logical. It might be that it is not optimised. – M2T156 Oct 31 '18 at 09:43
  • why don't you use sparse textures? – Paritosh Kulkarni Oct 31 '18 at 13:02
  • @ParitoshKulkarni If the data is not sparse, a sparse texture is not efficient. Still, using sparse textures to store a full 3D texture is possible, and maybe it would bypass my problem with the same amount of data. I will give it a try. – user3677630 Nov 01 '18 at 01:27
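For reference, the sparse-texture route from the comments could look roughly like the following sketch. It assumes a current GL context, a loader exposing `GL_ARB_sparse_texture`, and that commit regions are already multiples of the virtual page size (real code must query `GL_VIRTUAL_PAGE_SIZE_X_ARB` and friends); error handling is omitted:

```c
#include <GL/glew.h>  /* or any loader exposing GL_ARB_sparse_texture */

/* Hedged sketch: allocate a 1024^3 GL_R32F volume as a sparse texture.
   Even when committing the full volume, the driver manages the storage
   in page-sized chunks instead of one monolithic 4GB allocation. */
GLuint make_sparse_volume(void)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_3D, tex);

    /* Must be set BEFORE allocating immutable storage. */
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_SPARSE_ARB, GL_TRUE);
    glTexStorage3D(GL_TEXTURE_3D, 1, GL_R32F, 1024, 1024, 1024);

    /* Commit physical memory for the whole region (or per sub-region). */
    glTexPageCommitmentARB(GL_TEXTURE_3D, 0,   /* target, mip level   */
                           0, 0, 0,            /* region offset       */
                           1024, 1024, 1024,   /* region size         */
                           GL_TRUE);           /* commit, not release */
    return tex;
}
```

This needs a GPU context, so it is untested here; treat the exact call sequence as a starting point against the extension spec rather than verified code.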

1 Answer

My first question is whether my hypothesis is correct.

It is not implausible, at least.

If it is, why does this happen even though I have plenty of GPU memory?

That's something for your OpenGL implementation to decide. It might also be a driver bug, or some internal per-allocation limit.

How do I force OpenGL data to reside in GPU memory?

You can't. OpenGL has no concept of video RAM, system RAM, or even a GPU. You specify your buffers, textures, and other objects and make the draw calls, and it is the GL implementation's job to map all of this to the actual hardware. However, there are no performance guarantees whatsoever - you might hit a slow path or even a fallback to software rendering when you do certain things (the latter being really uncommon these days, but conceptually it is entirely possible).
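That said, on NVIDIA drivers you can at least observe what the driver is doing. The `GL_NVX_gpu_memory_info` extension reports VRAM availability and eviction statistics, which would directly confirm or refute the "texture shuttled to system memory" hypothesis. A minimal sketch, assuming a current GL context and a loader that defines the extension's enums:

```c
#include <stdio.h>
#include <GL/glew.h>  /* or any loader exposing GL_NVX_gpu_memory_info */

/* Hedged sketch: query NVIDIA's memory-info extension. All values are
   reported in kilobytes (the eviction count is a plain count). A rising
   eviction count during rendering suggests the driver is paging the
   texture in and out of VRAM. */
void print_vram_stats(void)
{
    GLint dedicated = 0, available = 0, evictions = 0, evicted_kb = 0;
    glGetIntegerv(GL_GPU_MEMORY_INFO_DEDICATED_VIDMEM_NVX, &dedicated);
    glGetIntegerv(GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX, &available);
    glGetIntegerv(GL_GPU_MEMORY_INFO_EVICTION_COUNT_NVX, &evictions);
    glGetIntegerv(GL_GPU_MEMORY_INFO_EVICTED_MEMORY_NVX, &evicted_kb);
    printf("VRAM: %d kB dedicated, %d kB free, %d evictions (%d kB evicted)\n",
           dedicated, available, evictions, evicted_kb);
}
```

This only works on NVIDIA and requires a live context, so it is a diagnostic aid, not a portable solution.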

If you want control over where data is placed, when it is actually transferred, and so on, you have to use a lower-level API like Vulkan.

derhass