
I'm currently implementing an algorithm to generate a voxelized mesh from a point cloud. Right now I'm using randomly generated unsigned integers in a 64x64x64 grid. Then, for every point in the grid, an algorithm checks each neighbour to see whether it is transparent (0u) or opaque (1u), and adds a face to the current voxel in every direction where the neighbour is transparent.

Something like this (the real implementation is in 3D):

-0-
0x1
-1-

Let's say this is the situation for a particular voxel (x). In this case the algorithm will place a face on the left and top sides, as the voxels in those positions are transparent, but not on the bottom and right sides: any faces there would be hidden by the non-transparent voxels.

As of now I'm placing a face with a function that takes references to two arrays, one for the vertices (floats) and one for the indices, or elements (unsigned ints), and appends the required data to them (4 vertices and 6 indices per face).
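Roughly like this (a simplified sketch; the function name and the fact that the corner positions are passed in precomputed are assumptions, not my exact code):

```cpp
#include <cstdint>
#include <vector>

// Appends one quad face: 4 vertices and 6 indices (two triangles).
void addFace(std::vector<float>& vertices,
             std::vector<uint32_t>& indices,
             const float corners[12]) // 4 corners * xyz
{
    // Index of the first vertex we are about to add.
    const uint32_t base = static_cast<uint32_t>(vertices.size() / 3);

    vertices.insert(vertices.end(), corners, corners + 12);

    // Two triangles covering the quad.
    const uint32_t quad[6] = {base, base + 1, base + 2,
                              base + 2, base + 3, base};
    indices.insert(indices.end(), quad, quad + 6);
}
```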

Since this is a very simple algorithm that I think would parallelize well, I was considering implementing it in a compute shader to generate the geometry. However, copying the generated vectors back to the CPU doesn't seem like a good idea, since they could be very big.

So, I know that to render data you have to upload it from the CPU to the GPU with a call like:

glBufferData(GL_ARRAY_BUFFER, sizeof(float) * m_Vertices.size(), &m_Vertices[0], GL_STATIC_DRAW);

In this case I'm passing a std::vector of floats to the array buffer. Since data generated by a compute shader already lives on the GPU, I was wondering whether it's possible to skip copying the generated arrays (or vectors) back to the CPU and uploading them again for drawing, and instead tell OpenGL directly to read the draw data from a particular piece of GPU memory.
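What I have in mind is roughly this (untested sketch, requires a GL 4.3 context; `maxVertexBytes` and `computeProgram` are placeholders): have the compute shader write into a shader storage buffer, then bind that same buffer object as the vertex buffer so the data never leaves the GPU.

```cpp
GLuint buf;
glGenBuffers(1, &buf);

// Allocate worst-case space up front, since a compute shader
// cannot grow the buffer while writing to it.
glBindBuffer(GL_SHADER_STORAGE_BUFFER, buf);
glBufferData(GL_SHADER_STORAGE_BUFFER, maxVertexBytes, nullptr, GL_DYNAMIC_COPY);

// Bind it to the binding point declared in the compute shader,
// then dispatch (e.g. 8x8x8 threads per work group over a 64^3 grid).
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, buf);
glUseProgram(computeProgram);
glDispatchCompute(64 / 8, 64 / 8, 64 / 8);
glMemoryBarrier(GL_VERTEX_ATTRIB_ARRAY_BARRIER_BIT);

// Reuse the same buffer object as the vertex buffer: no CPU round trip.
glBindBuffer(GL_ARRAY_BUFFER, buf);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), nullptr);
```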

Is something like this possible or is there some mistake in my reasoning?

Fabrizio
  • What I think you are describing is, at greater length, chunk meshing. What you say you are looking for is a geometry shader: it creates vertices on the GPU, but it will not be efficient for something static that won't move or change. When you buffer your data, it allocates memory on the GPU for you. What I did in my voxel game is to have a static index buffer that is passed to the vertex shader for every chunk. Keep in mind that you can always use multi-threading for chunk meshing. You can also use so-called sprite batching to reduce draw calls. – vikAy Apr 01 '21 at 23:03
  • https://stackoverflow.com/questions/59686151/ – genpfault Apr 01 '21 at 23:25
  • I only started researching this, but mesh shading seems ideal for generating geometry to be rendered from an arbitrary input. Only problem is that it's an extension only available on newer GPUs. – ja2142 May 26 '23 at 20:14

0 Answers