I'm currently implementing an algorithm to generate a voxelized mesh from a point cloud. Right now I'm testing it on randomly generated unsigned integers in a 64x64x64 grid. For every voxel in that grid, the algorithm checks its neighbours to see whether they are transparent (0u) or not (1u), and adds a face to the current voxel for each direction in which the neighbouring voxel is transparent.
Something like this (the real implementation is in 3D):
-0-
0x1
-1-
Let's say this is the situation for a particular voxel (x). In this case the algorithm will place a face on the left and top sides, because the voxels in those positions are transparent, but not on the bottom and right sides: any faces there would be hidden by the non-transparent voxels next to them.
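To make this concrete, here is a rough sketch of that pass (simplified, with hypothetical names; the grid is assumed to be a flat std::vector of uint32_t of size 64x64x64, and AddFace is the face-emitting function described next):

#include <cstdint>
#include <vector>

constexpr int N = 64;

// Defined in the next snippet: appends one face (4 vertices, 6 indices).
void AddFace(std::vector<float>& vertices, std::vector<unsigned int>& indices,
             int x, int y, int z, int dir);

inline uint32_t VoxelAt(const std::vector<uint32_t>& grid, int x, int y, int z)
{
    // Treat anything outside the grid as transparent so border voxels get faces.
    if (x < 0 || y < 0 || z < 0 || x >= N || y >= N || z >= N)
        return 0u;
    return grid[x + N * (y + N * z)];
}

void BuildMesh(const std::vector<uint32_t>& grid,
               std::vector<float>& vertices,
               std::vector<unsigned int>& indices)
{
    // Offsets to the six neighbours: -x, +x, -y, +y, -z, +z.
    const int dirs[6][3] = { {-1,0,0}, {1,0,0}, {0,-1,0}, {0,1,0}, {0,0,-1}, {0,0,1} };

    for (int z = 0; z < N; ++z)
        for (int y = 0; y < N; ++y)
            for (int x = 0; x < N; ++x)
            {
                if (VoxelAt(grid, x, y, z) == 0u)
                    continue; // transparent voxels emit no geometry

                for (int d = 0; d < 6; ++d)
                {
                    // Only emit a face where the neighbour in that direction is transparent.
                    if (VoxelAt(grid, x + dirs[d][0], y + dirs[d][1], z + dirs[d][2]) == 0u)
                        AddFace(vertices, indices, x, y, z, d);
                }
            }
}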
As of now I'm placing a face using a function which takes references to two arrays, the first for the vertices (floats) and the second for the indices or elements (unsigned ints), and appends the required data to them (4 vertices and 6 indices per face).
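That function looks roughly like this (simplified; the per-face corner table and the position-only vertex layout are assumptions made for the sketch, not my exact code):

#include <vector>

// Four corner offsets for each of the six face directions (-x, +x, -y, +y, -z, +z).
static const float FACE_VERTICES[6][4][3] = {
    { {0,0,0}, {0,0,1}, {0,1,1}, {0,1,0} }, // -x
    { {1,0,1}, {1,0,0}, {1,1,0}, {1,1,1} }, // +x
    { {0,0,0}, {1,0,0}, {1,0,1}, {0,0,1} }, // -y
    { {0,1,1}, {1,1,1}, {1,1,0}, {0,1,0} }, // +y
    { {1,0,0}, {0,0,0}, {0,1,0}, {1,1,0} }, // -z
    { {0,0,1}, {1,0,1}, {1,1,1}, {0,1,1} }, // +z
};

void AddFace(std::vector<float>& vertices, std::vector<unsigned int>& indices,
             int x, int y, int z, int dir)
{
    // The four new vertices start at this index in the vertex array (3 floats per vertex).
    unsigned int base = static_cast<unsigned int>(vertices.size() / 3);

    for (int v = 0; v < 4; ++v)
    {
        vertices.push_back(static_cast<float>(x) + FACE_VERTICES[dir][v][0]);
        vertices.push_back(static_cast<float>(y) + FACE_VERTICES[dir][v][1]);
        vertices.push_back(static_cast<float>(z) + FACE_VERTICES[dir][v][2]);
    }

    // Two triangles per face: 6 indices referencing the 4 vertices above.
    const unsigned int quad[6] = { 0, 1, 2, 2, 3, 0 };
    for (unsigned int i : quad)
        indices.push_back(base + i);
}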
Since this is a very simple algorithm and I think it would parallelize well, I was thinking of implementing it in a compute shader to generate the geometry and then passing the generated vectors back to the CPU, but since the generated vectors could be very big, I don't think that would be a good idea.
So, I know that to render data you have to pass it from the CPU to the GPU using a call like:
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * m_Vertices.size(), &m_Vertices[0], GL_STATIC_DRAW);
In this case I'm passing a std::vector of floats to the array buffer. Since data generated in a compute shader already lives on the GPU, I was wondering whether it's possible to skip the step of copying the arrays (or vectors) generated by the compute shader back to the CPU and then uploading them to the GPU again for drawing, and instead tell OpenGL directly which GPU buffer to read the draw data from.
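To make the question concrete, this is a rough sketch of the kind of thing I'm hoping is possible (assuming an OpenGL 4.3+ context; computeProgram, maxVertexBytes, vertexCount and the binding index 0 are placeholder names, not working code I already have):

#include <glad/glad.h> // or whichever GL loader is in use

void BuildAndDrawOnGpu(GLuint computeProgram, GLsizeiptr maxVertexBytes, GLsizei vertexCount)
{
    GLuint vbo;
    glGenBuffers(1, &vbo);

    // Allocate GPU-side storage once, with no CPU data (nullptr instead of &m_Vertices[0]).
    glBindBuffer(GL_SHADER_STORAGE_BUFFER, vbo);
    glBufferData(GL_SHADER_STORAGE_BUFFER, maxVertexBytes, nullptr, GL_DYNAMIC_COPY);

    // Expose the buffer to the compute shader, which would declare it as
    // "layout(std430, binding = 0) buffer Vertices { float data[]; };" in GLSL.
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, vbo);
    glUseProgram(computeProgram);
    glDispatchCompute(64 / 8, 64 / 8, 64 / 8); // assuming an 8x8x8 local workgroup size

    // Make the compute shader writes visible to vertex attribute fetches.
    glMemoryBarrier(GL_VERTEX_ATTRIB_ARRAY_BARRIER_BIT);

    // Bind the very same buffer object as a vertex buffer and draw from it,
    // with no glBufferData upload from the CPU in between.
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
    glEnableVertexAttribArray(0);
    glDrawArrays(GL_TRIANGLES, 0, vertexCount); // how to get vertexCount is part of the question

    glDeleteBuffers(1, &vbo);
}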
Is something like this possible or is there some mistake in my reasoning?