
Let's suppose I have some points p1, p2, p3 and p4. I need to apply some transformations to each of them in the Geometry Shader phase based on its successor, so my GS would need access to the pairs (p1, p2), (p2, p3), (p3, p4). How can I achieve this? If I use the POINTS primitive I can only access a single point at a time.

Please also note that this is a simplification, since in practice I would need to have four points at a time, placed like the vertices of a cube. I have thought of using something like a line strip, but it doesn't provide enough points...

EDIT: To clarify, what I am actually trying to achieve is to have the CPU send a "cubic lattice" (?) to the GPU expressed as a set of points. My GS will have to take four of these points at a time, each representing one of the cube's vertices, and output triangles based on the attributes of these points.

  • You can read the number of vertices that a GS can receive [here](https://www.khronos.org/opengl/wiki/Geometry_Shader#Primitive_in.2Fout_specification) – Ripi2 May 18 '17 at 19:47
  • Seems like LINES_ADJACENCY should do the trick for me, but from the little info I found in a quick Google search, it seems impossible to specify the order in which the lines are passed... That is, I cannot have 4 separate points passed to two separate invocations of the Geometry Shader... – Francesco Bertolaccini May 18 '17 at 19:53
  • The GS receives the vertices you define with the `layout(type) in` declaration in the shader, which **must match the primitive type of the draw call**. If you need several non-adjacent vertices at once, add an attribute in the VS that indexes into another buffer (or a texture) holding all the vertices, and get rid of the GS, which is usually slow. – Ripi2 May 18 '17 at 20:08
  • I need to use a GS because I will generate triangles based on these points, but I need four of them to be able to do that. I could do it all on the CPU, but it is an inherently parallel process and I wanted to leverage the power of the GPU to do it. – Francesco Bertolaccini May 19 '17 at 15:07
  • @FrancescoBertolaccini something like this: [GLSL rendering 2D cubic bezier curves](https://stackoverflow.com/a/60113617/2521214) ? – Spektre Feb 10 '20 at 15:31
  • @Spektre My objective at the time was executing marching cubes/tetrahedra on the GPU, but I've long since given up :) – Francesco Bertolaccini Feb 10 '20 at 20:41

1 Answer


Let's say you have your 3D lattice in a buffer. You know the order (e.g. by rows), so you know in advance how to extract the four points needed in each iteration. For a regular grid, you know the stride between points, so you can use glVertexAttribPointer() with the right stride and offset parameters.
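
For example, here is a minimal C sketch of that idea. It assumes the lattice positions are tightly packed vec3s stored row by row, that a VAO is already bound, and that names such as `latticeVBO`, `rowLength` and `cellCount` are placeholders, not anything from the original question:

```c
/* Sketch only: four attributes read the same buffer at fixed offsets, so every
 * vertex-shader invocation sees four neighbouring lattice points at once. */
GLsizei stride = 3 * sizeof(GLfloat);   /* one tightly packed vec3 per lattice point */

glBindBuffer(GL_ARRAY_BUFFER, latticeVBO);

glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (void *)(size_t)(0 * stride));               /* p[i]                 */
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, stride, (void *)(size_t)(1 * stride));               /* p[i + 1]             */
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, stride, (void *)((size_t)rowLength * stride));       /* p[i + rowLength]     */
glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, stride, (void *)((size_t)(rowLength + 1) * stride)); /* p[i + rowLength + 1] */

glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);
glEnableVertexAttribArray(3);

/* One GL_POINTS vertex per lattice cell. Keep cellCount small enough that the
 * shifted attributes never read past the end of the buffer, and skip (or handle
 * separately) the cells at the end of each row. */
glDrawArrays(GL_POINTS, 0, cellCount);
```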

You can also use an indexed buffer and glDrawElements().

Another approach, likely slower, is to bind four buffers in the same VAO and read them through different attributes.

The draw command can be glDrawArrays(GL_POINTS, ...). You can even try instanced drawing and use the instance ID as an index into the lattice.
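
For the instanced variant, a hedged GLSL sketch of the vertex shader could look like this. It assumes the lattice positions were uploaded as RGBA32F texels of a buffer texture, and `lattice`, `rowLength` and the `p0`..`p3` members are made-up names for illustration; the matching geometry shader is sketched after the last paragraph below:

```glsl
// Sketch only: one instance per lattice cell, positions fetched by gl_InstanceID.
#version 330 core

uniform samplerBuffer lattice;   // all lattice points as RGBA32F texels
uniform int rowLength;           // number of points per lattice row

out VertexData { vec3 p0, p1, p2, p3; } vOut;

void main()
{
    int i = gl_InstanceID;                               // one cell per instance
    vOut.p0 = texelFetch(lattice, i).xyz;
    vOut.p1 = texelFetch(lattice, i + 1).xyz;
    vOut.p2 = texelFetch(lattice, i + rowLength).xyz;
    vOut.p3 = texelFetch(lattice, i + rowLength + 1).xyz;
    gl_Position = vec4(0.0);                             // the GS emits the real positions
}
```

You would draw it with something like `glDrawArraysInstanced(GL_POINTS, 0, 1, cellCount)`, so each instance processes one cell.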

The point is that glDraw* will read from the bound buffers as many times as you specify, and each time you can read your four points.

Whatever you use, you get four points in the VS that you can pass to the GS.
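
As a minimal GLSL sketch of that hand-off (assuming the four-attribute setup above; `mvp` and the `VertexData` block are placeholder names, and the triangle emitted here is only a stand-in for the real marching-cubes/tetrahedra logic):

```glsl
// Vertex shader (sketch): forwards the four lattice points untouched.
// The attribute locations match the glVertexAttribPointer calls above.
#version 330 core
layout(location = 0) in vec3 aP0;
layout(location = 1) in vec3 aP1;
layout(location = 2) in vec3 aP2;
layout(location = 3) in vec3 aP3;

out VertexData { vec3 p0, p1, p2, p3; } vOut;

void main()
{
    vOut.p0 = aP0;
    vOut.p1 = aP1;
    vOut.p2 = aP2;
    vOut.p3 = aP3;
    gl_Position = vec4(0.0);   // the GS builds the real positions
}
```

```glsl
// Geometry shader (sketch): with a 'points' input the interface array has size 1,
// but that single element already carries the four lattice positions.
#version 330 core
layout(points) in;
layout(triangle_strip, max_vertices = 12) out;

in VertexData { vec3 p0, p1, p2, p3; } vIn[];   // size 1 for a points input

uniform mat4 mvp;   // placeholder transform

void main()
{
    // Stand-in output: one triangle from three of the four points.
    gl_Position = mvp * vec4(vIn[0].p0, 1.0); EmitVertex();
    gl_Position = mvp * vec4(vIn[0].p1, 1.0); EmitVertex();
    gl_Position = mvp * vec4(vIn[0].p2, 1.0); EmitVertex();
    EndPrimitive();
}
```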

Ripi2
  • Gonna try this as soon as I can! Sometimes the simplest solutions just pass before my eyes :( – Francesco Bertolaccini May 19 '17 at 16:46
  • I am having trouble implementing your solution: if I write something like `layout(points) in; layout(triangle_strip, max_vertices=12) out; in VertexData { float weight; } VertexIn[4];` The GLSL won't compile as "OpenGL requires geometry input array size to match input primitive size"... And I am not really sure on how to access a buffer from GLSL... – Francesco Bertolaccini May 20 '17 at 16:10