
I found this post, which partly answers my question, but not completely:

How to drag the line segment by selecting the vertex

What I am trying to work out: a modern way (using the OpenGL 4 architecture) of performing vertex selection:

  • a user draws a selection rectangle (the rubber band itself is not what I am after)
  • I assume at this stage I can give some sort of unique ID to each vertex in the model
  • the selected region is then somehow re-rendered off-screen, using some sort of vertex/fragment shader that stores the selected vertices in a buffer I can read back?

I am just guessing it involves some sort of buffer the shaders can write the rendered vertices into, and which is readable back in the program. I wonder if someone has done this already, or could at least point me in the right direction. It needs to be fast (ideally), as I am working on very large models, and use OpenGL 4 (no deprecated features such as GL_SELECT, etc.).

  • Ideally, I'd also like to use the same technique to select other components such as edges and faces.
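The ID-buffer idea guessed at above is commonly implemented as "color picking": render each vertex as a point whose color encodes its index, read the pixels under the rubber band back (e.g. with glReadPixels), and decode the colors into vertex IDs. A minimal sketch of just the CPU-side encoding, assuming IDs fit in 24 bits and an RGBA8 framebuffer; the function names `idToColor`/`colorToId` are illustrative, not from any API:

```cpp
#include <array>
#include <cstdint>

// Pack a 24-bit vertex ID into an RGBA8 color (alpha fixed to 255).
// The fragment shader would output this color per point; after the
// off-screen render, the program reads the pixels back and decodes them.
std::array<std::uint8_t, 4> idToColor(std::uint32_t id) {
    return { static_cast<std::uint8_t>( id        & 0xFF),
             static_cast<std::uint8_t>((id >>  8) & 0xFF),
             static_cast<std::uint8_t>((id >> 16) & 0xFF),
             255 };
}

// Decode a read-back pixel into the vertex ID it encodes.
std::uint32_t colorToId(const std::array<std::uint8_t, 4>& c) {
    return static_cast<std::uint32_t>(c[0])
         | static_cast<std::uint32_t>(c[1]) << 8
         | static_cast<std::uint32_t>(c[2]) << 16;
}
```

With an integer render target (e.g. a GL_R32UI texture attached to the FBO) the shader can write gl_VertexID directly and the packing step disappears; the packed-color variant shown here has the advantage of working with any plain RGBA8 attachment.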
user18490
  • I think you could use a technique like the one described here: http://rastergrid.com/blog/2010/02/instance-culling-using-geometry-shaders/ If you want to use the depth buffer to determine visibility, you would sample it in the vertex shader at the projected position of each point and pass that info to the geometry shader which decides whether or not to emit the point to the feedback buffer. I don't think the technique could be used for edges or faces. – GuyRT Jun 30 '14 at 12:34
  • Thank you, this is helpful. I will have a look and post my answer here when I have a working solution (if nobody answers before). But this is a good start. – user18490 Jun 30 '14 at 15:03

1 Answer


Quite frankly: OpenGL is about getting things drawn to the screen, not scene management or selection.

You can use modern OpenGL to implement selection using transform feedback buffers, or by abusing an FBO as a vertex ID buffer.

But I'd really not use OpenGL for that, at least not the drawing pipeline. If there's a need for GPU acceleration, I'd use OpenCL or OpenGL compute shaders to transform the subset of vertices I'm interested in into screen space and to build a screen-space 2D Kd-tree from them. Then, using that Kd-tree, perform a nearest-neighbour/boundary search to find which vertices are within the selection. If OpenCL or OpenGL compute shaders are not available, you can do the transformation on the CPU as well.
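The screen-space Kd-tree approach can be sketched without any GPU code at all: transform the vertices to viewport coordinates (on the CPU or in a compute shader), build a 2D k-d tree over them, and answer the rubber-band selection with a rectangular range query. A minimal, self-contained sketch, assuming a median-split tree stored implicitly in the reordered point array; all names are illustrative:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

struct Point { float x, y; std::uint32_t id; };

// Build: recursively median-split on alternating axes. The tree is stored
// implicitly: the median of each subrange is that subtree's root.
void buildKdTree(std::vector<Point>& pts, std::size_t lo, std::size_t hi, int axis) {
    if (hi - lo <= 1) return;
    std::size_t mid = (lo + hi) / 2;
    std::nth_element(pts.begin() + lo, pts.begin() + mid, pts.begin() + hi,
        [axis](const Point& a, const Point& b) {
            return axis == 0 ? a.x < b.x : a.y < b.y;
        });
    buildKdTree(pts, lo, mid, 1 - axis);
    buildKdTree(pts, mid + 1, hi, 1 - axis);
}

// Collect the IDs of all points inside the axis-aligned selection rectangle
// [x0,x1] x [y0,y1], pruning subtrees that lie entirely outside the query
// on the current splitting axis.
void rangeQuery(const std::vector<Point>& pts, std::size_t lo, std::size_t hi,
                int axis, float x0, float y0, float x1, float y1,
                std::vector<std::uint32_t>& out) {
    if (lo >= hi) return;
    std::size_t mid = (lo + hi) / 2;
    const Point& p = pts[mid];
    if (p.x >= x0 && p.x <= x1 && p.y >= y0 && p.y <= y1) out.push_back(p.id);
    float split = axis == 0 ? p.x : p.y;
    float qlo = axis == 0 ? x0 : y0;
    float qhi = axis == 0 ? x1 : y1;
    if (qlo <= split) rangeQuery(pts, lo, mid, 1 - axis, x0, y0, x1, y1, out);
    if (qhi >= split) rangeQuery(pts, mid + 1, hi, 1 - axis, x0, y0, x1, y1, out);
}
```

The build is O(n log n) and, as noted in the comments below, needs no balancing pass for selection purposes; the range query then only visits subtrees whose splitting plane overlaps the rubber-band rectangle.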

datenwolf
  • That's great and very useful. I think you are not the first person to insist on the fact that OpenGL is not a scene graph management system. Point taken. The problem with the approach you suggest, for instance, is that it wouldn't deal with depth sorting. For example, if I want to select only the vertices which are visible, the 2D Kd-tree approach wouldn't work! Which is the reason I was wondering whether an approach involving rendering wouldn't be better? – user18490 Jun 30 '14 at 12:05
  • @user18490: You can of course read back the depth buffer of the rendering you're selecting "into" and test whether the found vertices pass the depth test. – datenwolf Jun 30 '14 at 13:21
  • Yes, that's possible. All this sounds good, but I have a hard time believing that modelling tools, whether Rhino, Maya or Blender, use this approach. Having to create an acceleration structure seems like a lot of work for this. What happens if you edit the points? You need to rebuild the Kd-tree, etc.; it's crazy. – user18490 Aug 12 '14 at 21:47
  • @user18490: Screen space Kd-trees are built very quickly. Also you don't have to balance the tree for selection. But you definitely want a spatial subdivision, otherwise the O(n) of a naive selection mechanism will bite you. Blender uses a mixed approach: In Z-shaded mode the depth of the fragment under the pointer is tested against the candidate vertices and only vertices passing the depth test are considered. – datenwolf Aug 12 '14 at 22:07
  • You still need to update the structure each time you move a point. I believe you; you seem to know about Blender. I would assume things would be a little more involved if you want to only render or select vertices within a specific region. I think it's odd that there's no clear tutorial on such a basic/important feature (can't find how to keep the @ sign at the front). – user18490 Aug 12 '14 at 22:20
  • @user18490: Actually you'd have to update the structure every time you move the point of view. The idea of a screen-space 2D Kd-tree is to do it all in viewport coordinates. I.e. you don't bother with casting a ray into the scene; you simply select the candidate objects based on their location in the 2D viewport. Building the 2D Kd-tree can be done in the frame setup phase, in which pre-rendering steps, like frustum culling, depth sorting of transparent geometry and such, happen. – datenwolf Aug 12 '14 at 23:02
  • @user18490: Also, modern rendering pipelines use occlusion queries, which in combination with a screen-space Kd-tree are a powerful tool to enhance rendering performance by discarding geometry if its 2D Kd-tree branch becomes completely occluded. If it's not a Kd-tree, then a different spatial subdivision, but most rendering pipelines will keep some screen-space representation of the scene for various purposes. And with such a structure already in place, you get screen-space based selection for free. – datenwolf Aug 12 '14 at 23:04
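The visibility filtering discussed in the comments above could look something like this: read the depth buffer of the rendered view back (e.g. with glReadPixels and GL_DEPTH_COMPONENT), then keep only the candidate vertices whose own window-space depth passes the depth test at their pixel. A hedged sketch, assuming depths in [0, 1] and a small bias to absorb interpolation/quantisation error; names are illustrative:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// A candidate vertex already projected to window coordinates:
// integer pixel position plus its own window-space depth.
struct ProjectedVertex { int px, py; float depth; std::uint32_t id; };

// Keep only the candidates that are not occluded: a vertex is considered
// visible when its depth is no greater than the depth stored in the
// read-back depth buffer at its pixel, within a small bias.
std::vector<std::uint32_t> visibleVertices(const std::vector<ProjectedVertex>& cands,
                                           const std::vector<float>& depthBuf,
                                           int width, int height,
                                           float bias = 1e-4f) {
    std::vector<std::uint32_t> out;
    for (const ProjectedVertex& v : cands) {
        if (v.px < 0 || v.px >= width || v.py < 0 || v.py >= height) continue;
        float stored = depthBuf[static_cast<std::size_t>(v.py) * width + v.px];
        if (v.depth <= stored + bias) out.push_back(v.id);
    }
    return out;
}
```

This is the "mixed approach" flavour described for Blender's Z-shaded mode: the spatial structure supplies the candidates inside the rubber band, and the depth buffer rejects the occluded ones.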