
Is there any scheme using WebGL that allows processing one data record into a previously unknown number of records?

Using OpenGL, for example, a geometry shader can be used to multiply vertices depending on their attributes, and thus output data of unknown length.

Is there any trick to use WebGL in a similar fashion, or is this only possible on the JavaScript side?

dronus
  • Maybe it would be possible to compute the number of new records for every input record with a fragment shader. If the output still keeps some key to the record (e.g. the texture coordinate written to the fragment), we could maybe grow or shrink the regions needed for the child records by a repeated region-growing-like process. In every iteration, a pixel with a high child count could be split into pixels with zero child count, and the whole buffer enlarged, until there is no pixel with a child count >1. Ideally the number of pixels linked to one input record then matches its child count. – dronus Nov 17 '13 at 21:46
  • It would however be hard to determine when the operation is finished, or when to grow (e.g. double the size of) the pixel buffer. – dronus Nov 17 '13 at 21:49
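The counting-then-allocating idea from the comments above can be sketched on the CPU: compute a child count per record, prefix-sum the counts to get each record's output offset, and write children into a buffer sized by the total. This is a minimal illustrative sketch, not the GPU region-growing process itself; `childCount` is a hypothetical per-record rule.

```javascript
// Expand each input record into a variable number of child records.
// Each child keeps a key back to its parent, as suggested in the
// comments (so results can be matched to input records later).
function expandRecords(records, childCount) {
  const counts = records.map(childCount);       // children per record
  const offsets = [];                           // output slot per record
  let total = 0;
  for (const c of counts) { offsets.push(total); total += c; }
  const out = new Array(total);
  records.forEach((r, i) => {
    for (let j = 0; j < counts[i]; j++) {
      out[offsets[i] + j] = { parent: i, child: j, value: r };
    }
  });
  return out;
}
```

On a GPU this prefix sum would itself have to be done in passes, which is where the iteration-count and buffer-growth questions from the comments come in.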

1 Answer


Yup, there is no Geometry Shader in WebGL (just Vertex and Fragment).

So, yes, anything multiplicative needs to be implemented on the JS side, by generating more data or making more calls to gl.drawArrays/gl.drawElements.
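A minimal sketch of that JS-side expansion, assuming a hypothetical `copiesOf` rule that decides each point's multiplicity: build a variable-length vertex array on the CPU, then upload and draw it in one call.

```javascript
// Emulate a geometry shader's "multiply" step on the JS side:
// expand each input point into a variable number of output vertices
// before uploading the buffer.
function buildVertexData(points, copiesOf) {
  const out = [];
  for (const p of points) {
    const n = copiesOf(p);            // per-record multiplicity
    for (let i = 0; i < n; i++) {
      // offset the copies slightly along x so they are distinct
      // (illustrative only)
      out.push(p.x + i * 0.01, p.y);
    }
  }
  return new Float32Array(out);
}

// Then upload and draw as usual (browser context assumed):
//   gl.bufferData(gl.ARRAY_BUFFER, data, gl.STATIC_DRAW);
//   gl.drawArrays(gl.POINTS, 0, data.length / 2);
```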

One approach that might be applicable is to have lots of data (triangles, say), and use the Vertex Shader to algorithmically throw away some or all of them. Kind of the opposite of multiplying. But if you keep the same triangles, and change their processing with uniforms, or perhaps smaller data in textures, you can at least save the hit of sending up lots of different data.

To "throw away" a triangle, you need to put all three of its vertices outside the NDC (the -1 to +1 unit cube).
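A sketch of that trick as vertex shader source, assuming a per-vertex attribute `a_keep` (a made-up name here) that flags whether the triangle should survive:

```javascript
// Hypothetical vertex shader: when a_keep is 0, push the vertex far
// outside clip space. If all three vertices of a triangle land there,
// the whole triangle is clipped away and costs no fragment work.
const vertexShaderSource = `
attribute vec2 a_position;
attribute float a_keep;   // 1.0 = draw, 0.0 = discard (assumed convention)
void main() {
  if (a_keep < 0.5) {
    gl_Position = vec4(2.0, 2.0, 2.0, 1.0); // outside the clip volume
  } else {
    gl_Position = vec4(a_position, 0.0, 1.0);
  }
}`;
```

Since `a_keep` is just an attribute, the same geometry can be reused across draws while only a small flag buffer (or a texture lookup) changes.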

david van brink
  • If using triangles, I can throw away unused multiplications in the vertex shader. Cool. However, the remaining ones will get a sparse distribution on rasterisation to the output buffer. So I won't spend fragment computation on the discarded ones, which is cool, but I will have empty memory locations (e.g. unrendered pixels) for the discarded ones. After rendering, I would need to collect the results from the sparsely filled buffer in a very costly fashion, I think. There is no easy way of repacking that cuts away all the empty pixels... – dronus Nov 17 '13 at 21:39
  • Is this for LOD stuff? Maybe mutually-exclusive sets of polys. Like Mipmaps for geometry. Still could be a win over pushing fresh vertexes on every draw. – david van brink Nov 18 '13 at 17:13
  • Yes, a little, but no preprocessing is possible. This is for geometry refinement stuff, creating detail polygons on the fly like infinite recursive tessellation. Results will be cached, but the CPU won't deliver enough performance to create them. – dronus Nov 18 '13 at 23:18