
I currently have an OpenGL 3.3 3D environment renderer that renders a (static) block of terrain, and I've been tasked with adding an overlay of statistical data to it: setting specific pixel colours on the terrain based on the data value at each point.

The data in question is effectively supplied as a black box in my C++ code base: I can feed it an X,Y pair of doubles (in worldspace), and it outputs a data value for that location (the terrain does have a third dimension, but the data is not concerned with that). The data is also time-varying: on changing the time co-ordinate, the scene is expected to update with the data corresponding to the new co-ordinate.
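For concreteness, you can picture the black box as something like the following throughout this question; the name and signature are my own shorthand rather than the real interface:

```cpp
// Hypothetical stand-in for the real black box: worldspace (x, y) and a time
// co-ordinate in, statistical data value out. All doubles. The real function
// is far more complex and references multiple external data sources.
double queryDataValue(double worldX, double worldY, double time);
```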

I have a first implementation, the obvious one: when creating each vertex, the appropriate data value for that location is looked up in the black box and encoded in a dynamic buffer accompanying it, and that buffer is re-uploaded as the time co-ordinate changes. This works perfectly in itself; it's fast to update, and the data is rendered as expected.
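Roughly, the update step looks like this (a minimal sketch assuming the hypothetical queryDataValue above, one float per vertex, and a separate VBO created with GL_DYNAMIC_DRAW for the data values):

```cpp
#include <vector>
// GL function loader (e.g. glad/GLEW) and context assumed to be set up.

// One float per vertex, kept in its own VBO so it can be re-uploaded
// cheaply without touching the static position data.
void updateDataBuffer(GLuint dataVbo,
                      const std::vector<double>& xs,
                      const std::vector<double>& ys,
                      double time)
{
    std::vector<float> values(xs.size());
    for (size_t i = 0; i < values.size(); ++i)
        values[i] = static_cast<float>(queryDataValue(xs[i], ys[i], time));

    glBindBuffer(GL_ARRAY_BUFFER, dataVbo);
    // Orphan and refill on each time change.
    glBufferData(GL_ARRAY_BUFFER,
                 static_cast<GLsizeiptr>(values.size() * sizeof(float)),
                 values.data(), GL_DYNAMIC_DRAW);
}
```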

However, it only has data points per vertex, with simple interpolation across each polygon, and the question has been raised as to whether it's possible to render the data per pixel instead.

I'm struggling with this. I can't realistically implement the black box behaviour directly in the shaders; it's a large, complex function that I don't fully understand myself (hence representing it here as a black box!), and it requires referencing multiple data sources. There was a version early on, before my time on the project, that rendered the entire scene in our (separate, non-OpenGL, 2D) top-down environment renderer at an extremely high resolution and applied the result as a texture to the mesh, but that's both cripplingly slow and still not true per-pixel data: you can still zoom to a point where the resolution breaks down.

I'm not currently using deferred rendering, but I'm wondering if I can apply similar principles here. One thing I'm currently considering is whether, during the render process, there's a way to store worldspace X and Y data per pixel in a buffer of some kind (stencil? G-buffer? an arbitrary render target?), and then, back in the C++ environment, generate an overlay texture each frame based on those accumulated X and Y values. I'm somewhat put off by the notion that this would require double precision, though; much of what I've seen suggests steering clear of any double calculations in GLSL, and again I'm worried about speed (although is a simple passthrough and interpolation of double-precision data less impactful?). Plus, I'm not entirely sure that what I'm suggesting is even possible!
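Mechanically, what I have in mind is something along these lines (a rough sketch only: it accepts single-precision GL_RG32F rather than true doubles, and omits FBO completeness checks and the shaders themselves):

```cpp
#include <vector>
// GL loader assumed, plus the hypothetical queryDataValue from above.

// Sketch: render worldspace X/Y per pixel into a GL_RG32F attachment, read it
// back, run every pixel through the black box, and build an overlay image.
// Coordinates are truncated to single precision; error checking is omitted.
std::vector<float> buildOverlay(int width, int height, double time)
{
    GLuint coordTex = 0, fbo = 0;
    glGenTextures(1, &coordTex);
    glBindTexture(GL_TEXTURE_2D, coordTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RG32F, width, height, 0,
                 GL_RG, GL_FLOAT, nullptr);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, coordTex, 0);

    // ... render the terrain here with a trivial fragment shader whose only
    //     output is the interpolated worldspace position: outCoord = worldPos.xy;

    std::vector<float> coords(static_cast<size_t>(width) * height * 2);
    glReadPixels(0, 0, width, height, GL_RG, GL_FLOAT, coords.data());

    // Query the black box once per pixel; the result could then be uploaded
    // as a GL_R32F texture and sampled in the final colouring pass.
    std::vector<float> overlay(static_cast<size_t>(width) * height);
    for (size_t i = 0; i < overlay.size(); ++i)
        overlay[i] = static_cast<float>(
            queryDataValue(coords[2 * i], coords[2 * i + 1], time));

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    return overlay;
}
```

The synchronous glReadPixels would presumably stall the pipeline (a pixel buffer object might make the readback asynchronous), and I suspect the per-pixel black-box loop would dominate the frame time in any case.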

I may be overcomplicating this somewhat, though; there may be far simpler solutions that aren't in my frame of reference yet. So I'm curious to hear any suggestions for better approaches, or whether this is unrealistic.

(While I'm currently using 3.3, a solution requiring 4+ is not off the table.)

  • What I'm thinking is to create a two-pass program: the first pass has a fragment shader which just renders the interpolated texture coordinates onto your terrain (encoded as colors). You then have a raster image that is essentially a list of the actual coordinates you'll have to query from your black box. Encode the results in a texture with the same dimensions as the output of the first pass, then pass that texture to a second step, in which the fragment shader looks up the values for the fragments based on the interpolated texture coordinates (see the sketch after these comments). Does this make sense? – Steeve Jan 19 '17 at 17:10
  • This appears similar to my musings about outputting the worldspace X and Y data to a buffer, but that had the associated precision worries. With this approach, though, I'm not clear on how I get from the interpolated texture co-ordinates embedded in the image to the worldspace co-ordinates for the black box. – Matt Clemson Jan 19 '17 at 23:20
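For reference, the first pass Steeve describes could be as simple as the following: hypothetical GLSL 330 shaders, shown here as C++ string literals, with placeholder attribute and uniform names:

```cpp
// Hypothetical shaders for the suggested first pass: render the terrain, but
// output each fragment's interpolated texture coordinate as the "colour".
const char* kCoordVert = R"(#version 330 core
layout(location = 0) in vec3 aPos;
layout(location = 1) in vec2 aUV;
uniform mat4 uMvp;
out vec2 vUV;
void main() {
    vUV = aUV;                       // interpolated across each triangle
    gl_Position = uMvp * vec4(aPos, 1.0);
})";

const char* kCoordFrag = R"(#version 330 core
in vec2 vUV;
out vec2 outCoord;                   // written to a GL_RG32F attachment
void main() {
    outCoord = vUV;                  // the coordinate is the output colour
})";
```

If the terrain's UVs happen to map affinely onto worldspace X/Y (as with a regular grid), recovering worldspace co-ordinates for the black-box query is just origin + uv * extent on the CPU side; otherwise the first pass could write interpolated worldspace positions directly instead of UVs.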
