
I think I'm experiencing precision issues in the pixel shader when reading the texcoords that have been interpolated from the vertex shader.

My scene consists of some very large triangles (edges up to 5000 units long, and texcoords ranging from 0 to 5000 units, so that the texture is tiled about 5000 times), and I have a camera that is looking very closely at one of those triangles (the camera might be so close that its viewport only covers a couple of meters of the large triangle). When I pan the camera along the plane of the triangle, the texture is lagging and jumpy. My suspicion is that I am losing precision in the interpolated texcoords.
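(A rough back-of-the-envelope check, assuming the interpolated values are 32-bit floats: near a texcoord value of 5000, the spacing between adjacent representable floats is 2^-11 ≈ 0.0005, i.e. roughly 1/2000 of one texture tile. With the camera zoomed in so far that a single tile covers a large part of the screen, one such step can span many pixels, which would look exactly like this jumpiness.)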

Is there a way to increase the precision of the texcoord interpolation?

My first thought was to store texcoord u in double precision in the xy-components, and texcoord v in the zw-components. But I guess that will not work, since the shader interpolation assumes 4 separate single-precision components rather than 2 double-precision components?

If there is no solution on the shader side, I guess I'll just have to tessellate the triangles into finer pieces? I'd hate to do that just for this issue, though. Any ideas?

EDIT: The problem is also visible when printing texcoords as colors on the screen, without any actual texture sampling at all.
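For reference, here is a minimal sketch of that debug shader (Cg-style syntax; the `TEXCOORD0` semantic, the entry point name and a ps_3_0-style profile are just assumptions for illustration). It skips texture sampling entirely and writes the fractional part of the interpolated texcoords to the output color, so any stepping in the interpolation shows up directly as stepping in the gradient:

```
// Debug pixel shader: visualize the interpolated texcoords instead of sampling a texture.
float4 main(float2 uv : TEXCOORD0) : COLOR
{
    float2 f = frac(uv);               // fractional part, i.e. position within the current tile
    return float4(f.x, f.y, 0.0, 1.0); // u -> red channel, v -> green channel
}
```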

shadow_map
  • Have you verified that the floating-point precision of the texcoords is actually the issue? I'm not sure that it is, but allow me a guess: when sampling a texture, the texture coordinates are converted to _fixed point_, typically with 8 bits of fractional precision. That means there are only 256 _discrete_, equidistant locations between two texels where you can sample. Or, to put it another way: you will get banding artifacts if you magnify a texture by more than a factor of 256. – derhass Jun 29 '15 at 18:38
  • I had a similar problem a few days ago. I stopped the banding by making the texture bigger (but yeah, that's a very stupid solution). Any advice on how to avoid the problem @derhass? – Jerem Jun 30 '15 at 07:24
  • The problem is not texture sampling. I tried just outputting the uv coordinates to the returned pixel shader COLOR, without any texture sampling, and still got the same problem. – shadow_map Jun 30 '15 at 07:55
  • I don't magnify the texture. Since the texture coordinates range from 0 to 5000, like the vertex positions, the texture tiles 5000 times, meaning that from my close-up camera the texture is pretty well aligned with the viewport size. – shadow_map Jun 30 '15 at 08:00
  • The visual problem I am experiencing is not really banding. It's more like the texture coordinates are lagging with respect to camera motion. For example, if I pan the camera 5 cm over 100 frames, the texture coordinate (or sampled texture color) of each pixel does not appear to change every frame, but only about every 20th frame, i.e. every 1 cm of camera movement. (These numbers are just examples to illustrate the idea, not actual measurements.) – shadow_map Jun 30 '15 at 08:09
  • @shadow_map You should probably edit your post to explicitly say that your texture tiles 5000 times (I understood it wrong, as did derhass I think). – Jerem Jun 30 '15 at 08:31
  • It's already in the post. I can try to make it clearer though. Sorry. – shadow_map Jun 30 '15 at 08:43

1 Answer


You're right, it looks like a precision problem. If your card supports it, you can indeed use double-precision floats for interpolation. Just declare the variables as dvec2 and it should work.

The shader interpolation does not assume there are 4 separate 8-bit components. On recent cards, each scalar (i.e. each component of a vec) is interpolated separately as a float (or a double). Older cards, which could only interpolate vec4s, also worked with full floats (but those probably don't support doubles).

Jerem
  • Ok. Is that GLSL syntax? I'm using Cg. Changing from float2 to double2 compiles but causes the same problems. After some testing I found out that Cg might treat double as float, confirmed here: "Cg allows profiles to omit run-time support for int and other integer types. Cg allows profiles to treat double as float." [link](http://http.developer.nvidia.com/Cg/Cg_language.html). I might be on too low a profile (ps_3_0), but it seems there are no overloads taking double anywhere in the Cg API. Anyway, I tried tessellating the mesh and that works for now. Thanks for the response. – shadow_map Jul 01 '15 at 08:54
  • I'm a bit confused about your last paragraph. Just to make it clear: could I solve the interpolation precision problem if I'm on an older card/profile that does not support dvec2? – shadow_map Jul 01 '15 at 09:00
  • What I meant is just that all cards (old and new) use 32-bit (or 24-bit) floats for interpolation (and never less). But apparently a 32-bit float is not enough in your case, so tessellating is the only solution. – Jerem Jul 01 '15 at 13:08