
I tried various ways of getting the interpolation to work and generate a perspective-correct image, but none of the suggested approaches works.

My current code is:

struct VertexStruct
{
  float4 normalizedPosition [[ position ]];
  float4 texCoord;
};

vertex VertexStruct testVertex(device float4 *vertices [[ buffer(0) ]],
                               uint vid [[ vertex_id ]])
{
  VertexStruct outVertices;

  outVertices.normalizedPosition = ...;

  // Pack w = 127.0 so the fragment shader can divide it out,
  // mirroring the OpenGL ES perspective trick described below.
  outVertices.texCoord = float4(vertices[vid].x, vertices[vid].y, 0.0, 127.0);

  return outVertices;
}

fragment half4 testFragment(VertexStruct inFrag [[ stage_in ]],
                            texture2d<half, access::sample> volume [[ texture(0) ]])
{
  // Manual perspective divide: maps the 0..127 coordinates into 0..1.
  float2 texCoord = float2(inFrag.texCoord.xy * (1.0 / inFrag.texCoord.w));
  constexpr sampler s(coord::normalized, filter::linear, address::clamp_to_zero);
  return volume.sample(s, texCoord);
}

The texture is 128x128 and the vertices are:

  • 0,0
  • 0,127
  • 127,0
  • 127,127

In theory, the 4th parameter, w, should enable perspective-correct interpolation: with w = 127.0, the divide maps the 0..127 coordinates into the normalized 0..1 range. This trick has been reported to work in OpenGL ES, but in my case nothing changes whether I use it or not.

What I get is:

[Image: rendered output]

Blue is the clear color, while everything inside the black trapezoid is the content of the texture. The red trapezoid should be centered in the black trapezoid. If you open the image in Photoshop and trace the diagonal of the black trapezoid, it passes exactly through the diagonal of the red trapezoid:

[Image: rendered output with the diagonals traced]

This means that the two rendered triangles don't sample the texture with perspective correction. It is the classic texture-mapping problem, but I can't get my head around it.

Can anyone see the error?

aledalgrande

1 Answer


Perspective-correct texture mapping has been the norm for well over a decade and is the default in every modern graphics API, including Metal. You don't have to do anything to get perspective-correct results, provided your setup is otherwise sound.

In fact, Metal lets you specify both interpolation and sampling directly in MSL. By default, every member uses the center_perspective qualifier (pixel center, perspective-correct interpolation), with the obvious exception of the position, which is center_no_perspective.

Since the data coming from the rasterizer into your fragment function is stage_in, you just add the desired sampling/interpolation qualifier after the member declaration, like any other attribute, directly in your MSL code. An explicit, verbose example:

struct Fragment {

    float4 position             [[ center_no_perspective ]];    // Default, can be omitted.
    float4 specialTreatment     [[ centroid_perspective ]];     // Let's say you need centroid + perspective correct.
    float2 textureCoordinates   [[ center_perspective ]];       // Default, can be omitted.

};

As for your other options, they are the center/centroid/sample and perspective/no_perspective combinations, plus flat (no interpolation, required for integer members). As a list:

  • [[ flat ]] No interpolation, integer members.
  • [[ center_perspective ]] Center, perspective-correct (default)
  • [[ center_no_perspective ]] Center, linear (default for position)
  • [[ centroid_perspective ]] Centroid, perspective-correct
  • [[ centroid_no_perspective ]] Centroid, linear
  • [[ sample_perspective ]] Sample, perspective-correct*
  • [[ sample_no_perspective ]] Sample, linear*

Warning: the sample_* qualifiers make the fragment function execute per sample rather than per fragment. Going subpixel is a potential performance red flag, so use them only when necessary.
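
For instance, here is a minimal sketch (the member names are made up) mixing [[ flat ]] with the defaults; the integer member is passed through without any interpolation:

struct Fragment {

    float4 position    [[ position ]];   // Implicitly center_no_perspective.
    float2 texCoord;                     // center_perspective by default.
    uint   materialID  [[ flat ]];       // Integer members must be flat.

};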

Elim Garak
  • Hey @Elim Garak, I will mark your answer as correct, because as far as texture mapping goes it is indeed correct. I found out that if I projected my vertices onto a plane in the vertex shader and then tried to sample the texture in the fragment shader (which is called _perspective-correct interpolation_), I would need an ad-hoc homography to read the texture properly. If I instead calculated the NDC coordinates for the points and let the GPU do the work, it all went well. – aledalgrande Oct 16 '15 at 17:38
  • Yes, in that case you'd need custom work. But it is always best to utilize the underlying pipeline as much as possible, if possible. Good luck! – Elim Garak Oct 16 '15 at 17:58
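
For reference, a minimal sketch of the fix described in the first comment above. The modelViewProjection matrix in buffer(1) is hypothetical; the point is to output a true clip-space position and plain normalized texture coordinates, letting the rasterizer perform the w-divide and perspective-correct interpolation:

#include <metal_stdlib>
using namespace metal;

struct VertexStruct
{
  float4 normalizedPosition [[ position ]];
  float2 texCoord;  // center_perspective by default: no manual w trick needed.
};

vertex VertexStruct testVertex(device float4 *vertices [[ buffer(0) ]],
                               constant float4x4 &modelViewProjection [[ buffer(1) ]],  // Hypothetical input.
                               uint vid [[ vertex_id ]])
{
  VertexStruct outVertices;

  // Output clip-space coordinates; the GPU divides by w and interpolates
  // texCoord perspective-correctly on its own.
  outVertices.normalizedPosition = modelViewProjection * float4(vertices[vid].xyz, 1.0);

  // Plain normalized coordinates for the 128x128 texture.
  outVertices.texCoord = vertices[vid].xy / 127.0;

  return outVertices;
}

fragment half4 testFragment(VertexStruct inFrag [[ stage_in ]],
                            texture2d<half, access::sample> volume [[ texture(0) ]])
{
  constexpr sampler s(coord::normalized, filter::linear, address::clamp_to_zero);
  return volume.sample(s, inFrag.texCoord);  // No manual divide: already perspective-correct.
}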