
My code uses a geometry shader to produce thick lines, based on this approach: https://forum.libcinder.org/topic/smooth-thick-lines-using-geometry-shader

I got it to work on my local machine, which uses an Intel HD Graphics card. However, with the same settings on my target machine, the lines are drawn with weird gaps.

I don't understand why, because it works on several different Intel HD devices. Note that my target is an NVS 300, which is fairly old but, as far as I can tell, supports feature level 10_1 and geometry shaders. The Intel devices I tried might be a bit newer.

Since I force the feature level to 10_1 at device creation on my development machine, I expected no difference.

I don't see any error codes in the output that could hint at arbitrary behavior and explain it, even with native code debugging or remote debugging set up.

Does anyone have a clue why this behaves differently?

I could add images, but essentially you would see a thick sine curve on my local machine, and a thick but fragmented curve, broken up by gaps, on the target.

Thanks in advance for any clues.

[Screenshots: the thick sine curve rendered correctly on the Intel machine; the same curve drawn with gaps on the NVS 300]

cbuffer constBuffer
{
    float THICKNESS;
    float2 WIN_SCALE;
};
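// Note: with default HLSL packing, THICKNESS sits at byte offset 0 and
// WIN_SCALE at offset 4 of the same 16-byte register; the CPU-side
// constant buffer layout has to match this.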

struct PSInput
{
    float4 Position : SV_POSITION;
};

float2 toScreenSpace(float4 vertex)
{
    //float2 WIN_SCALE = { 100.0f, 100.0f };
    return vertex.xy * WIN_SCALE;
}

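// Note: a lineadj geometry shader requires an adjacency primitive topology
// (LineListWithAdjacency / LineStripWithAdjacency) on the draw call;
// the topology must match the declared input primitive.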
[maxvertexcount(7)]
void main(lineadj float4 vertices[4] : SV_POSITION, inout TriangleStream<PSInput> triStream)
{
    //float2 WIN_SCALE = { 100.0f, 100.0f };

    float2 p0 = toScreenSpace(vertices[0]); // start of previous segment
    float2 p1 = toScreenSpace(vertices[1]); // end of previous segment, start of current segment
    float2 p2 = toScreenSpace(vertices[2]); // end of current segment, start of next segment
    float2 p3 = toScreenSpace(vertices[3]); // end of next segment


    // perform naive culling
    float2 area = WIN_SCALE * 1.2;
    if (p1.x < -area.x || p1.x > area.x)
        return;
    if (p1.y < -area.y || p1.y > area.y)
        return;
    if (p2.x < -area.x || p2.x > area.x)
        return;
    if (p2.y < -area.y || p2.y > area.y)
        return;
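
    // Note: this drops the whole segment if either endpoint leaves the
    // enlarged window, so a segment crossing the view with both endpoints
    // outside is dropped as well.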

    float2 v0 = normalize(p1 - p0);
    float2 v1 = normalize(p2 - p1);
    float2 v2 = normalize(p3 - p2);
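
    // Caution: if two consecutive points coincide, the difference is zero
    // and normalize() divides by zero, producing NaN; hardware is free to
    // treat NaN vertex positions differently during rasterization.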

    // determine the normal of each of the 3 segments (previous, current, next)
    float2 n0 = { -v0.y, v0.x};
    float2 n1 = { -v1.y, v1.x};
    float2 n2 = { -v2.y, v2.x};

    // determine miter lines by averaging the normals of the 2 segments
    float2 miter_a = normalize(n0 + n1); // miter at start of current segment
    float2 miter_b = normalize(n1 + n2); // miter at end of current segment

    // determine the length of the miter by projecting it onto normal and then inverse it
    //float THICKNESS = 10;
    float length_a = THICKNESS / dot(miter_a, n1);
    float length_b = THICKNESS / dot(miter_b, n1);
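
    // Note: dot(miter, n1) approaches zero when the segments nearly reverse
    // direction, so these lengths can grow without bound; the MITER_LIMIT
    // test below is meant to catch that case.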

    float MITER_LIMIT = -1;
    //float MITER_LIMIT = 1;

    PSInput v;
    float2 temp;

    // prevent excessively long miters at sharp corners
    if (dot(v0, v1) < -MITER_LIMIT)
    {
        miter_a = n1;
        length_a = THICKNESS;

        // close the gap
        if (dot(v0, n1) > 0)
        {
            temp = (p1 + THICKNESS * n0) / WIN_SCALE;
            v.Position = float4(temp, 0, 1.0);
            triStream.Append(v);

            temp = (p1 + THICKNESS * n1) / WIN_SCALE;
            v.Position = float4(temp, 0, 1.0);
            triStream.Append(v);

            v.Position = float4(p1 / WIN_SCALE, 0, 1.0);
            triStream.Append(v);

            triStream.RestartStrip();

        }
        else
        {
            temp = (p1 - THICKNESS * n1) / WIN_SCALE;
            v.Position = float4(temp, 0, 1.0);
            triStream.Append(v);

            temp = (p1 - THICKNESS * n0) / WIN_SCALE;
            v.Position = float4(temp, 0, 1.0);
            triStream.Append(v);

            v.Position = float4(p1 / WIN_SCALE, 0, 1.0);
            triStream.Append(v);

            triStream.RestartStrip();
        }
    }

    if (dot(v1, v2) < -MITER_LIMIT)
    {
        miter_b = n1;
        length_b = THICKNESS;
    }

    // generate the triangle strip
    temp = (p1 + length_a * miter_a) / WIN_SCALE;
    v.Position = float4(temp, 0, 1.0);
    triStream.Append(v);

    temp = (p1 - length_a * miter_a) / WIN_SCALE;
    v.Position = float4(temp, 0, 1.0);
    triStream.Append(v);

    temp = (p2 + length_b * miter_b) / WIN_SCALE;
    v.Position = float4(temp, 0, 1.0);
    triStream.Append(v);

    temp = (p2 - length_b * miter_b) / WIN_SCALE;
    v.Position = float4(temp, 0, 1.0);
    triStream.Append(v);

    triStream.RestartStrip();

}
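One way to test my tiny-triangle suspicion, sketched below: normalize() on a zero-length vector divides by zero and yields NaN, and drivers may rasterize primitives with NaN positions differently. This helper is an untested idea of mine, not part of the shader above; safeDir and the epsilon threshold are my own naming and choice:

float2 safeDir(float2 a, float2 b)
{
    float2 d = b - a;
    float len = length(d);
    // 1e-6 is an arbitrary threshold; tune it to the coordinate scale
    return (len > 1e-6f) ? (d / len) : float2(1.0f, 0.0f);
}

// Usage inside main(), replacing the plain normalize() calls:
//     float2 v0 = safeDir(p0, p1);
//     float2 v1 = safeDir(p1, p2);
//     float2 v2 = safeDir(p2, p3);

If the gaps disappear with this guard, duplicate or near-duplicate points in the data set would be the likely culprit.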
Jonas Bräuer
  • Care to add a screenshot? – Mario Dec 13 '17 at 13:57
  • Added screenshots – Jonas Bräuer Dec 13 '17 at 14:04
  • Looks like z-fighting, and as a precision issue that would be a reasonable explanation for two machines behaving differently. Did you try disabling all z checks for rendering? – Gnietschow Dec 13 '17 at 15:00
  • I set the CullMode to CullMode.None to have all triangles rendered explicitly. I am not sure how, and if, I can disable other z checks. – Jonas Bräuer Dec 13 '17 at 15:11
  • Tried it with different RasterizerStateDescription settings and the depth bias values there. Nothing changed. – Jonas Bräuer Dec 14 '17 at 11:37
  • If you use the same rasterizer state for all drawings, a change to the depth bias would apply to all of them equally, changing nothing. You can use the DepthStencilState (https://msdn.microsoft.com/de-de/library/windows/desktop/ff476110(v=vs.85).aspx) to disable z testing while drawing. – Gnietschow Dec 17 '17 at 16:40
  • Tried it with DepthEnabled = false and StencilEnabled = false. No effect. – Jonas Bräuer Dec 18 '17 at 12:14
  • Did you port the shader code to HLSL (since you mention DirectX)? If yes, can you post it here? Or are you using ANGLE for translation? – mrvux Dec 21 '17 at 18:12
  • Also, if you run with the debug device, do you have any warning or error messages in your output window? – mrvux Dec 21 '17 at 18:14
  • Yes, I ported it. I have not found the error so far. It runs without errors on my Intel, even with the debug device enabled, but not on my NVIDIA. No errors during remote debugging either. I gave up on this approach and found another one that is already HLSL and that I could tweak a bit for my needs. If anyone has a clue I would still try to solve this, because it produces fewer primitives and could thus improve my performance. So far no answer though. – Jonas Bräuer Dec 22 '17 at 07:19
  • I still suspect some kind of imprecision with very tiny triangles, or the rasterization rule for them, because by its nature this algorithm produces very tiny triangles if the data points are very close. Also, they may not come exactly one after another. My data set has a lot of points that are very close together. – Jonas Bräuer Dec 22 '17 at 07:24
  • Added the code. It is work-in-progress code, so forgive bad naming or commented-out things. – Jonas Bräuer Dec 22 '17 at 07:30
  • I've had a similar problem; it turned out the end vertices weren't perfectly aligned with the start of the next segment. Perhaps the higher frame rate on the Nvidia card could make it visible. – Stefan Agartsson Dec 27 '17 at 09:04

0 Answers