
I'm writing a practice D3D11 rendering system to load and render FBX files, but I have a problem transforming vertices in the vertex shader.

I can't figure out what is wrong. In the Visual Studio Graphics Debugger I can see that the mesh passed to the pipeline looks correct at the Input Assembler stage, but after the vertex shader transformations everything breaks and the render goes wrong. If someone can tell me what's wrong, I'd appreciate the info.

View of the Input Assembler Stage

View of the Vertex Shader Output

This is the vertex shader code:

cbuffer MatrixBuffer
{
    matrix worldMatrix;
    matrix viewMatrix;
    matrix projectionMatrix;
};

struct VertexInputType
{
    float4 position : POSITION;
    float3 normal : NORMAL;
    float2 uv : TEXCOORD;
};

struct PixelInputType
{
    float4 position : SV_POSITION;
    float3 normal : NORMAL;
    float2 uv : TEXCOORD;
};

PixelInputType TextureVertexShader(VertexInputType input)
{
    PixelInputType output;

    output.position = mul(input.position, worldMatrix);
    output.position = mul(output.position, viewMatrix);
    output.position = mul(output.position, projectionMatrix);

    output.normal = input.normal;
    output.uv = input.uv;

    return output;
}

And this is the matrix initialization code:

float lFieldOfView = 3.141592f * 0.4f;
float lScreenAspect = static_cast<float>(width_) / static_cast<float>(height_);

DirectX::XMMATRIX lProjectionMatrix = DirectX::XMMatrixPerspectiveFovLH(lFieldOfView, lScreenAspect, 1.0f, 10000.0f);
DirectX::XMStoreFloat4x4(&lMatrixBuffer.projectionMatrix, lProjectionMatrix);

DirectX::XMMATRIX lWorldMatrix = DirectX::XMMatrixIdentity();
DirectX::XMStoreFloat4x4(&lMatrixBuffer.worldMatrix, lWorldMatrix);

DirectX::XMFLOAT3 lookAtPos(0.0f, 0.0f, 0.0f);
DirectX::XMFLOAT3 eyePos(0.0f, 0.0f, -50.0f);
DirectX::XMFLOAT3 upDir(0.0f, 1.0f, 0.0f);
DirectX::FXMVECTOR lLookAtPos = DirectX::XMLoadFloat3(&lookAtPos);
DirectX::FXMVECTOR lEyePos    = DirectX::XMLoadFloat3(&eyePos);
DirectX::FXMVECTOR lUpDir     = DirectX::XMLoadFloat3(&upDir);

DirectX::XMMATRIX lViewMatrix = DirectX::XMMatrixLookAtLH(lEyePos, lLookAtPos, lUpDir);
DirectX::XMStoreFloat4x4(&lMatrixBuffer.viewMatrix, lViewMatrix);

D3D11_BUFFER_DESC lBufferDesc = { 0 };
lBufferDesc.ByteWidth      = sizeof(MatrixBufferType);
lBufferDesc.Usage          = D3D11_USAGE_DYNAMIC;
lBufferDesc.BindFlags      = D3D11_BIND_CONSTANT_BUFFER;
lBufferDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;

D3D11_SUBRESOURCE_DATA lMatrixBufferData;
lMatrixBufferData.pSysMem = &lMatrixBuffer;

hResult = D3DDevice_->CreateBuffer(&lBufferDesc, &lMatrixBufferData, &D3DMatrixBuffer_);
  • Typically you have the vertex input position be a `float3` instead of a `float4`. You can use `float4` if you always set the w component to 1.0 in the vertex buffer, but you'd probably be better off using `float3`. – Chuck Walbourn Nov 28 '16 at 04:24
  • Thanks for the help. Problem solved. The problem was the multiplication order. I put the vector first and then the matrix in the mul operator, and the correct order is the reverse: first the matrix, then the vector. Thanks!! – Juan Vicente Fernández Rodríguez Nov 28 '16 at 06:40

1 Answer


From the comment it looks like the issue is actually a matrix row-major/column-major mismatch: DirectXMath builds row-major matrices, while HLSL defaults to column-major packing for constant buffers. The original HLSL code (with mul(vector, matrix)) would work if the matrices were transposed on the CPU side, e.g. with XMMatrixTranspose, before being written to the constant buffer.

moradin