4

What is the purpose of semantics?

If I had a vertex layout like this:

struct VS_Input
{
    float4 position : COLOR;
    float4 color : POSITION;
};

Would it actually matter that I reversed the semantics on the two members?

If I have to send Direct3D a struct per vertex, why couldn't it just copy my data as is?

If I provide Direct3D with a vertex whose layout doesn't match that of the shader, what will happen? For example, what if I pass the following vertex into the above shader?

struct MyVertex
{
    Vec4 pos;
    Vec2 tex;
    Vec4 col;
};

The D3D documentation says that a warning will be produced and that my data will be "reinterpreted".

Does that mean "reinterpreted" as in reinterpret_cast<>? That is, will my shader try to use the texture coordinates and half of the color as the color? Or will it search my vertex layout for the element that matches each semantic and shuffle the input into the right places to make the shader work?

And if the above is not true, then why does D3D require an explicit vertex layout?

CuriousGeorge

1 Answer

4

Semantics are used to bind your vertex buffers to your shader inputs. In D3D11 you have buffers, which are just chunks of memory to store data in; shaders, which have an input signature describing the inputs they expect; and input layouts, which represent the binding between buffers and shaders and describe how the data in your buffers is to be interpreted. The role of the semantic is just to match elements in the buffer layout description with the corresponding shader inputs; the names are not really important as long as they match up.
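
As a concrete illustration (a minimal sketch, not from the original answer; it assumes the buffer stores the COLOR-tagged float4 first, tightly packed), an input layout matching the question's VS_Input might look like this:

#include <d3d11.h>

// Hypothetical input layout for the question's VS_Input struct.
// Only the semantic names ("COLOR", "POSITION") tie buffer elements
// to shader inputs; the HLSL member names ("position", "color")
// never appear on the C++ side.
D3D11_INPUT_ELEMENT_DESC layout[] =
{
    // Bytes 0-15 of each vertex feed the shader input tagged : COLOR
    // (which the question's struct happens to name "position").
    { "COLOR",    0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0,  0,
      D3D11_INPUT_PER_VERTEX_DATA, 0 },
    // Bytes 16-31 feed the input tagged : POSITION.
    { "POSITION", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 16,
      D3D11_INPUT_PER_VERTEX_DATA, 0 },
};

// device->CreateInputLayout(layout, 2, vsBytecode, vsBytecodeSize, &inputLayout);

Reversing the two semantics in the shader, as the question asks, only changes which buffer element lands in which member; nothing breaks as long as each semantic in the shader finds a matching string here.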

It's up to you to correctly specify the layout of your vertex data when you create an input layout object. If your input layout doesn't match the actual in-memory layout of your data, the effect will be like a reinterpret_cast and you'll render garbage. Provided your semantics match up correctly between your input elements and your shader inputs, however, everything will be correctly bound, and things like the order of elements don't matter. It's the semantics that describe how data elements from the vertex buffer are to be passed to the inputs of a shader.
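
To make that concrete (again a sketch; the DXGI formats and the tight 32-bit float packing of MyVertex are assumptions), a layout describing the question's MyVertex could drive the VS_Input shader correctly, because matching is done by semantic, not by position or member name:

#include <d3d11.h>

// Hypothetical layout describing MyVertex in memory:
// pos at offset 0 (16 bytes), tex at 16 (8 bytes), col at 24 (16 bytes).
// Element order in this array is irrelevant; only the semantic names
// and byte offsets matter.
D3D11_INPUT_ELEMENT_DESC myVertexLayout[] =
{
    { "POSITION", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0,  0,
      D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,       0, 16,
      D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "COLOR",    0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 24,
      D3D11_INPUT_PER_VERTEX_DATA, 0 },
};

With this layout, pos feeds the shader member declared : POSITION, col feeds the member declared : COLOR, and the TEXCOORD element is simply unused because no shader input asks for it. Extra elements the shader doesn't consume are fine; a shader input with no matching semantic in the layout, on the other hand, causes CreateInputLayout to fail.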

mattnewport
  • Thanks. I was assuming the vertex layout and the shader struct had to correspond one-to-one. Also, having to provide a name for the variables AND a semantic seemed kind of redundant, but when I think about it, I suppose using "pos" rather than "POSITION" throughout my shader code is a bit more comfortable. – CuriousGeorge May 27 '13 at 19:13
  • 2
    The reason for decoupling vertex layout and shader input layout is that you may want to use the same vertex data with different shaders that require different inputs. A common example is a depth pre-pass or shadow rendering pass where you only need access to position in the shader. By storing position in its own buffer you avoid wasted memory bandwidth fetching other vertex components in your depth only shader. When you later render the object in your lighting / shading pass you likely need other vertex components like texture coordinates, normal, etc. – mattnewport May 27 '13 at 20:45
  • @albundy You may find it instructive to compile a simple shader to assembly and have a look at the generated ASM file (by calling fxc.exe with the /Fc switch). When you do this, the compiler will write a long comment section with interesting information on how it linked up the various buffers. Then you can start playing with the inputs and format specifications and how they interact. – Justin R. Jun 11 '13 at 19:03
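
To illustrate the split-buffer arrangement from the comment above (a hedged sketch; the slot assignment, formats, and the positionBuffer name are assumptions): positions get their own vertex buffer in input slot 0 while the remaining attributes share slot 1, so a depth-only pass can bind slot 0 alone against a position-only shader signature.

#include <d3d11.h>

// Hypothetical two-slot layout: positions alone in slot 0,
// texture coordinates and normals interleaved in slot 1.
D3D11_INPUT_ELEMENT_DESC splitLayout[] =
{
    { "POSITION", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 0,
      D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,       1, 0,
      D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "NORMAL",   0, DXGI_FORMAT_R32G32B32_FLOAT,    1, 8,
      D3D11_INPUT_PER_VERTEX_DATA, 0 },
};

// For the depth pre-pass, create a layout from just the POSITION
// element against the depth shader's bytecode and bind only slot 0:
// context->IASetVertexBuffers(0, 1, &positionBuffer, &posStride, &posOffset);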