
My DirectX11 C++ engine uses uint16_t (unsigned short) for the vertex index buffer, and all was working well.

I've evolved the models I use, and they have now grown to over 64k indices.

I changed all references to my index buffer from short to uint32_t, and now rendering is broken.

My variable declarations are:

ID3D11Buffer        *IndexBuffer;      //DirectX index buffer
vector<uint32_t>    primitiveIndices;  //Vector of indices, formerly uint16_t

I finally changed the line

Context->IASetIndexBuffer(IndexBuffer, DXGI_FORMAT_R16_UINT, 0); 

to

Context->IASetIndexBuffer(IndexBuffer, DXGI_FORMAT_R8G8B8A8_UINT, 0);

This was done to allow 32-bit indices. However, it fails to render. I have also updated the

D3D11_BUFFER_DESC::ByteWidth

accordingly.
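For reference, the buffer creation now looks roughly like this (trimmed down for the question; Device is my ID3D11Device pointer):

D3D11_BUFFER_DESC ibDesc = {};
ibDesc.Usage     = D3D11_USAGE_DEFAULT;
ibDesc.ByteWidth = static_cast<UINT>(primitiveIndices.size() * sizeof(uint32_t)); //was sizeof(uint16_t)
ibDesc.BindFlags = D3D11_BIND_INDEX_BUFFER;

D3D11_SUBRESOURCE_DATA ibData = {};
ibData.pSysMem = primitiveIndices.data();

Device->CreateBuffer(&ibDesc, &ibData, &IndexBuffer);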

Any advice welcome.

Mark

1 Answer


What exactly do you think is the meaning of DXGI_FORMAT_R8G8B8A8_UINT as an index buffer format? If you check the documentation, you will find there are only two valid formats that IASetIndexBuffer() will accept. If your indices are std::uint32_t, then the corresponding DXGI format to use is DXGI_FORMAT_R32_UINT. Apart from that, I highly recommend using a debug context and looking at the debug output when debugging…
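A minimal sketch of both points, assuming your existing IndexBuffer, Device and Context variables:

Context->IASetIndexBuffer(IndexBuffer, DXGI_FORMAT_R32_UINT, 0);

UINT flags = 0;
#if defined(_DEBUG)
flags |= D3D11_CREATE_DEVICE_DEBUG;   //enables the debug layer so errors show up in the debug output
#endif
D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, flags,
                  nullptr, 0, D3D11_SDK_VERSION, &Device, nullptr, &Context);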

Michael Kenzel
  • Many thanks, that worked. I incorrectly thought the R8G8B8A8 part was purely a size indication, e.g. 8+8+8+8 = 32 bits. But I was obviously wrong. – Mark Sep 24 '18 at 02:47
    The one other thing to keep in mind is that 32-bit indices require Direct3D Hardware Feature Level 9.2 or better. Not a major requirement, but something to know. See [this blog post](https://blogs.msdn.microsoft.com/chuckw/2012/06/20/direct3d-feature-levels/) and [MSDN](https://msdn.microsoft.com/library/windows/desktop/ff471324#IA_Index_Buffer) – Chuck Walbourn Sep 25 '18 at 06:54
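A runtime check along those lines might look like this (a sketch, assuming a Device pointer as in the question):

if (Device->GetFeatureLevel() < D3D_FEATURE_LEVEL_9_2)
{
    //32-bit indices unsupported on this device; fall back to 16-bit indices and DXGI_FORMAT_R16_UINT
}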