
I am loading a PNG with transparency to a texture with the following code:

ComPtr<IWICStream> stream;
ComPtr<IWICBitmapDecoder> bitmapDecoder;
ComPtr<IWICBitmapFrameDecode> bitmapFrame;
ComPtr<IWICFormatConverter> formatConverter;
unsigned int width, height;
D3D11_SUBRESOURCE_DATA resourceData;

ZeroMemory(&resourceData, sizeof(resourceData));

DX::ThrowIfFailed( m_wicFactory->CreateStream(&stream) );
DX::ThrowIfFailed( stream->InitializeFromMemory( rawFileBytes->Data, rawFileBytes->Length) );
DX::ThrowIfFailed( m_wicFactory->CreateDecoderFromStream( stream.Get(), nullptr, WICDecodeMetadataCacheOnDemand, &bitmapDecoder ) );
DX::ThrowIfFailed( bitmapDecoder->GetFrame(0, &bitmapFrame) );
DX::ThrowIfFailed( m_wicFactory->CreateFormatConverter(&formatConverter) );
DX::ThrowIfFailed( formatConverter->Initialize( bitmapFrame.Get(), GUID_WICPixelFormat32bppPBGRA, WICBitmapDitherTypeNone, nullptr, 0.0f /* alpha threshold percent */, WICBitmapPaletteTypeCustom ) );
DX::ThrowIfFailed( bitmapFrame->GetSize(&width, &height) );

std::unique_ptr<byte[]> bitmapPixels(new byte[width * height * 4]);
DX::ThrowIfFailed( formatConverter->CopyPixels( nullptr, width * 4, width * height * 4, bitmapPixels.get() ) );

resourceData.pSysMem = bitmapPixels.get();
resourceData.SysMemPitch = width * 4;
resourceData.SysMemSlicePitch = 0;

CD3D11_TEXTURE2D_DESC textureDesc( DXGI_FORMAT_B8G8R8A8_UNORM, width, height, 1, 1 );
DX::ThrowIfFailed( m_d3dDevice->CreateTexture2D( &textureDesc, &resourceData, &texture2D ) );

if ( textureView != nullptr ) { // optional out-param: only create a view if one was requested
  CD3D11_SHADER_RESOURCE_VIEW_DESC shaderResourceViewDesc( texture2D.Get(), D3D11_SRV_DIMENSION_TEXTURE2D );
  DX::ThrowIfFailed( m_d3dDevice->CreateShaderResourceView( texture2D.Get(), &shaderResourceViewDesc, &shaderResourceView ) );
}
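As an aside, the stride and buffer-size arguments passed to CopyPixels above follow directly from the 4-byte BGRA pixel layout; a minimal sketch of the arithmetic (the helper names are mine, WIC just takes the raw numbers):

```cpp
#include <cassert>
#include <cstddef>

// For a tightly packed 32bpp (BGRA) image, each pixel is 4 bytes,
// so the row pitch is width * 4 and the whole buffer is pitch * height.
constexpr size_t kBytesPerPixel32bpp = 4;

constexpr size_t RowPitch32bpp(size_t width) {
    return width * kBytesPerPixel32bpp;    // matches SysMemPitch above
}

constexpr size_t BufferSize32bpp(size_t width, size_t height) {
    return RowPitch32bpp(width) * height;  // matches the CopyPixels size argument
}
```

If the decoded format were ever not 32bpp, these constants would have to change with it.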

And to set up my blend state in a different section of code:

Microsoft::WRL::ComPtr<ID3D11BlendState1> blendState;

D3D11_BLEND_DESC1 desc;
ZeroMemory( &desc, sizeof( desc ) );
desc.IndependentBlendEnable = FALSE;
desc.AlphaToCoverageEnable = FALSE;
desc.RenderTarget[0].BlendEnable = TRUE;
desc.RenderTarget[0].LogicOpEnable = FALSE;
desc.RenderTarget[0].SrcBlend = D3D11_BLEND::D3D11_BLEND_SRC_ALPHA;
desc.RenderTarget[0].DestBlend = D3D11_BLEND::D3D11_BLEND_INV_SRC_ALPHA;
desc.RenderTarget[0].BlendOp = D3D11_BLEND_OP::D3D11_BLEND_OP_ADD;
desc.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND::D3D11_BLEND_ONE;
desc.RenderTarget[0].DestBlendAlpha = D3D11_BLEND::D3D11_BLEND_ONE;
desc.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP::D3D11_BLEND_OP_ADD;
desc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

DX::ThrowIfFailed( direct3d.device->CreateBlendState1( &desc, blendState.GetAddressOf() ) );
direct3d.context->OMSetBlendState( blendState.Get(), nullptr, 0xffffffff );
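One thing worth double-checking here: GUID_WICPixelFormat32bppPBGRA is a *premultiplied*-alpha format, and premultiplied sources are conventionally blended with SrcBlend = D3D11_BLEND_ONE rather than SRC_ALPHA. A minimal sketch of what premultiplication does per channel (the helper name and rounding convention are mine):

```cpp
#include <cassert>
#include <cstdint>

// Premultiplied alpha: each color channel is scaled by alpha when the
// image is loaded, so the blend stage can then use
// SrcBlend = ONE, DestBlend = INV_SRC_ALPHA.
// Rounding follows the common (c * a + 127) / 255 convention.
uint8_t Premultiply(uint8_t channel, uint8_t alpha) {
    return static_cast<uint8_t>((channel * alpha + 127) / 255);
}
```

If the texture really is premultiplied, blending it with SRC_ALPHA darkens the edges (alpha gets applied twice); if it is straight alpha, ONE over-brightens them, so the converter's pixel format and the blend state should agree.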

With all this setup, I still get black backgrounds where there should be transparency.

Clarification: I have a set of 48 square panels being overlaid on each other from z 0.0f to 48.0f, but all I can see is the frontmost texture at z 48.0f. Rather than being transparent, the transparent zones are rendered as black.

Edit: Here is my pixel shader:

Texture2D Texture : register(t0);
SamplerState Sampler : register(s0);

struct sPSInput
{
    float4 pos : SV_POSITION;
    float3 norm : NORMAL;
    float2 tex : TEXCOORD0;
};

float4 SimplePixelShader(sPSInput input) : SV_TARGET
{
  float4 textured = Texture.Sample(Sampler, input.tex);
  return textured;
}
– OzBarry

2 Answers


This

    desc.RenderTarget[0].SrcBlend = D3D11_BLEND::D3D11_BLEND_ONE;

should be this

    desc.RenderTarget[0].SrcBlend = D3D11_BLEND::D3D11_BLEND_SRC_ALPHA;

Also, AlphaToCoverageEnable should be set to FALSE for your needs.

Furthermore, you have to disable the Z buffer when drawing the transparent panels: with the Z buffer enabled, the depth test rejects all objects behind the frontmost object, so they are never drawn (and never blended).
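A sketch of a depth-stencil state that keeps the depth *test* but disables depth *writes*, which is the usual compromise for back-to-front transparent rendering (this is a common setup, not taken from your code; `m_d3dContext` and `DX::ThrowIfFailed` are assumed from your init code):

```cpp
// Depth test ON, depth writes OFF: transparent panels no longer occlude
// each other in the depth buffer, but still respect opaque geometry.
D3D11_DEPTH_STENCIL_DESC depthDesc = {};
depthDesc.DepthEnable    = TRUE;
depthDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ZERO;
depthDesc.DepthFunc     = D3D11_COMPARISON_LESS_EQUAL;

ComPtr<ID3D11DepthStencilState> depthState;
DX::ThrowIfFailed( m_d3dDevice->CreateDepthStencilState( &depthDesc, &depthState ) );
m_d3dContext->OMSetDepthStencilState( depthState.Get(), 0 );
```

You still need to draw the transparent panels back to front for the blending to come out right.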

– thumbmunkeys
  • I updated my above code snippet to what my blendState looks like now, but this did not have any effect on my problem :( – OzBarry Feb 14 '13 at 13:37
  • So, it turns out that I was able to see whatever panes I drew between -0.9 and 0.9, and they all got rounded to 0.0f and appear on the same z layer. As soon as I translate any model away from that range, they just disappear. – OzBarry Feb 14 '13 at 14:59
  • Yeah, I tried disabling the z buffer, to no avail - it seems somewhere in my code there is something being set that prevents me from translating the z of my models. No idea why. I can post my whole init code if you think that will help. – OzBarry Feb 15 '13 at 13:35
  • That would help me help you :) Are you sure the textures have an alpha channel? – thumbmunkeys Feb 15 '13 at 15:07
  • Here is my [init code](https://gist.github.com/anonymous/4bbf109900cd744458b0). The textures definitely have an alpha channel. I did a slightly hackish and static fix for now where I specified the z coordinates directly in the model, as they are currently just flat planes - I did this because I need to demo this application in two weeks, so I just need something that works for now, and I can always go back and fix the translation issue. Here are my [relevant matrix classes](https://gist.github.com/anonymous/f9507cf69d93d52e97b6). Camera and Projection work fine, so I didn't upload them. – OzBarry Feb 15 '13 at 15:46
  • With the hard-coded z in the models, the alpha works flawlessly. – OzBarry Feb 15 '13 at 15:47
  • so your rectangles are probably getting clipped, because they are outside of the viewing frustum? – thumbmunkeys Feb 15 '13 at 17:00
  • I position my camera at -1000.0f, my znear/far is 0.1f to 2000.0f, my objects are at z 0.0f. I can translate the x and y without any issues. I have no idea why they would be getting clipped if they are. – OzBarry Feb 15 '13 at 17:50
  • -1000 is quite far away, unless you have huge objects. But I think your problem is not some depth/blend state, but the view-projection matrix. Do you invert it before you render the objects? Posting your vertex shader code would also help. – thumbmunkeys Feb 15 '13 at 18:53
  • Yes, you can see the camera being inverted in the [relevant matrix classes](https://gist.github.com/anonymous/f9507cf69d93d52e97b6#file-universe-cpp-L34) I linked to before, as well as the [vertex shader](https://gist.github.com/anonymous/f9507cf69d93d52e97b6#file-vertexshader-hlsl). I just [uploaded my camera matrix class here](https://gist.github.com/anonymous/849dd50ba6dc82fc0bc9). – OzBarry Feb 15 '13 at 18:58
  • I am really not sure about this, but it's worth a try: multiply the matrices in your vertex shader in the reverse order. I think that is necessary because you transpose the matrices before multiplication: (A*B)^T = B^T * A^T. – thumbmunkeys Feb 15 '13 at 20:08
  • When I reverse that, I can't see anything that's being rendered. – OzBarry Feb 15 '13 at 21:45
  • hmm, then I am out of ideas. But I am pretty certain that the order of your multiplications is wrong – thumbmunkeys Feb 15 '13 at 22:36

You need to normalize your Z-values to the range 0.0f (near) to 1.0f (far).

Are you setting up your ViewPort properly?

D3D11_VIEWPORT Viewport =
{
    0.0f,                           // TopLeftX
    0.0f,                           // TopLeftY
    <width>,                        // Width
    <height>,                       // Height
    0.0f,                           // MinDepth
    1.0f };                         // MaxDepth
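The MinDepth/MaxDepth fields linearly remap NDC z (which is 0..1 in Direct3D after the perspective divide) into depth-buffer values; a small sketch of that mapping (the function name is mine):

```cpp
#include <cassert>

// Direct3D maps NDC z in [0, 1] to [MinDepth, MaxDepth] from the viewport.
// With the usual MinDepth = 0, MaxDepth = 1 this is the identity, so any
// z outside [0, 1] after projection falls outside the depth range and clips.
float ViewportDepth(float ndcZ, float minDepth, float maxDepth) {
    return minDepth + ndcZ * (maxDepth - minDepth);
}
```

That is consistent with the symptom above: geometry whose projected z lands outside 0..1 simply disappears.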
– d7samurai