
The problem

I’m having trouble creating a Direct2D image source from a DXGI surface. I’m creating a Windows 10 UWP app in Visual Studio 2017, with the graphics code written in C++ using Direct2D/Direct3D.

Basically, I get an array of luminance values from a YUV image that, for now, I want to draw as a black-and-white image. The values arrive once per frame in a stream, so I want to use a swap chain to render the data efficiently. With help from a swap-chain example, I have managed to set up a swap chain that I could draw to when I tested with other data.

The code to create the image source is something like this:

void D2DPanel::DrawBackground(int32 dataPtr, int width, int height)
{
    D3D11_TEXTURE2D_DESC _d3d_texture_desc;
    _d3d_texture_desc.Width = width;
    _d3d_texture_desc.Height = height;
    _d3d_texture_desc.MipLevels = 1;
    _d3d_texture_desc.ArraySize = 1;
    _d3d_texture_desc.Format = DXGI_FORMAT::DXGI_FORMAT_R8_UNORM;    
    _d3d_texture_desc.Usage = D3D11_USAGE::D3D11_USAGE_DYNAMIC;
    _d3d_texture_desc.SampleDesc.Count = 1;
    _d3d_texture_desc.SampleDesc.Quality = 0;
    _d3d_texture_desc.BindFlags = D3D11_BIND_FLAG::D3D11_BIND_SHADER_RESOURCE;
    _d3d_texture_desc.CPUAccessFlags = D3D11_CPU_ACCESS_FLAG::D3D11_CPU_ACCESS_WRITE;
    _d3d_texture_desc.MiscFlags = 0;

    D3D11_SUBRESOURCE_DATA _d3d_texture_data;
    _d3d_texture_data.pSysMem = (void*)IntPtr(dataPtr);
    _d3d_texture_data.SysMemPitch = width;   // row stride in bytes (1 byte per pixel for R8_UNORM)
    _d3d_texture_data.SysMemSlicePitch = 0;  // unused for 2D textures

    ComPtr<ID3D11Texture2D> _d3d_texture;   
    DX::ThrowIfFailed(m_d3dDevice->CreateTexture2D(&_d3d_texture_desc, &_d3d_texture_data, &_d3d_texture)); 
    ComPtr<IDXGISurface> _dxgi_surface;
    DX::ThrowIfFailed(_d3d_texture.As(&_dxgi_surface));    
    IDXGISurface* surfaces[1] = { _dxgi_surface.Get() };                

    ComPtr<ID2D1ImageSource> d2d_image_source;

    //The following method call results in an exception
    //m_d2d_context2 is an ID2D1DeviceContext2
    DX::ThrowIfFailed(m_d2d_context2->CreateImageSourceFromDxgi(surfaces, 1,
        DXGI_COLOR_SPACE_TYPE::DXGI_COLOR_SPACE_RGB_FULL_G22_NONE_P709,
        D2D1_IMAGE_SOURCE_FROM_DXGI_OPTIONS::D2D1_IMAGE_SOURCE_FROM_DXGI_OPTIONS_LOW_QUALITY_PRIMARY_CONVERSION,
        &d2d_image_source));

    //….drawing, etc
}

When I call CreateImageSourceFromDxgi with the parameters above, I get an exception with the following debug-layer message:

D2D DEBUG ERROR - The combination of alpha mode and DXGI format supplied
are not compatible with one another.

I can access the alpha mode of the ID2D1DeviceContext2 by calling, for example, auto foo = m_d2d_context2->GetPixelFormat(). I have tried setting the alpha mode of the render-target bitmap created from the swap-chain buffer to D2D1_ALPHA_MODE_PREMULTIPLIED (which it was in the example code my code is based on) and to D2D1_ALPHA_MODE_IGNORE; D2D1_ALPHA_MODE_STRAIGHT resulted in an error when I tried to create the render-target bitmap. Through m_d2d_context2->GetPixelFormat() I can verify that the alpha mode really does change, but nothing has helped: I still get the same error message. When the swap chain is created, alpha is set to DXGI_ALPHA_MODE_UNSPECIFIED and the format is DXGI_FORMAT_B8G8R8A8_UNORM.
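For reference, the alpha-mode experiment described above looks roughly like this (a sketch based on the usual UWP DirectX template; `m_swapChain` and the rest of the setup are assumed to exist as in the example code my project is based on):

```cpp
// Sketch: recreate the swap-chain target bitmap with a given alpha mode.
ComPtr<IDXGISurface> backBuffer;
DX::ThrowIfFailed(m_swapChain->GetBuffer(0, IID_PPV_ARGS(&backBuffer)));

// Swapping D2D1_ALPHA_MODE_IGNORE for D2D1_ALPHA_MODE_PREMULTIPLIED here
// is the experiment described above; D2D1_ALPHA_MODE_STRAIGHT fails at
// bitmap creation for a swap-chain target.
D2D1_BITMAP_PROPERTIES1 targetProps = D2D1::BitmapProperties1(
    D2D1_BITMAP_OPTIONS_TARGET | D2D1_BITMAP_OPTIONS_CANNOT_DRAW,
    D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_IGNORE));

ComPtr<ID2D1Bitmap1> targetBitmap;
DX::ThrowIfFailed(m_d2d_context2->CreateBitmapFromDxgiSurface(
    backBuffer.Get(), &targetProps, &targetBitmap));
m_d2d_context2->SetTarget(targetBitmap.Get());
```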

Before trying to create a black-and-white image, I successfully created and rendered an image source from YUV data: I used DXGI_FORMAT_NV12 for the D3D11_TEXTURE2D and passed DXGI_COLOR_SPACE_TYPE::DXGI_COLOR_SPACE_YCBCR_FULL_G22_NONE_P709_X601 together with D2D1_IMAGE_SOURCE_FROM_DXGI_OPTIONS::D2D1_IMAGE_SOURCE_FROM_DXGI_OPTIONS_LOW_QUALITY_PRIMARY_CONVERSION when calling CreateImageSourceFromDxgi.
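That working NV12 path looks roughly like this (a sketch; `nv12Texture` stands for the DXGI_FORMAT_NV12 texture created the same way as the R8 texture in the code above):

```cpp
// Sketch of the working NV12 call, for comparison with the failing R8 one.
ComPtr<IDXGISurface> nv12Surface;
DX::ThrowIfFailed(nv12Texture.As(&nv12Surface));
IDXGISurface* surfaces[1] = { nv12Surface.Get() };

ComPtr<ID2D1ImageSource> imageSource;
DX::ThrowIfFailed(m_d2d_context2->CreateImageSourceFromDxgi(
    surfaces, 1,
    DXGI_COLOR_SPACE_TYPE::DXGI_COLOR_SPACE_YCBCR_FULL_G22_NONE_P709_X601,
    D2D1_IMAGE_SOURCE_FROM_DXGI_OPTIONS::D2D1_IMAGE_SOURCE_FROM_DXGI_OPTIONS_LOW_QUALITY_PRIMARY_CONVERSION,
    &imageSource));
```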

My questions

  • Is the problem that I am using an alpha mode that is incompatible with DXGI_FORMAT_R8_UNORM? If so, which are compatible and which are not?
  • Does anyone have any idea what else could prevent me from creating the image source?
  • Does anyone know of an example where someone is trying to render a single-channel image in Direct2D from an array of pixel data?

Edit:

I made a workaround and created a bitmap instead of an image source.

//create texture etc, see code from original question...

//access _dxgi_surface connected to texture
ComPtr<IDXGISurface> _dxgi_surface;
DX::ThrowIfFailed(_d3d_texture.As(&_dxgi_surface)); 

//create bitmap from texture with y channel values
D2D1_BITMAP_PROPERTIES1 YPlaneBitmapProp =
    BitmapProperties1(
        D2D1_BITMAP_OPTIONS_NONE,
        PixelFormat(DXGI_FORMAT_R8_UNORM, D2D1_ALPHA_MODE_IGNORE),
        96.0f, 96.0f // dpiX, dpiY; the bitmap size comes from the DXGI surface itself
    );
ComPtr<ID2D1Bitmap1> YPlaneBitmap;
DX::ThrowIfFailed(m_d2d_context2->CreateBitmapFromDxgiSurface(_dxgi_surface.Get(), &YPlaneBitmapProp, YPlaneBitmap.GetAddressOf()));    

//create empty rgb bitmap 
D2D1_BITMAP_PROPERTIES RGBBitmapProp =
    BitmapProperties(PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_IGNORE), 96.0f, 96.0f);
ComPtr<ID2D1Bitmap> RGBBitmap;
DX::ThrowIfFailed(m_d2dContext->CreateBitmap(D2D1::SizeU(width, height), RGBBitmapProp, RGBBitmap.GetAddressOf())); 

//Draw Y plane values from YPlaneBitmap on RGBBitmap using a custom effect/pixel shader
m_backgroundEffect->SetInput(0, RGBBitmap.Get());
m_backgroundEffect->SetInput(1, YPlaneBitmap.Get());    

//Scale up image to size of render target
m_scaleEffect->SetInputEffect(0, m_backgroundEffect.Get()); 
m_scaleEffect->SetValue(D2D1_SCALE_PROP_SCALE, D2D1::Vector2F(((float)m_renderTargetWidth / width), (float)m_renderTargetHeight / height));

//Draw image
m_d2dContext->DrawImage(m_scaleEffect.Get());

I still have no idea what the problem was with the code to create an image source.

Lillemor Blom
  • Have you checked this: https://msdn.microsoft.com/en-us/library/windows/desktop/dd756766(v=vs.85).aspx#specifying_a_pixel_format_for_an_id2d1bitmap – VuVirt Oct 18 '17 at 09:10
  • What is the pixel format of your swapchain? According to this - https://msdn.microsoft.com/en-us/library/windows/desktop/dd756766(v=vs.85).aspx#supported_yuv_formats_for_dxgi_image_source - DXGI_FORMAT_R8_UNORM should be supported for image sources. I'm not sure why R8 is considered an YUV format in that section. Anyway, have you tried creating a bitmap instead of image source? – Anton Angelov Oct 19 '17 at 06:50
  • @VuVirt: I did check that, but I tried to create an image source which according to https://msdn.microsoft.com/en-us/library/windows/desktop/dd756766(v=vs.85).aspx#supported_yuv_formats_for_dxgi_image_source should support DXGI_FORMAT_R8_UNORM, if I'm not misinterpreting anything. – Lillemor Blom Oct 20 '17 at 07:46
  • @AntonAngelov: the swapchain is `DXGI_FORMAT_B8G8R8A8_UNORM`. I thought it wouldn't matter at this stage, since I tried to create an image source from the dxgi surface connected to the texture2d I had created. I think `DXGI_FORMAT_R8_UNORM` is considered a YUV format since that is a format that can be used when handling the Y-channel separately (which I'm trying to do), see the description of `DXGI_FORMAT_NV12` at https://msdn.microsoft.com/en-us/library/windows/desktop/bb173059(v=vs.85).aspx. I did manage to create a bitmap instead, I will update my question with that info. Thanks! – Lillemor Blom Oct 20 '17 at 07:57
  • @LillemorBlom It's possible that the device doesn't support the required D2D conversion for CreateImageSourceFromDxgi. Have you tried calling IsDxgiFormatSupported first, as described in the Remarks here: https://msdn.microsoft.com/en-us/library/windows/desktop/dn890791(v=vs.85).aspx ? – VuVirt Oct 20 '17 at 10:28
  • @LillemorBlom Can't you simply render the luminance from your NV12 texture using it as a DXGI_FORMAT_R8_UNORM shader resource view with a pixel shader by drawing a quad, instead of converting it to D2D resource first? Like this: https://github.com/Microsoft/Windows-universal-samples/blob/master/Samples/HolographicFaceTracking/cpp/Content/NV12VideoTexture.cpp – VuVirt Oct 20 '17 at 10:35
  • @VuVirt: `IsDxgiFormatSupported` returns true for the R8_UNORM pixel format. I will check out the example, although since I am totally new to DirectX and graphics programming in general I'm not super keen on abandoning my code for something new now that I got things working. Do you think there are large benefits to using Direct3D instead of Direct2D/3D interop, for example better performance? Also, thanks! – Lillemor Blom Oct 20 '17 at 12:02
  • @LillemorBlom if you simply want to render the texture then D2D will be useless overhead IMO. IsDxgiFormatSupported may return true for R8_UNORM, but maybe only to be used with YUV and multiple surfaces. – VuVirt Oct 20 '17 at 12:03
  • @VuVirt: ok thanks, I will consider ditching Direct2D since performance is important for my app. – Lillemor Blom Oct 20 '17 at 12:24
  • If this workaround is valid for you, you may consider adding it as an answer and accepting it. As for the performance, things may often be quite counter intuitive and only measurements can give certainty. D2D is meant to be an abstraction for 2D rendering on top of D3D. Doing the same thing on top of D3D that D2D does on top of D3D isn't guaranteed to give large performance benefits. – Anton Angelov Oct 20 '17 at 14:07

0 Answers