
I'm an experienced programmer specializing in computer graphics, mainly using Direct3D 9.0c, OpenGL, and general algorithms. Currently, I am evaluating Direct2D as the rendering technology for a professional application dealing with medical image data. Rendering-wise, it is an x64 desktop application running in windowed mode (not fullscreen).

Already in my very first steps I am struggling with a task I thought would be a no-brainer: rendering a single-channel bitmap on screen.

Running on a Windows 8.1 machine, I create an ID2D1DeviceContext with a Direct3D swap chain buffer surface as the render target. The swap chain is created from an HWND with buffer format DXGI_FORMAT_B8G8R8A8_UNORM. Note: see also the code snippets at the end.

Afterwards, I create a bitmap with pixel format DXGI_FORMAT_R8_UNORM and alpha mode D2D1_ALPHA_MODE_IGNORE. When calling DrawBitmap(...) on the device context, a debug breakpoint is triggered with the debug message "D2D DEBUG ERROR - This operation is not compatible with the pixel format of the bitmap".

I know that this output is quite clear. Also, when changing the pixel format to DXGI_FORMAT_R8G8B8A8_UNORM with D2D1_ALPHA_MODE_IGNORE, everything works well and I see the bitmap rendered. However, I simply cannot believe that! Graphics cards have supported single-channel textures forever; every 3D graphics application can use them without thinking twice. That goes without saying.

I tried to find anything on this, here and on Google, without success. The only hint I could find was the MSDN Direct2D page listing the supported pixel formats. The documentation suggests, by not mentioning it, that DXGI_FORMAT_R8_UNORM is indeed not supported as a bitmap format. I also found posts talking about alpha masks (using DXGI_FORMAT_A8_UNORM), but that's not what I'm after.

What am I missing that I can't convince Direct2D to create and draw a grayscale bitmap? Or is it really true that Direct2D doesn't support drawing R8 or R16 bitmaps?

Any help is really appreciated, as I don't know how to solve this. If I can't get such trivial basics to work, I think I'd have to stop digging deeper into Direct2D :-(.

And here are the code snippets of relevance. Please note that they might not compile, since I ported them on the fly from my C++/CLI code to plain C++. I also threw away all error checking and other noise:

Device, Device Context and Swap Chain Creation (D3D and Direct2D):

// Direct2D factory creation
D2D1_FACTORY_OPTIONS options = {};
options.debugLevel = D2D1_DEBUG_LEVEL_INFORMATION;
ID2D1Factory1* d2dFactory;
D2D1CreateFactory(D2D1_FACTORY_TYPE_MULTI_THREADED, options, &d2dFactory);

// Direct3D device creation
const auto type = D3D_DRIVER_TYPE_HARDWARE;
const auto flags = D3D11_CREATE_DEVICE_BGRA_SUPPORT;
ID3D11Device* d3dDevice;
D3D11CreateDevice(nullptr, type, nullptr, flags, nullptr, 0, D3D11_SDK_VERSION, &d3dDevice, nullptr, nullptr);

// Direct2D device creation
IDXGIDevice* dxgiDevice;
d3dDevice->QueryInterface(__uuidof(IDXGIDevice), reinterpret_cast<void**>(&dxgiDevice));
ID2D1Device* d2dDevice;
d2dFactory->CreateDevice(dxgiDevice, &d2dDevice);

// Swap chain creation
DXGI_SWAP_CHAIN_DESC1 desc = {};
desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
desc.BufferCount = 2;

IDXGIAdapter* dxgiAdapter;
dxgiDevice->GetAdapter(&dxgiAdapter);
IDXGIFactory2* dxgiFactory;
dxgiAdapter->GetParent(__uuidof(IDXGIFactory2), reinterpret_cast<void **>(&dxgiFactory));

IDXGISwapChain1* swapChain;
dxgiFactory->CreateSwapChainForHwnd(d3dDevice, hwnd, &desc, nullptr, nullptr, &swapChain);

// Direct2D device context creation
const auto deviceContextOptions = D2D1_DEVICE_CONTEXT_OPTIONS_NONE; // renamed to avoid clashing with the factory options above
ID2D1DeviceContext* deviceContext;
d2dDevice->CreateDeviceContext(deviceContextOptions, &deviceContext);

// create render target bitmap from swap chain
IDXGISurface* swapChainSurface;
swapChain->GetBuffer(0, __uuidof(IDXGISurface), reinterpret_cast<void **>(&swapChainSurface));
D2D1_BITMAP_PROPERTIES1 bitmapProperties;
bitmapProperties.dpiX = 0.0f;
bitmapProperties.dpiY = 0.0f;
bitmapProperties.bitmapOptions = D2D1_BITMAP_OPTIONS_TARGET | D2D1_BITMAP_OPTIONS_CANNOT_DRAW;
bitmapProperties.pixelFormat.format = DXGI_FORMAT_B8G8R8A8_UNORM;
bitmapProperties.pixelFormat.alphaMode = D2D1_ALPHA_MODE_IGNORE;
bitmapProperties.colorContext = nullptr;
ID2D1Bitmap1* swapChainBitmap = nullptr;
deviceContext->CreateBitmapFromDxgiSurface(swapChainSurface, &bitmapProperties, &swapChainBitmap);


// set swap chain bitmap as render target of D2D device context
deviceContext->SetTarget(swapChainBitmap);

D2D single-channel Bitmap Creation:

const D2D1_SIZE_U size = { 512, 512 };
const UINT32 pitch = 512;
D2D1_BITMAP_PROPERTIES1 d2dProperties;
ZeroMemory(&d2dProperties, sizeof(D2D1_BITMAP_PROPERTIES1));
d2dProperties.pixelFormat.alphaMode = D2D1_ALPHA_MODE_IGNORE;
d2dProperties.pixelFormat.format = DXGI_FORMAT_R8_UNORM;
char* sourceData = new char[512 * 512](); // zero-initialized dummy pixel data, one byte per pixel

ID2D1Bitmap1* d2dBitmap;
deviceContext->CreateBitmap(size, sourceData, pitch, &d2dProperties, &d2dBitmap);

Bitmap drawing (FAILING):

deviceContext->BeginDraw();
D2D1_COLOR_F d2dColor = {};
deviceContext->Clear(d2dColor);

// THIS LINE FAILS WITH THE DEBUG BREAKPOINT IF SINGLE CHANNELED
deviceContext->DrawBitmap(d2dBitmap, nullptr, 1.0f, D2D1_INTERPOLATION_MODE_LINEAR, nullptr);

deviceContext->EndDraw(); // end drawing before presenting the frame
swapChain->Present(1, 0);
  • Is there really no one out there who ever had to draw grayscale bitmaps? I really don't know how to proceed with this issue and would be very thankful for any hint :)! – MisterX Jun 19 '17 at 09:26

1 Answer

From my little experience, Direct2D seems very limited, indeed.

Have you tried Direct2D effects (ID2D1Effect)? You can write your own [which seems comparatively complicated], or use one of the built-in effects [which is rather simple].

There is one called the Color matrix effect (CLSID_D2D1ColorMatrix). It might work to have your DXGI_FORMAT_R8_UNORM bitmap (or DXGI_FORMAT_A8_UNORM; any single-channel format would do) as the input (inputs to effects are ID2D1Image, and ID2D1Bitmap inherits from ID2D1Image). Then set D2D1_COLORMATRIX_PROP_COLOR_MATRIX to copy the input channel to all output channels, as in the sketch below. I have not tried it, though.
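A minimal sketch of that idea (untested; it reuses deviceContext and the R8 d2dBitmap from the question, needs <d2d1effects.h> for CLSID_D2D1ColorMatrix, and omits all error handling):

ID2D1Effect* grayscaleEffect = nullptr;
deviceContext->CreateEffect(CLSID_D2D1ColorMatrix, &grayscaleEffect);
grayscaleEffect->SetInput(0, d2dBitmap);

// Rows correspond to the input channels (R, G, B, A) plus a constant offset
// row; columns to the output channels. Copy the red input channel to the
// R, G and B outputs and force alpha to 1.
const D2D1_MATRIX_5X4_F matrix = D2D1::Matrix5x4F(
    1.0f, 1.0f, 1.0f, 0.0f,   // red input  -> R, G, B
    0.0f, 0.0f, 0.0f, 0.0f,   // green input (unused)
    0.0f, 0.0f, 0.0f, 0.0f,   // blue input (unused)
    0.0f, 0.0f, 0.0f, 0.0f,   // alpha input (unused)
    0.0f, 0.0f, 0.0f, 1.0f);  // offset row: alpha = 1
grayscaleEffect->SetValue(D2D1_COLORMATRIX_PROP_COLOR_MATRIX, matrix);

// Draw the effect output instead of the bitmap; the stored bitmap stays R8.
deviceContext->BeginDraw();
deviceContext->DrawImage(grayscaleEffect);
deviceContext->EndDraw();

The point is that the expansion to RGBA would happen only transiently during rendering; the bitmap itself remains single-channel in memory.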

Pablo H
  • Hey, thanks for the answer. I need to be very memory-efficient, so I don't want to spend additional channels, and I don't want to convert the single-channel image into an RGB(A) image with the same value across the color channels. I really need to be able to create single-channel bitmaps; otherwise Direct2D is out of the game... Or do I need to mix D3D and D2D and use the former for the bitmaps?! – MisterX Apr 26 '18 at 09:25
  • @MisterX Your link to [supported pixel formats](https://msdn.microsoft.com/en-us/library/windows/desktop/dd756766(v=vs.85).aspx) seems to show that `DXGI_FORMAT_A8_UNORM` is your only option. I have too little experience with effects, so I cannot confirm, but perhaps the output of the effect is not materialized (i.e., it is rendered directly to the render target), or only transiently so. I am not saying to keep the output; I am suggesting to chain the effect only when rendering. I expect your render target is already RGB(A). You will need to check whether this approach works for you... – Pablo H Apr 27 '18 at 18:15
  • @MisterX On the other hand, drawing a bitmap with Direct3D 11.x directly is easy once you get past the administrative trudge; see the sketch below. If you do not need Direct2D's main features (text, layers, a simple rendering model without shaders), then by all means go the D3D route. Bonus for a medical app: you can work with greater-precision images, control transfer functions easily, use compute shaders, etc. – Pablo H Apr 27 '18 at 18:22
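For illustration, a minimal sketch of that Direct3D route (an assumption-laden example, not from the original posts: it reuses d3dDevice and the 512x512 sourceData buffer from the question and omits error handling). Direct3D 11 accepts DXGI_FORMAT_R8_UNORM textures directly; a pixel shader can then swizzle the single channel to gray, e.g. float g = tex.Sample(samp, uv).r; return float4(g, g, g, 1);

D3D11_TEXTURE2D_DESC texDesc = {};
texDesc.Width = 512;
texDesc.Height = 512;
texDesc.MipLevels = 1;
texDesc.ArraySize = 1;
texDesc.Format = DXGI_FORMAT_R8_UNORM;   // single-channel, natively supported by D3D11
texDesc.SampleDesc.Count = 1;
texDesc.Usage = D3D11_USAGE_IMMUTABLE;
texDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;

D3D11_SUBRESOURCE_DATA initData = {};
initData.pSysMem = sourceData;   // one byte per pixel
initData.SysMemPitch = 512;

ID3D11Texture2D* texture = nullptr;
d3dDevice->CreateTexture2D(&texDesc, &initData, &texture);

ID3D11ShaderResourceView* srv = nullptr;
d3dDevice->CreateShaderResourceView(texture, nullptr, &srv);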