
I am trying to use the graphics card for some heavy computation (DirectCompute), and I want this enabled only on a dedicated graphics card, since an integrated graphics card is too weak for my program. I got the following info by using DXGI:

typedef struct DXGI_ADAPTER_DESC {
  WCHAR  Description[128];
  UINT   VendorId;
  UINT   DeviceId;
  UINT   SubSysId;
  UINT   Revision;
  SIZE_T DedicatedVideoMemory;
  SIZE_T DedicatedSystemMemory;
  SIZE_T SharedSystemMemory;
  LUID   AdapterLuid;
} DXGI_ADAPTER_DESC;

For NVIDIA and Intel graphics cards, I can use the VendorId to do that. But how can I tell whether an AMD graphics card is dedicated or not, since AMD produces both dGPUs and iGPUs?
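
For reference, this is a minimal sketch of how the adapter descriptions above can be enumerated, with a heuristic I am assuming here (not a documented rule): an iGPU usually reports little DedicatedVideoMemory and a large SharedSystemMemory, while a dGPU reports the opposite. The 1 GiB threshold is an assumption you would need to tune.

// Minimal sketch: enumerate hardware adapters with DXGI 1.1 and apply a
// VRAM heuristic. DedicatedVideoMemory being small for iGPUs is an
// assumption, not a guarantee (some iGPUs report a small dedicated carve-out).
#include <dxgi.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXGIFactory1> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 1;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);

        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip WARP / software adapters

        // Heuristic (assumption): >= 1 GiB of dedicated VRAM => likely discrete.
        bool probablyDiscrete = desc.DedicatedVideoMemory >= (SIZE_T)1 << 30;

        wprintf(L"%s  VendorId=0x%04X  VRAM=%zu MiB  %s\n",
                desc.Description, desc.VendorId,
                desc.DedicatedVideoMemory >> 20,
                probablyDiscrete ? L"(likely dGPU)" : L"(likely iGPU)");
    }
    return 0;
}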

  • You should rather query the DirectCompute capabilities of the device to decide how powerful it is. – Roman R. Dec 12 '19 at 13:02
  • @RomanR. Thanks. Actually I did that, but some iGPUs with DirectCompute capability 5.0 are still too weak for my program. I am looking for a method to distinguish iGPUs from dGPUs, and limiting the shader count is an alternative. But I don't know whether there is a better choice. – Iverson Sun Dec 13 '19 at 02:05
  • You could use `EnumAdapterByGpuPreference` from `IDXGIFactory6` to get the highest-performance available adapter (see the sketch after this list). It doesn't distinguish between AMD cards, but at least you'll know that the first adapter you get is the best available one. – rashmatash Dec 19 '19 at 13:57
  • @rashmatash The minimum supported client is Windows 10; considering my users (some of them use Win8 or Win7), maybe I need to find a compromise, like limiting the shader count or something else. I really appreciate your idea! – Iverson Sun Dec 23 '19 at 08:42
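
To illustrate the `EnumAdapterByGpuPreference` suggestion from the comments, here is a hedged sketch. `PickBestAdapter` is a hypothetical helper name, and the fallback path reflects the OS limitation discussed above: `IDXGIFactory6` requires Windows 10 1803+, so on Win7/Win8 the `QueryInterface` fails and plain enumeration is used instead.

// Sketch of rashmatash's suggestion: ask DXGI 1.6 for adapters ordered by
// GPU preference. On systems without IDXGIFactory6 (e.g. Win7/Win8) the
// As() call fails, so we fall back to taking the first enumerated adapter.
#include <dxgi1_6.h>
#include <wrl/client.h>
#pragma comment(lib, "dxgi.lib")

using Microsoft::WRL::ComPtr;

ComPtr<IDXGIAdapter1> PickBestAdapter() // hypothetical helper
{
    ComPtr<IDXGIAdapter1> adapter;
    ComPtr<IDXGIFactory1> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return nullptr;

    ComPtr<IDXGIFactory6> factory6;
    if (SUCCEEDED(factory.As(&factory6)))
    {
        // The first adapter returned for HIGH_PERFORMANCE is the most
        // performant one the OS knows of (usually the dGPU, if present).
        if (SUCCEEDED(factory6->EnumAdapterByGpuPreference(
                0, DXGI_GPU_PREFERENCE_HIGH_PERFORMANCE,
                IID_PPV_ARGS(&adapter))))
            return adapter;
    }

    // Fallback for pre-Windows 10 1803 systems: first hardware adapter.
    factory->EnumAdapters1(0, &adapter);
    return adapter;
}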

0 Answers