
Is it possible to split work up between multiple devices (GPUs), like you can with CUDA? How does this look in code?

It's hard to find proper documentation for DirectCompute, and the SDK doesn't include any examples of this.
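
As context for the question: with plain Direct3D 11 / DXGI you can enumerate adapters and create one device per GPU, and then partition the work yourself, much like using multiple CUDA contexts. Below is a minimal sketch of that enumeration step only (the helper name `CreateDevicePerAdapter` and the lack of error handling are for illustration; dispatching compute shaders on each device is up to you):

```cpp
#include <d3d11.h>
#include <wrl/client.h>
#include <utility>
#include <vector>

using Microsoft::WRL::ComPtr;

// Create one D3D11 device + immediate context per physical adapter (GPU).
// There is no automatic splitting of work across adapters; each device
// must be given its own resources and Dispatch() calls.
std::vector<std::pair<ComPtr<ID3D11Device>, ComPtr<ID3D11DeviceContext>>>
CreateDevicePerAdapter()
{
    std::vector<std::pair<ComPtr<ID3D11Device>, ComPtr<ID3D11DeviceContext>>> devices;

    ComPtr<IDXGIFactory1> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return devices;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        ComPtr<ID3D11Device> device;
        ComPtr<ID3D11DeviceContext> context;
        D3D_FEATURE_LEVEL featureLevel;

        // When an explicit adapter is supplied, the driver type must be UNKNOWN.
        HRESULT hr = D3D11CreateDevice(
            adapter.Get(), D3D_DRIVER_TYPE_UNKNOWN, nullptr, 0,
            nullptr, 0, D3D11_SDK_VERSION,
            &device, &featureLevel, &context);

        if (SUCCEEDED(hr))
            devices.emplace_back(device, context);
    }
    return devices;
}
```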

  • Sorry, I don't know how to do it in DirectCompute (hence not answering the question), but I do know it is possible. The C++ AMP technology that builds on DirectCompute makes it very easy, as described here: http://blogs.msdn.com/b/nativeconcurrency/archive/2012/03/07/using-multiple-accelerators-from-c-amp.aspx I also wonder why you are not using C++ AMP directly instead of lower-level HLSL? – Daniel Moth Jun 07 '12 at 01:11
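
For reference, the multi-accelerator pattern the comment's blog post describes looks roughly like this in C++ AMP (a sketch, not taken verbatim from the post; the function name `square_on_all_gpus` and the slicing scheme are illustrative):

```cpp
#include <amp.h>
#include <vector>
using namespace concurrency;

// Run the same kernel on every physical accelerator, giving each one a
// contiguous slice of the input vector.
void square_on_all_gpus(std::vector<float>& data)
{
    std::vector<accelerator> accs;
    for (auto& acc : accelerator::get_all())
        if (!acc.is_emulated)              // skip CPU/reference accelerators
            accs.push_back(acc);
    if (accs.empty()) return;

    const int chunk = static_cast<int>(data.size() / accs.size());
    std::vector<array_view<float, 1>> views;

    for (size_t i = 0; i < accs.size(); ++i)
    {
        // Last slice picks up any remainder.
        int count = (i + 1 == accs.size())
                        ? static_cast<int>(data.size()) - chunk * static_cast<int>(i)
                        : chunk;
        array_view<float, 1> slice(count, data.data() + chunk * i);
        views.push_back(slice);

        // Dispatch on that accelerator's default view; this does not block.
        parallel_for_each(accs[i].default_view, slice.extent,
                          [=](index<1> idx) restrict(amp)
                          {
                              slice[idx] = slice[idx] * slice[idx];
                          });
    }

    // synchronize() copies each slice's results back to the host vector.
    for (auto& v : views) v.synchronize();
}
```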
