
I am trying to identify options for accelerating linear algebra calculations on the GPU. More precisely, I need to accelerate an explicit dynamics solver. Since it solves a linear system in each increment, I thought the GPU might be able to speed that up; a rough sketch of what one increment looks like is given below.
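For context, the real solver is written in C# and is of course much larger, but conceptually a single increment boils down to something like the following sketch (illustrative C++/Eigen here, with a tiny placeholder tridiagonal matrix standing in for the actual system; all names and sizes are made up):

```cpp
// Illustrative only: a small SPD tridiagonal system stands in for the real
// per-increment system K * x = b. All names and sizes are placeholders.
#include <Eigen/Dense>
#include <Eigen/Sparse>
#include <iostream>
#include <vector>

int main() {
    const int n = 5;  // the real system is much larger

    // Assemble a small symmetric positive-definite tridiagonal matrix.
    std::vector<Eigen::Triplet<double>> triplets;
    for (int i = 0; i < n; ++i) {
        triplets.emplace_back(i, i, 2.0);
        if (i + 1 < n) {
            triplets.emplace_back(i, i + 1, -1.0);
            triplets.emplace_back(i + 1, i, -1.0);
        }
    }
    Eigen::SparseMatrix<double> K(n, n);
    K.setFromTriplets(triplets.begin(), triplets.end());

    Eigen::VectorXd b = Eigen::VectorXd::Ones(n);

    // Direct sparse Cholesky solve; this is the step that repeats every
    // increment and that I would like to move to the GPU.
    Eigen::SimplicialLLT<Eigen::SparseMatrix<double>> solver;
    solver.compute(K);
    if (solver.info() != Eigen::Success) {
        std::cerr << "Factorization failed\n";
        return 1;
    }
    Eigen::VectorXd x = solver.solve(b);

    std::cout << "x = " << x.transpose() << std::endl;
    return 0;
}
```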

Currently I have C# code doing this on the CPU, but I am willing to use another language (C++, Python) if needed. As I am a complete newbie in this area, I searched around and concluded that my best bets would probably be oneAPI and ROCm. The problem is that, so far, I have not figured out a way to run either oneAPI or ROCm natively on Windows with an AMD GPU, even for a minimal kernel like the sketch below. Am I missing something here?
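To make the question concrete, this is the kind of minimal oneAPI/SYCL program (a plain SYCL 2020 vector add, unrelated to the solver itself) that I would like to be able to compile and run natively on Windows against the AMD GPU; the device selection and sizes are arbitrary, the only question is whether that combination can work at all:

```cpp
// Minimal SYCL 2020 vector add, just to test whether the toolchain can
// target the GPU at all. Sizes and the kernel itself are placeholders.
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    const size_t n = 1024;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    // Let the runtime pick a device; the open question is whether an AMD GPU
    // on Windows can ever be selected here (it may fall back to the CPU).
    sycl::queue q{sycl::default_selector_v};
    std::cout << "Running on: "
              << q.get_device().get_info<sycl::info::device::name>() << "\n";

    {
        sycl::buffer<float> buf_a(a.data(), sycl::range<1>(n));
        sycl::buffer<float> buf_b(b.data(), sycl::range<1>(n));
        sycl::buffer<float> buf_c(c.data(), sycl::range<1>(n));

        q.submit([&](sycl::handler& h) {
            sycl::accessor acc_a(buf_a, h, sycl::read_only);
            sycl::accessor acc_b(buf_b, h, sycl::read_only);
            sycl::accessor acc_c(buf_c, h, sycl::write_only, sycl::no_init);
            h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
                acc_c[i] = acc_a[i] + acc_b[i];  // element-wise add on the device
            });
        });
    }  // buffers go out of scope here, so results are copied back into c

    std::cout << "c[0] = " << c[0] << " (expected 3)\n";
    return 0;
}
```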

Any help would be much appreciated.

Greydas
  • If you are a complete newbie and have written your own solver, it is likely horribly inefficient, and some simple optimizing will likely give good improvements. Start by doing some profiling to see where the bottlenecks are. While GPUs could possibly be used, doing so will increase the complexity of the solution *massively*. So you need to consider benefit/cost. Or just use a library that is already well optimized. – JonasH Apr 13 '23 at 08:50
  • My impression of the GPGPU space is that it is still a complete mess of vendor-specific APIs and compatibility issues. From the applications I use, CUDA seems to be the most popular, probably since it should be fairly mature by now. – JonasH Apr 13 '23 at 09:05
  • Relevant (published 13 hours ago): https://www.tomshardware.com/news/amd-rocm-comes-to-windows-on-consumer-gpus – virchau13 Apr 14 '23 at 11:55

0 Answers