I am interested in using F# for numerical computation. How can I access the GPU using NVIDIA's CUDA standard under F#?
6 Answers
I agree with jasper that the easiest option currently is to use Accelerator from Microsoft Research. I wrote a series of articles about using it from F#: a simple and direct introduction, a Game of Life example, a more advanced example using quotations, and an example of using advanced quotation features. Satnam Singh's blog is also a great resource with some F# demos.
One problem with current graphics cards is that they do not support integers (as a result, Accelerator supports them only when running on the optimized x64 parallel engine). Also, current graphics cards don't implement floating-point numbers according to the IEEE standards - they try to be faster by doing a bit of "guessing", which doesn't matter when calculating triangle positions, but could be an issue if you're dealing with financial calculations. (Accelerator can use various targets, so you're safe if you're using the x64 parallel engine.)
As far as I know, DirectCompute will require a precise implementation of floating-point arithmetic as well as direct support for integers, so it may be a good choice in the future (or if Accelerator eventually starts using DirectCompute as its engine).
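To give a flavour of the API, here is a minimal sketch of the kind of code those articles build up to. It assumes Accelerator v2's `Microsoft.ParallelArrays` namespace and its overloaded arithmetic operators; treat it as an illustration, not a drop-in sample:

```fsharp
// Element-wise (a + b) * 2 evaluated on the GPU via Accelerator's DX9 target.
// Assumes a reference to Microsoft.Accelerator.dll (Accelerator v2).
open Microsoft.ParallelArrays

let a = new FloatParallelArray(Array.init 1024 float32)
let b = new FloatParallelArray(Array.init 1024 (fun i -> float32 (i * 2)))
let two = new FloatParallelArray(Array.create 1024 2.0f)

// This only builds a data-parallel expression tree; nothing runs yet
let expr = (a + b) * two

// Evaluate on the GPU (X64MulticoreTarget is the x64 parallel engine
// mentioned above)
let target = new DX9Target()
let result : float32[] = target.ToArray1D(expr)
printfn "%A" result.[0 .. 4]
```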

- Are you sure the floating point issue still persists on the new NVIDIA Fermi generation (from GTX 460 on)? They claimed to have introduced improved support for double-precision arithmetic on Fermi. – Martin Oct 28 '10 at 20:56
- @Martin: I'm not sure about the latest generation of GPUs. Perhaps they already fixed this - it would be useful to have some clear guarantees. – Tomas Petricek Oct 28 '10 at 21:00
- I can assure you that all DX10 and DX11 generation GPUs support 32-bit integer precision. The latest DX11 generation (both AMD/ATI and NVIDIA) supports IEEE 754-2008, which means fused multiply-add and all the fancy stuff. – elmattic Oct 28 '10 at 23:23
- -1 For information that is just wrong. The latest GPUs are fully IEEE compliant AND they support integers. Who votes this crap up? – Eric Nov 01 '10 at 08:50
- Also, Accelerator STILL just talks about DX9 GPUs. DX10 and DX11 GPUs offer a lot more and a lot better options for GPU programming. – Eric Nov 01 '10 at 08:54
Probably only hardcore GPU geeks like me have heard about it. Tidepowerd (dead link) has made GPGPU possible for CIL-based languages (such as F#, C#, VB.NET, whatever). On the other hand, you can do the same for the F# language alone with a quotation-to-GPU runtime/API (I'm looking forward to seeing someone implement that). This is something Agent Smith has blogged about, and it is also mentioned in the Expert F# 1.0 book (Language Oriented Programming chapter), AFAIK.
Agent Smith (OK, sorry for that) is speaking about NVIDIA Cg. But you can do the same using HLSL DirectCompute shaders, OpenCL C99, PTX (low-level NVIDIA IL), CAL-IL (low-level AMD/ATI IL)...
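To make the quotation-to-GPU idea concrete, here is a minimal sketch of what such a translator's front end might look like. It uses only the standard F# quotation API; the actual shader code generation and execution, which are the hard part, are left out:

```fsharp
// Walk an F# quotation and emit shader-style source text for it.
// A real translator would cover far more constructs and then compile
// the emitted source (e.g. as Cg, HLSL or OpenCL C).
open Microsoft.FSharp.Quotations
open Microsoft.FSharp.Quotations.Patterns
open Microsoft.FSharp.Quotations.DerivedPatterns

let rec emit expr =
    match expr with
    | SpecificCall <@ (+) @> (_, _, [l; r]) -> sprintf "(%s + %s)" (emit l) (emit r)
    | SpecificCall <@ (*) @> (_, _, [l; r]) -> sprintf "(%s * %s)" (emit l) (emit r)
    | Var v -> v.Name
    | Value (v, _) -> string v
    | Lambda (_, body) -> emit body
    | _ -> failwithf "unsupported construct: %A" expr

// Prints "((x * x) + 1)"
printfn "%s" (emit <@ fun x -> x * x + 1 @>)
```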

As an alternative, you could consider using DirectCompute. The three big GPU APIs (CUDA, OpenCL and DirectCompute) are all very similar. DirectCompute can easily be accessed from F# via SlimDX, a .NET wrapper for DirectX.
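As a very rough sketch (assuming SlimDX's `Direct3D11` and `D3DCompiler` wrappers, and leaving out buffers, views and error handling entirely), dispatching a compute shader from F# looks something like this:

```fsharp
// Compile and dispatch an HLSL compute shader through SlimDX.
open SlimDX.Direct3D11
open SlimDX.D3DCompiler

let hlsl = "
[numthreads(64, 1, 1)]
void main(uint3 id : SV_DispatchThreadID) { /* kernel body */ }
"

let device = new Device(DriverType.Hardware, DeviceCreationFlags.None)
let bytecode = ShaderBytecode.Compile(hlsl, "main", "cs_5_0",
                                      ShaderFlags.None, EffectFlags.None)
let shader = new ComputeShader(device, bytecode)

let ctx = device.ImmediateContext
ctx.ComputeShader.Set(shader)
ctx.Dispatch(16, 1, 1)   // 16 thread groups of 64 threads each
```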

- -1 DirectCompute is the newest, and least well documented, of the three GPU APIs. It's not really recommendable right now. – Eric Oct 28 '10 at 13:45
Accelerator from MS allows you to leverage the GPU, so you can do something like this, though you can't use CUDA.

You might look into CUDA.NET. It would let you use CUDA straight from F#. It can be found here: http://www.hoopoe-cloud.com/Solutions/CUDA.NET/Default.aspx
The other usual alternative for using CUDA from managed code is to encapsulate the CUDA functionality in a native DLL and then either P/Invoke it or write a C++/CLI wrapper around it, which you then use from e.g. your F# program.
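The P/Invoke route might look like the following sketch. The native library name (`MyCudaKernels.dll`) and the exported `addVectors` function are made up for illustration; the actual CUDA kernel launch happens inside the native DLL, built with nvcc:

```fsharp
// Declare and call a C function exported from a native DLL that wraps
// a CUDA kernel. The DLL and function names here are hypothetical.
open System.Runtime.InteropServices

[<DllImport("MyCudaKernels.dll", CallingConvention = CallingConvention.Cdecl)>]
extern void addVectors(float32[] a, float32[] b, float32[] result, int n)

let n = 1024
let a = Array.init n float32
let b = Array.init n (fun i -> float32 (i * 2))
let result = Array.zeroCreate<float32> n
addVectors(a, b, result, n)   // kernel launch happens on the native side
```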

For the sake of documentation (this is an old question, and the existing answers do not cover the current technology landscape): if you had to write GPU/CUDA apps today, another option to consider is aleagpu.
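For example, a minimal sketch using Alea GPU's parallel-for with automatic memory management (this assumes its `Alea.Parallel` namespace and the `Gpu.Default` instance; check the current documentation for the exact package and namespace names):

```fsharp
// Element-wise vector addition run on the default GPU. The lambda is
// compiled to a CUDA kernel; Alea GPU manages the host/device transfers.
open Alea
open Alea.Parallel

let gpu = Gpu.Default
let n = 1 <<< 20
let x = Array.init n float32
let y = Array.init n (fun i -> float32 (i * 2))
let z = Array.zeroCreate<float32> n

gpu.For(0, n, fun i -> z.[i] <- x.[i] + y.[i])
printfn "%f" z.[1]
```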
