
I want to learn how to do GPU programming over the summer, and I'm open to all languages/libraries but most interested in PyCuda.

I am not a strong programmer; I can bang out most programs I want in Java and understand the rudiments of C, but when I try anything complex in the latter, a segfault or malloc error is almost certain.

Thus, I really need a "for dummies" tutorial/guide/documentation. Ideally, a guide would work from the very basics of GPU programming through to fairly complicated scientific/numerical programming while explaining each detail with clarity and depth that doesn't take for granted any prior knowledge.

Elliot JJ
  • You do realize that in PyCUDA, the GPU itself is still programmed in "CUDA C" (a C/C++ hybrid)? PyCUDA provides a nice pythonic abstraction of the host APIs and great numpy interoperability, but you still need the "C chops" to program the code that runs on the GPU. – talonmies May 20 '11 at 06:08
  • I understand this. I figured that PyCuda might happen to have better tutorials; I also plan to learn it sooner or later for the reasons you mentioned, so I figure I might learn it simultaneously with CUDA. If you know of good tutorials that deal solely with CUDA, that's great too! As far as "C chops" - I won't have those no matter what interface/API I use; I'll have to develop them - "trial by fire" – Elliot JJ May 20 '11 at 06:10
  • this free course by udacity is a great resource: https://classroom.udacity.com/courses/cs344/ – Vadim Smolyakov Aug 23 '17 at 11:31

1 Answer


Starting with PyCUDA doesn't eliminate the need to understand how CUDA works and how to program the GPU. Realistically you probably need to do all of the following, and in this order:

  1. Learn enough C to at least have a grasp of the syntax and a thorough understanding of pointers and memory concepts. The latter is really important, because in CUDA you are always working with a non-uniform address space. There are dragons aplenty if you can't understand why pointers aren't portable and why indirecting a pointer in the wrong memory space can't work.
  2. Work through something like CUDA by Example to get the hang of the basic ideas behind CUDA programming and how the APIs work.
  3. Do whatever "Python for dummies" and "numpy for dummies" tutorials you need to get up to speed with the Python end of things.

Then PyCUDA will become completely self-evident. It took me about an hour to digest PyCUDA, coming from a background of already knowing how to write working CUDA code and of working a lot with Python and numpy.
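To give a sense of what that end state looks like, here is a minimal sketch along the lines of the introductory example in PyCUDA's own documentation, assuming pycuda is installed and a CUDA-capable GPU is present. Note that the kernel passed to `SourceModule` is ordinary CUDA C, which is exactly why steps 1 and 2 come first:

```python
import numpy as np
import pycuda.autoinit          # creates a CUDA context on the first GPU
import pycuda.driver as drv
from pycuda.compiler import SourceModule

# The kernel source is plain CUDA C, compiled at runtime by nvcc.
mod = SourceModule("""
__global__ void double_them(float *a)
{
    int i = threadIdx.x;
    a[i] = 2.0f * a[i];
}
""")
double_them = mod.get_function("double_them")

a = np.random.randn(32).astype(np.float32)
a_doubled = a.copy()

# drv.InOut copies the array to the device, runs the kernel,
# and copies the result back -- the host/device pointer dance
# from step 1, handled for you.
double_them(drv.InOut(a_doubled), block=(32, 1, 1), grid=(1, 1))

print(np.allclose(a_doubled, 2 * a))  # should print True
```

The point of the example is how thin the layer is: numpy handles the host-side data, and PyCUDA handles compilation, memory transfer, and the kernel launch, but the GPU-side code is still C.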

talonmies
  • Thanks! I guess I need to really focus on the C end of things; that will be the bottleneck, not Python. By the way, is CUDA designed so that it can't screw up the GPU, no matter how incompetently a program is written? I absolutely cannot afford to harm my GPU. – Elliot JJ May 20 '11 at 06:51
  • You can't *programmatically* do anything harmful to the GPU with CUDA; at worst, on older cards, you can hose your desktop display RAM and need to reboot. But CUDA workloads can be about as hard on the hardware as full-on 3D gaming, so if you have poor cooling or a dodgy power supply, the same caveats apply as would to heavy-duty 3D gaming. – talonmies May 20 '11 at 07:00