
My computer has a GeForce GTX 960M which is claimed by NVIDIA to have 640 CUDA cores. However, when I run clGetDeviceInfo to find out the number of computing units in my computer, it prints out 5 (see the figure below). It sounds like CUDA cores are somewhat different from what OpenCL considers as computing units? Or maybe a group of CUDA cores form an OpenCL computing unit? Can you explain this to me?

[screenshot: clGetDeviceInfo reporting 5 compute units]

mfaieghi
    There's a table that maps OpenCL <-> CUDA lingo floating around the internet. What OpenCL calls a Compute Unit is CUDA's Streaming Multiprocessor. CUDA "cores" are essentially ALUs/FPUs. The GTX 960M has 5 SMs with 128 cores each, which is 640 in total. – user703016 Dec 14 '15 at 04:07
  • @Angy Lettuce Thanks for the answer. So, if I understand this correctly, every work-group will be executed on a compute unit; therefore, given a maximum work-group size of 1024, the best parallelism I can get on this GPU is 1024*5 = 5120 work-items at the same time. Is this right? – mfaieghi Dec 14 '15 at 19:49
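The terminology mapping mentioned in the comments can be sketched roughly as follows (the pairings are assumed from common usage; NVIDIA's OpenCL Programming Guide gives the authoritative table):

```python
# Rough OpenCL -> CUDA terminology mapping (assumed from common usage)
OPENCL_TO_CUDA = {
    "compute unit":       "streaming multiprocessor (SM)",
    "processing element": "CUDA core",
    "work-item":          "thread",
    "work-group":         "thread block",
    "NDRange":            "grid",
    "local memory":       "shared memory",
    "private memory":     "registers / thread-local memory",
}

for ocl, cuda in OPENCL_TO_CUDA.items():
    print(f"{ocl:>18}  ->  {cuda}")
```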

2 Answers


What is the relationship between NVIDIA GPUs' CUDA cores and OpenCL computing units?

Your GTX 960M is a Maxwell device with 5 Streaming Multiprocessors, each with 128 CUDA cores, for a total of 640 CUDA cores.

The NVIDIA Streaming Multiprocessor is equivalent to an OpenCL Compute Unit. The previously linked answer will also give you some useful information that may help with your kernel sizing question in the comments.
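As a back-of-the-envelope check using only the figures above (note the second number is just the arithmetic from the asker's comment, not necessarily the device's true occupancy limit):

```python
# Sanity check with the GTX 960M figures from the answer above.
compute_units = 5    # as reported by clGetDeviceInfo (CL_DEVICE_MAX_COMPUTE_UNITS)
cores_per_sm = 128   # CUDA cores per Maxwell SM
total_cuda_cores = compute_units * cores_per_sm
print(total_cuda_cores)  # 640, matching NVIDIA's spec

# The figure computed in the comments above:
max_work_group_size = 1024
print(compute_units * max_work_group_size)  # 5120
```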

Robert Crovella

The CUDA architecture is a close match to the OpenCL architecture.

A CUDA device is built around a scalable array of multithreaded Streaming Multiprocessors (SMs). A multiprocessor corresponds to an OpenCL compute unit.

A multiprocessor executes a CUDA thread for each OpenCL work-item and a thread block for each OpenCL work-group. A kernel is executed over an OpenCL NDRange by a grid of thread blocks. As illustrated in Figure 2-1, each of the thread blocks that execute a kernel is therefore uniquely identified by its work-group ID, and each thread by its global ID or by a combination of its local ID and work-group ID.

Copied from the OpenCL Programming Guide for the CUDA Architecture: http://www.nvidia.com/content/cudazone/download/OpenCL/NVIDIA_OpenCL_ProgrammingGuide.pdf
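The ID relationship quoted above can be sketched in one dimension (the function name is illustrative, not an OpenCL API):

```python
# In 1-D: global ID = work-group ID * work-group size + local ID
def global_id(group_id: int, local_size: int, local_id: int) -> int:
    return group_id * local_size + local_id

# e.g. work-item 3 of work-group 2, with work-groups of 128 work-items:
print(global_id(2, 128, 3))  # 259
```

OpenCL kernels expose the same quantities via get_global_id(), get_group_id(), get_local_size(), and get_local_id().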

Md Monjur Ul Hasan