16

I have only found a remark that local memory is slower than register memory, the two per-thread memory types.

Shared memory is supposed to be fast, but is it faster than local memory [of the thread]?

What I want to do is kind of a median filter, but with a given percentile instead of the median. Thus I need to take chunks of the list, sort them, and then pick a suitable element. But I can't sort the list in shared memory in place, or things go wrong. Will I lose a lot of performance by just copying to local memory?
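For reference, the selection step described above can be sketched on the host side as follows. This is a minimal illustration, not the asker's actual kernel; the function name and the index rounding are assumptions.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Sketch of a percentile filter's selection step: sort a copy of the
// chunk, then pick the element at the requested percentile position.
// p is in [0, 1]; p = 0.5 gives the median. Index rounding (truncation
// here) is an illustrative choice.
float percentile_of_chunk(std::vector<float> chunk, float p)
{
    std::sort(chunk.begin(), chunk.end());
    std::size_t idx = static_cast<std::size_t>(p * (chunk.size() - 1));
    return chunk[idx];
}
```

In a CUDA kernel, each thread would do the same thing on its own chunk, which is why the choice between registers, local memory, and shared memory for the per-thread working copy matters.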

TylerH
  • 20,799
  • 66
  • 75
  • 101
JohnKay
  • 233
  • 2
  • 7
  • This is not really programming related, is it? I don't see a strong link to the Mathematica tag either. – Sjoerd C. de Vries Aug 30 '11 at 10:29
  • 9
    @Sjoerd C. de Vries: in the context of CUDA, it is a programming related question - the architecture has a non uniform memory space and the programmer must explicitly choose which memory types and accessing methods should be used in any code he or she writes. It is a basic tenet of CUDA programming. – talonmies Aug 30 '11 at 10:35
  • @talonmies I understand that, but still this question is not about programmatically selecting memory, differences wrt API's, programming registers vs programming shared memory etc. It's basically about which memory type is faster. That's a hardware question. I feel the OP should rephrase the question, for instance in the direction of his problem of finding a certain percentile of the data using shared memory in CUDA. – Sjoerd C. de Vries Aug 30 '11 at 11:56
  • Well, I am doing this in Mathematica through its excellent CUDALink feature, which lets you write CUDA straight into Mathematica, and it is a very common kind of task to use Mathematica for, but sure. – JohnKay Sep 02 '11 at 05:51

1 Answer

25

Local memory is just thread-local global memory. It is much, much slower (both in terms of bandwidth and latency) than either registers or shared memory. It also consumes memory controller bandwidth that would otherwise be available for global memory transactions. The performance impact of spilling to, or deliberately using, local memory can be anything from minor to severe, depending on the hardware you are using and how the local memory is used.
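To make this concrete, here is a hypothetical kernel in which the per-thread array ends up in local memory. Because the array is indexed with values that are not all compile-time constants, the compiler generally cannot promote it to registers and places it in local (thread-local global) memory instead; `nvcc -Xptxas -v` reports the per-thread local memory usage. The kernel name, window size, and layout are illustrative assumptions.

```cuda
// Hypothetical percentile kernel: each thread copies its 9-element
// window into a private array, insertion-sorts it, and writes out the
// k-th smallest element. The dynamic indexing in the sort typically
// forces `window` into local memory rather than registers.
__global__ void percentile_local(const float *in, float *out, int k)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;

    float window[9];  // usually placed in local memory, not registers
    for (int i = 0; i < 9; ++i)
        window[i] = in[tid * 9 + i];

    // Simple per-thread insertion sort of the private window.
    for (int i = 1; i < 9; ++i) {
        float v = window[i];
        int j = i - 1;
        while (j >= 0 && window[j] > v) {
            window[j + 1] = window[j];
            --j;
        }
        window[j + 1] = v;
    }

    out[tid] = window[k];  // pick the k-th element (the percentile)
}
```

Note that for small, fully unrolled loops the compiler can sometimes keep such an array in registers after all; checking the `-Xptxas -v` output (or the PTX) is the only reliable way to know.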

According to Vasily Volkov's research (see Better Performance at Lower Occupancy, pdf), there is about a factor of 8 difference in effective bandwidth between shared memory and registers on Fermi GPUs (about 1000 GB/s for shared memory versus 8000 GB/s for registers). This somewhat contradicts the CUDA documentation, which implies that shared memory is comparable in speed to registers.

talonmies
  • 70,661
  • 34
  • 192
  • 269
  • Yes, thank you talonmies. To further elaborate with my experimental findings supporting this info: working in local memory was indeed orders of magnitude slower for my problem. Since my program already runs at the limit of my hardware's shared memory size per block, I could not use shared memory for the recalculations, so I had to use some not-so-smart register-memory algorithms to look for my percentile; this turned out to be pretty fast anyway. – JohnKay Sep 02 '11 at 05:49