
Is a global memory write considered atomic or not in CUDA?

Consider the following CUDA kernel code:

int idx = blockIdx.x*blockDim.x+threadIdx.x;
int gidx = idx%1000;
globalStorage[gidx] = somefunction(idx);

Is the global memory write to globalStorage atomic? That is, is there no race condition in which concurrent kernel threads write to the bytes of the same element of globalStorage and corrupt the result (e.g. partial writes)?

Note that I am not talking about atomic operations like add/sub/bitwise etc. here, just a plain global memory write.
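
For reference, by an atomic operation I mean something like the atomicAdd in this minimal sketch; the plain store on the last line is what this question is about. The kernel name and the counters array are just hypothetical illustrations, and somefunction is the __device__ function from the snippet above:

__global__ void examplekernel(int *globalStorage, int *counters)
{
    int idx = blockIdx.x*blockDim.x+threadIdx.x;
    int gidx = idx%1000;

    // Atomic read-modify-write: NOT what I am asking about.
    atomicAdd(&counters[gidx], 1);

    // Plain global write: this is what I am asking about.
    globalStorage[gidx] = somefunction(idx);
}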

Edited: Rewrote the example code to avoid confusion.

user0002128
    Please give your questions a bit more thought - it isn't much fun to write a perfectly valid answer and then discover that the question was later rewritten making that answer invalid. – talonmies Dec 21 '13 at 07:40
  • You don't seem to understand what atomic means. Atomic refers to *multiple* operations (canonical: Read-Modify-Write) that are executed in sequence and cannot be disturbed by other intervening operations. Referring to a single operation (e.g. a write) and asking if it is atomic, is therefore not sensible. In general, multiple writes to the same location in global memory, or combinations of reads and writes to the same location in global memory, will definitely give the possibility of race conditions, and unexpected results. – Robert Crovella Dec 21 '13 at 21:06
  • If you are asking about what order multiple writes will occur in, there is no guarantee of execution order in CUDA (even on an individual thread or warp basis), and therefore multiple writes to the same location will give undefined results. One of the writes will succeed, i.e. it will end up in the location after some period of time. But anything beyond that is undefined. In my opinion, your question is still unclear. – Robert Crovella Dec 21 '13 at 21:11

1 Answer


Memory accesses in CUDA are not implicitly atomic. However, the code you originally showed isn't intrinsically a memory race, as long as idx has a unique value for each thread in the running kernel.

So your original code:

int idx = blockIdx.x*blockDim.x+threadIdx.x;
globalStorage[idx] = somefunction(idx);

would be safe if the kernel launch uses a 1D grid and globalStorage is suitably sized, whereas your second version:

int idx = blockIdx.x*blockDim.x+threadIdx.x;
int gidx = idx%1000;
globalStorage[gidx] = somefunction(idx);

would not be, because multiple threads could potentially write to the same entry in globalStorage. There are no atomic protections or serialisation mechanisms that would produce a predictable result in such a case.
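
If you do need a well defined result when several threads map to the same gidx, the update has to be expressed as an atomic read-modify-write rather than a plain store. A minimal sketch, assuming globalStorage is an int array and (purely for illustration) that you wanted a per-entry sum of the contributions rather than a plain overwrite:

int idx = blockIdx.x*blockDim.x+threadIdx.x;
int gidx = idx%1000;
// atomicAdd serialises the read-modify-write on each entry, so every
// thread's contribution is accumulated without lost or partial updates.
atomicAdd(&globalStorage[gidx], somefunction(idx));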

talonmies