
I am implementing an image processing algorithm with CUDA and I have some questions about thread synchronization in general.

The problem at hand can be explained as follows:

We have an image of size W*H. For each pixel of the image I need to run 9 identical data-parallel processes, and each process produces an array of values as its result (the arrays are of the same length for the whole algorithm, let's say N, which is around 20 or 30). For each pixel, these 9 processes will accumulate their results into a final array (a single array per pixel) after they finish their calculations.

In order to parallelise this, I designed the following structure: I generate blocks with dimensions (10,10,9), which means each thread block will process a 10*10 sub-image and each thread will handle 1 of the 9 identical processes for a single pixel. The grid dimensions will be (W/10,H/10,1) in this case. For each thread block, I will allocate a shared memory array of length 100*N, and each thread will write to the appropriate shared memory location according to the coordinates of its current pixel. So, I need synchronization with an atomicAdd and __syncthreads() here.
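For reference, the host-side launch configuration for this layout might look like the following (a sketch; the kernel name and argument list are placeholders, and W and H are assumed to be multiples of 10):

    // 10*10 pixels per block, 9 processes per pixel = 900 threads per block.
    // Note: 900 threads per block requires a device that supports at least
    // 900 threads per block (compute capability 2.0+ allows up to 1024).
    dim3 blockDim(10, 10, 9);
    dim3 gridDim(W / 10, H / 10, 1);

    // Shared memory: one N-element float accumulator per pixel in the block.
    size_t sharedBytes = 100 * N * sizeof(float);

    processKernel<<<gridDim, blockDim, sharedBytes>>>(/* image, outputs, ... */);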

The problem here is: if a pixel has a value of zero, then we don't need to process it at all, so I want to exit early for such pixels; otherwise I will do unnecessary work, since a large portion of the image consists of zeroes (background). So, I thought of writing something like the following:

// X and Y are the coordinates of the current pixel in the input image.
// threadIdx.z gives the index of the process among the 9 for the current pixel.

int X = blockIdx.x * blockDim.x + threadIdx.x;
int Y = blockIdx.y * blockDim.y + threadIdx.y;
int numOfProcessForTheCurrPixel = threadIdx.z;
int linearIndexOfPixelInBlock = threadIdx.y * blockDim.x + threadIdx.x;

unsigned short pixelValue = tex2D(image, X, Y);
// Here, threads processing zero pixels will exit immediately.
if (pixelValue == 0)
    return;

float resultArray[22];
// Fill the result array according to our algorithm; mostly irrelevant details.
ProcessPixel(resultArray, X, Y, numOfProcessForTheCurrPixel);

for (int i = 0; i < 22; i++)
    atomicAdd(&__sharedMemoryArray[22 * linearIndexOfPixelInBlock + i], resultArray[i]);

__syncthreads();
// Then copy from shared to global memory, etc.

What worries me in this situation is what the Programming Guide is saying:

__syncthreads() is allowed in conditional code but only if the conditional evaluates identically across the entire thread block, otherwise the code execution is likely to hang or produce unintended side effects.

So in my case, if some of the pixels in a 10*10 thread block are zero and some are not, the threads belonging to the zero pixels will exit immediately at the beginning while the other threads continue with their processing. What about the synchronization in this case: will it still work properly, or will it produce undefined behavior as the Programming Guide says? I thought of making the zero-pixel threads process garbage data to keep them busy, but this will unnecessarily increase the processing time when we have blocks that consist entirely of zeroes (and we have them very often). What can be done in this case?

talonmies
Ufuk Can Bicici
    Your code is a recipe for deadlock. See the linked thread for a comprehensive answer. – talonmies Jul 25 '12 at 16:23
I see, so all threads in the warp must actually hit the barrier instruction. What can be done in my case then? How can I exit from blocks of zero pixels without doing unnecessary work, while at the same time avoiding deadlocks and synchronization issues? – Ufuk Can Bicici Jul 25 '12 at 16:36
And another question: in the linked thread, it says that the count of threads which have hit the barrier instruction is increased by the warp size, not by the number of active threads. So, presumably the scheduler begins processing a new warp from the current thread block until it hits the barrier, too. In this case, how does the system check whether all the threads in the block have hit the barrier? Does it compare the arrival count with some value, like "(ceil((num. of threads in the block)/(warp_size))+1)*warp_size", or what? I don't see how this leads to a deadlock. – Ufuk Can Bicici Jul 25 '12 at 16:51
  • 2
Keep in mind that multiple threads on the GPU don't execute independently of each other. All the threads in a warp execute the same instruction at the same time. In an if statement, if one thread takes the if clause and all other threads take the else clause, the one thread will execute the if clause while the other threads idle, then the else clause threads will execute while the one thread idles. At the end of the if statement, the threads are back in sync executing the same instructions. – dthorpe Jul 25 '12 at 17:17
I see how I should code the kernel now, but I am still confused by the "Thus, if any thread in a warp executes a bar instruction, it is as if all the threads in the warp have executed the bar instruction." part of the guide quote. Let's assume that one thread took the else part of an if clause while the others took the if branch, and we have a barrier in the else part. According to the quoted sentence, all threads in the warp are assumed to hit the barrier and increase the arrival count by the warp size, so all threads are treated as if they are blocked. So, how can this cause a deadlock? – Ufuk Can Bicici Jul 26 '12 at 08:37
(Continuing from the previous comment) That is why I asked how the system detects that the barrier has completed. If there is an arrival counter, then this counter must become equal to the thread count in the block in order to show that all threads have completed the barrier. If this counter were increased per thread, it could create a deadlock, since the number would never reach the total thread count in the case of an if clause. But since the counter is updated by the warp size, it acts as if all the threads have reached the barrier. – Ufuk Can Bicici Jul 26 '12 at 08:59
Since your code is based on a pixel value rather than a thread index, you can't guarantee at least one thread in each warp will hit the `__syncthreads`, so this code is indeed a recipe for deadlock as talonmies said. **However, if you can make that guarantee (e.g. using a thread index), then you can exit early.** Please see [my new answer](http://stackoverflow.com/a/30382467/2778484) to a different question on the same issue. – chappjc May 21 '15 at 19:19

1 Answer


To avoid creating a deadlock, all threads need to hit the __syncthreads() unconditionally. You can do that in your example by replacing the return with an if statement that jumps over the bulk of the function and heads straight for the __syncthreads() in the zero-pixel case.

unsigned short pixelValue = tex2D(image, X, Y);

// If there's nothing to compute, jump over all the computation stuff.
if (pixelValue != 0)
{
    float resultArray[22];
    // Fill the result array according to our algorithm; mostly irrelevant details.
    ProcessPixel(resultArray, X, Y, numOfProcessForTheCurrPixel);

    for (int i = 0; i < 22; i++)
        atomicAdd(&__sharedMemoryArray[22 * linearIndexOfPixelInBlock + i], resultArray[i]);
}

__syncthreads();

if (pixelValue != 0)
{
    // Then copy from shared to global memory, etc.
}
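For blocks that consist entirely of zero pixels (which the question says are common), one possible refinement is to let the whole block decide to exit together. This is a sketch, not part of the original answer, and it assumes compute capability 2.0 or later, where __syncthreads_count() is available. The intrinsic is itself a barrier that every thread executes, so the uniform-condition requirement is satisfied, and it returns the number of threads whose predicate was non-zero:

    unsigned short pixelValue = tex2D(image, X, Y);

    // Every thread in the block reaches this barrier, so it is safe; the
    // intrinsic counts how many threads saw a non-zero pixel.
    if (__syncthreads_count(pixelValue != 0) == 0)
        return; // the entire block returns together: no thread is left waiting

    if (pixelValue != 0)
    {
        // per-pixel work and atomicAdd accumulation as above
    }

    __syncthreads();
    // copy from shared to global memory, etc.

This way an all-zero block pays only for the texture read and one barrier before exiting, while mixed blocks fall through to the guarded-if pattern shown in the answer.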
dthorpe