12

Recently I've been doing string comparison work on CUDA, and I wonder how a __global__ function can return a value when it finds the exact string that I'm looking for.

I mean, I need a __global__ function, containing a great number of threads, to find a certain string among a big string pool simultaneously, and I hope that once the exact string is caught, the __global__ function can stop all the threads, return to the main function, and tell me "he did it"!

I'm using CUDA C. How can I possibly achieve this?

talonmies
Kai Cui
  • here's one solution that I received, but I still want the global function to respond as soon as it gets the right string... QUOTE You can use a hierarchical shared memory flag within CTAs and a global memory flag to communicate across all CTAs, and both of these need to be volatile. All threads/CTAs periodically check these flags to see whether to continue searching (the one that finds the string updates it). QUOTE – Kai Cui Sep 20 '12 at 03:38

3 Answers

25

There is no way in CUDA (or on NVIDIA GPUs) for one thread to interrupt execution of all running threads. You can't have immediate exit of the kernel as soon as a result is found, it's just not possible today.

But you can have all threads exit as soon as possible after one thread finds a result. Here's a model of how you would do that.

__global__ void kernel(volatile bool *found, ...)
{
    while (!(*found) && workLeftToDo()) {

       bool iFoundIt = do_some_work(...); // see notes below

       if (iFoundIt) *found = true;
    }
}

Some notes on this.

  1. Note the use of volatile. This is important.
  2. Make sure you initialize found—which must be a device pointer—to false before launching the kernel!
  3. Threads will not exit instantly when another thread updates found. They will exit only the next time they return to the top of the while loop.
  4. How you implement do_some_work matters. If it is too much work (or too variable), then the delay to exit after a result is found will be long (or variable). If it is too little work, then your threads will be spending most of their time checking found rather than doing useful work.
  5. do_some_work is also responsible for allocating tasks (i.e. computing/incrementing indices), and how you do that is problem specific.
  6. If the number of blocks you launch is much larger than the maximum occupancy of the kernel on the present GPU, and a match is not found in the first running "wave" of thread blocks, then this kernel (and the one below) can deadlock. If a match is found in the first wave, then later blocks will only run after found == true, which means they will launch, then exit immediately. The solution is to launch only as many blocks as can be resident simultaneously (aka "maximal launch"), and update your task allocation accordingly.
  7. If the number of tasks is relatively small, you can replace the while with an if and run just enough threads to cover the number of tasks. Then there is no chance for deadlock (but the first part of the previous point applies).
  8. workLeftToDo() is problem-specific, but it would return false when there is no work left to do, so that we don't deadlock in the case that no match is found.
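To make notes 2 and 6 concrete, the host-side setup could look roughly like this. This is only a sketch (written in browser, untested); the blocks-per-SM figure is an assumption you would measure or query for your own kernel (newer toolkits expose `cudaOccupancyMaxActiveBlocksPerMultiprocessor` for this), and `kernel` is the one above:

```cuda
// Host-side sketch (untested): initialize the flag and use a "maximal launch".
bool *d_found;
cudaMalloc(&d_found, sizeof(bool));
cudaMemset(d_found, 0, sizeof(bool));   // note 2: found must start out false

// Note 6: launch only as many blocks as can be resident simultaneously.
cudaDeviceProp prop;
cudaGetDeviceProperties(&prop, 0);
int blocksPerSm = 8;                    // assumption: measured/queried separately
int numBlocks   = blocksPerSm * prop.multiProcessorCount;

kernel<<<numBlocks, 256>>>(d_found, /* ... */);

// After the kernel, copy the flag back to see whether a match was found.
bool h_found = false;
cudaMemcpy(&h_found, d_found, sizeof(bool), cudaMemcpyDeviceToHost);
cudaFree(d_found);
```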

Now, the above may result in excessive partition camping (all threads banging on the same memory), especially on older architectures without L1 cache. So you might want to write a slightly more complicated version, using a shared status per block.

__global__ void kernel(volatile bool *found, ...)
{
    volatile __shared__ bool someoneFoundIt;

    // initialize shared status
    if (threadIdx.x == 0) someoneFoundIt = *found;
    __syncthreads();

    while(!someoneFoundIt && workLeftToDo()) {

       bool iFoundIt = do_some_work(...); 

       // if I found it, tell everyone they can exit
       if (iFoundIt) { someoneFoundIt = true; *found = true; }

       // if someone in another block found it, tell 
       // everyone in my block they can exit
       if (threadIdx.x == 0 && *found) someoneFoundIt = true;

       __syncthreads();
    }
}

This way, one thread per block polls the global variable, and only threads that find a match ever write to it, so global memory traffic is minimized.

Aside: __global__ functions are void because it's difficult to define how to return values from 1000s of threads into a single CPU thread. It is trivial for the user to contrive a return array in device or zero-copy memory which suits his purpose, but difficult to make a generic mechanism.
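For example, one such contrived mechanism, using zero-copy (mapped pinned) memory so the host can read the flag without an explicit copy, might be sketched like this (untested; assumes the device reports `canMapHostMemory`):

```cuda
// Sketch (untested): returning a single result flag via zero-copy memory.
cudaSetDeviceFlags(cudaDeviceMapHost);      // must precede context creation

bool *h_found;                              // host pointer, pinned and mapped
cudaHostAlloc(&h_found, sizeof(bool), cudaHostAllocMapped);
*h_found = false;

bool *d_found;                              // device alias of the same memory
cudaHostGetDevicePointer(&d_found, h_found, 0);

kernel<<<numBlocks, threadsPerBlock>>>(d_found, /* ... */);
cudaDeviceSynchronize();                    // make sure the kernel is done

if (*h_found) { /* "he did it!" */ }
cudaFreeHost(h_found);
```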

Disclaimer: Code written in browser, untested, unverified.

harrism
  • Credit to Cliff Woolley, Paulius Micikevicius, and Stephen Jones (NVIDIA) for contributing to this answer. – harrism Sep 20 '12 at 04:40
  • This is the best way to do this, but be aware there is a potential deadlock in both of these codes if they are run with more blocks than can be resident on a GPU at once. The implicit assumption is that either a running block or an already-run block will find the match and set the flag for other blocks to see. But if the work division is such that the block which will find the match doesn't get to run in the first GPU "fill" of concurrent blocks, the running blocks will never terminate, and the kernel will deadlock. – talonmies Sep 20 '12 at 05:46
  • I guess the same applies for the case where no match is found as well. It is implicit in those two kernels that a match will always be found. – talonmies Sep 20 '12 at 06:23
  • Yeah, although do_some_work() could return true if there's no more work. The calling code might be confused, but then again something other than "found" has to be stored, and if nothing is found, it won't be... Again, I edited to account for this case. Thanks! – harrism Sep 20 '12 at 06:34
  • wow, that's really helpful, and thank you both. But as you said, **someoneFoundIt** may be the bottleneck because 1000s of blocks need to read its value, so is there any kind of memory other than **__shared__**, maybe like those texture registers, that can speed up the memory access? – Kai Cui Sep 21 '12 at 09:08
  • `__shared__` is pretty fast, and when all threads read the same address in shared mem as in this case, the value is broadcast within each warp, so as long as your do_some_work is a sufficient amount of work, that should not be a bottleneck. – harrism Sep 23 '12 at 01:55
5

If you feel adventurous, an alternative approach to stopping kernel execution would be to just execute

// (write result to memory here)
__threadfence();
asm("trap;");

if an answer is found.

This doesn't require polling memory, but it is inferior to the solution that Mark Harris suggested in that it makes the kernel exit with an error condition. This may mask actual errors (so be sure to write out your results in a way that clearly allows you to tell a successful execution from an error), and it may cause other hiccups or decrease overall performance, as the driver treats this as an exception.
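A rough host-side sketch (untested) of telling the two cases apart: have the kernel write its match into zero-copy or otherwise host-visible memory (here a hypothetical `h_result`/`d_result` mapped pair, initialized to -1) before the trap, then check the result first and the error code second:

```cuda
// Sketch (untested): distinguish "found and trapped" from a genuine error.
kernel<<<numBlocks, threadsPerBlock>>>(d_result, /* ... */);
cudaError_t err = cudaDeviceSynchronize();  // the trap surfaces here

if (*h_result >= 0) {
    // a match index was written out, so the launch "failure" was our own trap
} else if (err != cudaSuccess) {
    // no result was written, so this is a real error
    fprintf(stderr, "kernel failed: %s\n", cudaGetErrorString(err));
}
// After a trap the context may be unusable; a cudaDeviceReset() may be
// needed before launching further work.
```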

If you look for a safe and simple solution, go with Mark Harris' suggestion instead.

tera
  • A downside of this is the error you get from the kernel is asynchronous, so you will have to synchronize the device or stream to catch it precisely. See [this answer](http://stackoverflow.com/questions/12521721/crashing-a-kernel-gracefully/12523539#12523539). – harrism Sep 21 '12 at 02:40
  • thanks for your advice. Actually my code implements an exhaustive search to match a certain string, to see how fast my GTX 560 can go. I'll try both of your solutions, but when I googled the function __threadfence(), it said that __threadfence() only makes the flag variable visible to all the blocks of threads; how does it work to cause an exception as you said? – Kai Cui Sep 21 '12 at 08:43
  • The `__threadfence()` is indeed just there to make sure the results have safely reached memory before the `trap` is executed. My use of the word 'exception' may have been a bit unfortunate, as this does not cause an exception in the C++ sense. I just wanted to emphasize that this spoils the normal smooth flow of queued kernels and may cause the driver to do additional work reinitializing the device. – tera Sep 21 '12 at 09:24
0

The global function doesn't really contain a great number of threads the way you think it does. It is simply a kernel, a function that runs on the device, that is called with launch parameters specifying the thread model. The model that CUDA employs is a 2D grid of blocks and then a 3D thread model inside each block on the grid.

With the type of problem you have, it is not really necessary to use anything besides a 1D grid with 1D blocks of threads, because a string pool doesn't really make sense to split into 2D like other problems (e.g. matrix multiplication).

I'll walk through a simple example of say 100 strings in the string pool and you want them all to be checked in a parallelized fashion instead of sequentially.

//main
//Should cudaMalloc and cudaMemcpy to device before this code
dim3 dimGrid(10, 1);   // 1D grid with 10 blocks
dim3 dimBlocks(10, 1); // 1D blocks with 10 threads
fun<<<dimGrid, dimBlocks>>>(d_strings, d_stringToMatch, d_answerIdx);
//cudaMemcpy answerIdx back to an integer on the host

//kernel (not positive on these types as my CUDA is very rusty)
__global__ void fun(char *strings[], char *stringToMatch, int *answerIdx)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;

    //Obviously use whatever function you've been using for string comparison;
    //== compares pointers and is just here for example's sake
    if(strings[idx] == stringToMatch)
    {
       *answerIdx = idx;
    }
}

This is obviously not the most efficient approach, and is most likely not the exact way to pass parameters and work with memory in CUDA, but I hope it gets the point across of splitting the workload, and that 'global' functions get executed on many different cores, so you can't really tell them all to stop. There may be a way I'm not familiar with, but the speed-up you will get by just dividing the workload onto the device (in a sensible fashion, of course) will already give you a tremendous performance improvement. To get a sense of the thread model, I highly recommend reading the documents on Nvidia's site for CUDA. They will help tremendously and teach you the best way to set up the grid and blocks for optimal performance.

xshoppyx
  • thanks for your advice. Actually my code implements an exhaustive search to match a certain string, to see how fast my GTX 560 can go. So, as @harrism said, it's necessary to use a **volatile** variable. – Kai Cui Sep 21 '12 at 09:18