3

I am working on an application that needs to run a CUDA kernel indefinitely. I have one CPU thread that writes some data to a list, and the GPU reads that list and resets it (at least to start with). When I write inside the kernel

while(true)
{
//kernel code
}

the system hangs. I know the GPU is still processing, but of course nothing visible happens, and I am not sure the reset of the list ever takes place.

I should mention that the GPU used for the calculations does not drive a display, so there is no watchdog-timer problem.

The OS is Ubuntu 11.10 with CUDA Toolkit 4.1. I could use any help/examples/links on writing an infinite kernel successfully.

amanda
  • The CUDA scheduler is really bad at handling infinite loops, spin-locks, etc., since such "objects" are totally alien to the GPU architecture. A much more common and predictable approach is to just run your kernel once in a while to check whether new elements have appeared. – aland May 03 '12 at 17:47
  • Also, new elements can't just appear. You have to put them there. So you know when it's necessary to rerun the kernel. – Roger Dahl May 03 '12 at 18:29
  • Power usage on a high end GPU can jump up by 250W when a kernel is running, so there's money to save by being selective about when to run the kernel. More environmentally friendly too. – Roger Dahl May 03 '12 at 20:44
  • Thank you all for your comments, but the infinite kernel is mandatory for the current project. The goal is a GPU controller, so the GPU has to work autonomously without CPU interference (except, of course, for the kernel call). – amanda May 04 '12 at 05:55
  • Could you please give more information about your project and what exactly you want to achieve by infinite kernel? – aland May 04 '12 at 06:24
  • The GPU will be used as a device that is started from the CPU and then keeps serving indefinitely (something like a controller). The CPU writes to a memory region, and the GPU has to read that region, serve the request, and answer. The rest doesn't matter for now; I am just trying to write something from the CPU and reset it from the GPU. – amanda May 04 '12 at 06:39
  • What are you actually asking here? I read your "question" three times and I don't actually see what it is you want to know. – talonmies May 04 '12 at 06:46
  • @talonmies: probably you didn't read carefully: "I could use any help/examples/links on writing an infinite kernel successfully." To be clear: I am asking for help (or an example, or a reading suggestion) from someone who has developed or seen something similar. By similar I mean efficiently using the GPU as a device that runs standalone while the application (or driver) runs. – amanda May 04 '12 at 06:57
  • "the infinite kernel is mandatory for the current project. the goal is a gpu controller so, the gpu has to work autonomously without cpu interference (except of course for the kernel call)." Your entire idea sounds completely flawed IMO. You should go back and carefully rethink it. Take to heart what I said earlier: new elements can't just appear. You have to put them there. So you know when it's necessary to rerun the kernel. – Roger Dahl May 04 '12 at 14:30
  • For what seems to be your problem, you want to run a complete process in the background, or at least a thread, not just a CUDA kernel. – leftaroundabout May 04 '12 at 16:15
  • stg: some data in memory. The data are written from a CPU thread. I know that what I am trying does not comply with the purposes of a GPU, but there is a need to test some aspects of its behaviour. There is no confusion at all; all this mess is on purpose :) – amanda May 07 '12 at 10:17

2 Answers

2

The CUDA programming language and the CUDA architecture do not currently support infinite kernels. I suggest you take Roger's advice.

If you want to pursue this I suggest you add the following debug code to your kernel:

  1. Increment a variable in pinned memory every N clocks (you may want a different location for each SM), and
  2. Periodically read a memory location that the CPU can update to tell the kernel to exit.

This is a software watchdog.

You can use clock() or clock64() to control how often you do (1) and (2).
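The two steps above can be sketched as follows. This is an illustrative, hedged sketch only (the names, the clock interval, and the launch configuration are assumptions, not part of the answer); it assumes a device of compute capability 2.0+ for `clock64()`, and pinned, host-mapped memory for both the heartbeat and the exit flag:

```cuda
// Sketch of the software watchdog described above. The kernel spins on
// its work, periodically bumps a heartbeat counter that the host can
// monitor, and exits when the host sets a flag. "volatile" keeps the
// compiler from caching the flag in a register.
__global__ void watchdogKernel(volatile int *exitFlag,
                               volatile unsigned long long *heartbeat)
{
    unsigned long long last = clock64();
    const unsigned long long interval = 1000000ULL;  // "every N clocks"; tune per GPU

    while (true) {
        // ... actual kernel work goes here ...

        if (clock64() - last > interval) {
            last = clock64();
            if (threadIdx.x == 0)
                atomicAdd((unsigned long long *)heartbeat, 1ULL);
            __threadfence_system();   // flush the write out to system memory
            if (*exitFlag)            // host sets this to request a clean exit
                return;
        }
    }
}
```

On the host side, both pointers would come from `cudaHostAlloc(..., cudaHostAllocMapped)` plus `cudaHostGetDevicePointer()`; if the heartbeat stops advancing while the kernel is supposedly running, the watchdog has caught a hang.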

You can use cuda-gdb to debug your problem.

Infinite loops are not supported in the language. The compiler may be stripping code. You may want to review the PTX and SASS. If the compiler is generating bad code you can fake it out by making the compiler think there is a valid exit condition.

Greg Smith
  • It was a clever suggestion, but it didn't work. It doesn't work even if I remove the while(true) and replace it with for (int i=0; i<1000; i++). There is nothing wrong with the code (it is really simple, actually), and I executed the same code successfully on the host. I understand that the CUDA scheduler doesn't handle loops well, but I have seen many kernel examples running inside a small while or for loop. – amanda May 07 '12 at 18:36
  • If you are keeping the data in pinned system memory, make sure you issue a __threadfence_system() to flush the writes to system memory. If you are reading a value, make sure you mark it volatile so that the compiler does not reuse a previous read from a register. – Greg Smith May 07 '12 at 22:09
0

As already pointed out by @Greg Smith, the CUDA compiler does not generate proper assembly for infinite loops. Still, there are situations where an infinite kernel is the right solution, e.g. a background service kernel that receives updates from the host, pushed over host-mapped memory.

One workaround, which works as of CUDA 9.2:

volatile int infinity = 1;
while (infinity)
{
  ...
}

Running an infinite loop inside a divergent branch is obviously not a good idea. Other than that, the improper handling of the while (1) construct is, IMO, definitely a compiler bug.
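For completeness, here is a hedged, self-contained sketch of how the volatile-flag workaround combines with host-mapped memory to build the "service kernel" the question asks about. All names (`serviceKernel`, `run`, `mailbox`) and the single-thread launch are illustrative assumptions, and host/device polling over mapped memory like this is not a pattern the CUDA memory model formally guarantees:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// A long-running "service" kernel: loops until the host clears *run,
// and "serves" requests by resetting whatever the host wrote.
__global__ void serviceKernel(volatile int *run, volatile int *mailbox)
{
    while (*run) {                 // volatile read: loop has a visible exit condition
        if (*mailbox != 0) {
            *mailbox = 0;          // "serve" the request by resetting it
            __threadfence_system();// push the write out to system memory
        }
    }
}

int main()
{
    volatile int *run, *mailbox;
    cudaSetDeviceFlags(cudaDeviceMapHost);
    cudaHostAlloc((void **)&run, sizeof(int), cudaHostAllocMapped);
    cudaHostAlloc((void **)&mailbox, sizeof(int), cudaHostAllocMapped);
    *run = 1;
    *mailbox = 0;

    int *dRun, *dMailbox;
    cudaHostGetDevicePointer((void **)&dRun, (void *)run, 0);
    cudaHostGetDevicePointer((void **)&dMailbox, (void *)mailbox, 0);

    serviceKernel<<<1, 1>>>(dRun, dMailbox);   // launch is asynchronous

    *mailbox = 42;                 // post a request from the CPU...
    while (*mailbox != 0) { }      // ...and busy-wait until the GPU resets it

    *run = 0;                      // ask the kernel to exit
    cudaDeviceSynchronize();
    printf("kernel exited cleanly\n");
    return 0;
}
```

The key point is the same as in the answer above: because every read of the flag goes through a `volatile` pointer, the compiler cannot prove the loop is infinite and therefore cannot strip it.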

Dmitry Mikushin