
I wrote the following simple CUDA kernel:

__global__ void pr_kernel(float* O, const float* I, const float* W, int N)
{
  int x = threadIdx.x;
  float sum;
  int i;
  if (x < N) {
    for (i = 0; i < N; i++) {
      if (i == x) continue;
      sum += W[x*N+i] * I[x];
    }
    O[x] = (0.15 / N) + 0.85 * sum;
  }
}

The variables are allocated in Python as follows:

N      = np.int32(4)
W      = np.float32(np.asarray(
         [0, 1, 0, 1, 1, 0, 1, 1,
          0, 1, 0, 1, 1, 1, 0]))
I      = np.float32(np.asarray(
         [0.25, 0.25, 0.25, 0.25]))
O      = np.float32(np.zeros(N))

I'm transferring the variables using gpuarray.to_gpu, and I'm calling the kernel on a Tesla C2070 with the following line:

pr_kernel(O_d, I_d, W_d, N_d, block=blocksize, grid=gridsize)

Where:

blocksize = (128, 1, 1)
gridsize = (1, 1)

I get the error message:

pycuda.driver.LaunchError: cuLaunchKernel failed: launch out of resources.

This happens even if I reduce blocksize to something like (8, 1, 1). I can run other CUDA programs on the GPU with a blocksize of (512, 1, 1) so I'm confident this is not due to a GPU configuration issue.

What am I doing wrong? Thanks for any help.

user2398029
  • this can't be your actual kernel. Where is tid defined? Where is (little) i defined? Why not just cut and paste in your __actual__ kernel? – Robert Crovella Nov 04 '12 at 23:00
  • Sorry, actual kernel is on a VirtualBox and I posted a slightly outdated version from my local machine since I can't copy paste. – user2398029 Nov 04 '12 at 23:22
  • Is saxpy_kernel the same as pr_kernel? – dreamcrash Nov 04 '12 at 23:26
  • Yes sorry again, same problem as above. Some starter code that I modified. – user2398029 Nov 04 '12 at 23:51
  • I don't think it explains your problem, but you may want to initialize sum to some known value before adding to it. The error message you're getting may be due to your actual launch configuration (e.g. number of parameters, or type of parameters) as discussed [here](http://stackoverflow.com/questions/6892280/how-do-i-diagnose-a-cuda-launch-failure-due-to-being-out-of-resources). Also this [one](http://stackoverflow.com/questions/13186200/why-is-my-rather-trivial-cuda-program-erring-with-certain-arguments) shows a mistake that can be made in parameter definition for cuda kernels in pycuda. – Robert Crovella Nov 05 '12 at 01:05

2 Answers


The problem was that I was transferring the integer N to the GPU using gpuarray.to_gpu, when I should have been passing N directly to the pr_kernel call.
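For anyone checking their GPU output after making that fix: the kernel's arithmetic can be mirrored on the CPU with plain numpy (with sum initialized to 0, as Robert Crovella suggests). This is just a sketch; the 16-element 4x4 W below is an assumption, since the question's listing shows only 15 values.

```python
import numpy as np

def pr_reference(W, I, N):
    # CPU mirror of pr_kernel's loop, with the accumulator
    # explicitly initialized to 0 (the kernel leaves it uninitialized)
    O = np.zeros(N, dtype=np.float32)
    for x in range(N):
        s = np.float32(0)
        for i in range(N):
            if i == x:
                continue
            s += W[x * N + i] * I[x]
        O[x] = np.float32(0.15 / N) + np.float32(0.85) * s
    return O

N = np.int32(4)  # pass this directly to pr_kernel, not via gpuarray.to_gpu
# assumed full 4x4 matrix -- the last row's final entries are a guess
W = np.float32([0, 1, 0, 1,
                1, 0, 1, 1,
                0, 1, 0, 1,
                1, 1, 1, 0])
I = np.float32([0.25, 0.25, 0.25, 0.25])
print(pr_reference(W, I, int(N)))
```

Comparing this against the array copied back from the device is a quick way to confirm both the launch and the argument passing are correct.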

user2398029

I ran into a similar problem when the type in the kernel's parameter definition differed from the type of the argument I passed. The size mismatch in the marshalled arguments probably makes the launch appear to need more resources than are available, producing this error.
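The size mismatch is easy to see on the host side. For example, the kernel above declares `int N` (4 bytes); passing a plain Python int or an np.int64 makes pycuda marshal 8 bytes into the argument buffer, which can surface as "launch out of resources":

```python
import numpy as np

# The kernel parameter is `int N`, i.e. a 4-byte value.
n_ok = np.int32(4)   # matches the kernel's declared parameter size
n_bad = np.int64(4)  # twice as wide -- misaligns the argument buffer

print(n_ok.nbytes, n_bad.nbytes)  # 4 8
```

Checking `.nbytes` on each scalar argument against the kernel signature is a quick sanity check before launching.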

kon psych