My goal is to write a custom reduction kernel that returns both the argmax along each row and the difference between the max and the submax (second-largest value). I am new to CUDA and I am working with cupy. As a first step, I tried to write my own max(axis=1) kernel. Sometimes it works, but for large matrices it crashes.
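For reference, the result I'm ultimately after can be computed with library calls alone. This is a host-side sketch in plain NumPy (the helper name max_gap_argmax is mine, just for illustration), which pins down the expected semantics even though it isn't the custom kernel I want:

```python
import numpy as np

def max_gap_argmax(a):
    """Per row: index of the max, and the (max - second-largest) gap."""
    idx = np.argmax(a, axis=1)
    # np.partition with kth = n-2 puts the second-largest value of each row
    # at column -2 and the largest at column -1
    top2 = np.partition(a, a.shape[1] - 2, axis=1)[:, -2:]
    gap = top2[:, 1] - top2[:, 0]  # max - submax
    return idx, gap

a = np.array([[1, 5, 3],
              [9, 2, 9]])
idx, gap = max_gap_argmax(a)
print(idx)  # argmax per row
print(gap)  # max - submax per row (0 when the max is tied)
```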
import cupy as cp
import numpy as np

maxval2d = cp.RawKernel(r'''
#define THREADS_PER_BLOCK (32*32)
extern "C" __global__
void my_maxval2d(unsigned int cols, int* src, int* dst) {
    __shared__ int block_data[THREADS_PER_BLOCK];
    unsigned int row = blockDim.y * blockIdx.y + threadIdx.y;
    unsigned int col = blockDim.x * blockIdx.x + threadIdx.x;
    unsigned int threadId = threadIdx.y * blockDim.x + threadIdx.x;
    unsigned int i = row * cols + col;
    block_data[threadId] = src[i];
    __syncthreads();
    // do reduction in shared mem
    for (unsigned int stride = blockDim.x/2; stride > 0; stride >>= 1) {
        if (threadIdx.x < stride) {
            int& a = block_data[threadId];
            const int& b = block_data[threadId + stride];
            if (b > a) {
                a = b;
            }
        }
        __syncthreads();
    }
    // write result for this block to global memory
    if (threadIdx.x == 0) {
        unsigned int left_col = row * cols + blockIdx.x;
        dst[left_col] = block_data[blockDim.x * threadIdx.y];
    }
}
''', 'my_maxval2d')

cols = 32*32
rows = 32
cp.random.seed(1)
src = cp.random.random((rows, cols))
src = (src*900 + 100).astype(cp.int32)  # make integers from 100-999
dst = cp.zeros((rows, cols), dtype=cp.int32)
print('baseline:', src.max(axis=1)[0])

threads = 32
remaining = cols
counter = 0
while remaining > 1:
    block_dim = (remaining//threads, rows)
    thread_dim = (threads, rows)
    print(f'loop {counter}, remaining: {remaining}, block_dim: {block_dim}, thread_dim: {thread_dim}')
    maxval2d(block_dim, thread_dim, (cols, src, dst))
    remaining //= threads
    src, dst = dst, src
    counter += 1
print('custom:', dst[0,0])
The basic outline of the kernel was taken from the CUDA Webinar slides. I know that this code may give incorrect results when the matrix dimensions are not powers of 32, but for my (32, 1024) matrix I expect the results:
baseline: 996
loop 0, remaining: 1024, block_dim: (32, 32), thread_dim: (32, 32)
loop 1, remaining: 32, block_dim: (1, 32), thread_dim: (32, 32)
custom: 996
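For comparison, the multi-pass scheme the launch loop is trying to implement can be emulated on the host. Here is a NumPy sketch (reshape/max standing in for the kernel, and assuming cols is a power of threads), showing how each pass collapses groups of `threads` adjacent columns into partial maxima in the leftmost columns:

```python
import numpy as np

def multi_pass_row_max(src, threads=32):
    # each pass reduces groups of `threads` adjacent columns to one
    # partial max per group, written into the leftmost columns
    buf = src.copy()
    remaining = buf.shape[1]
    while remaining > 1:
        groups = remaining // threads
        partial = buf[:, :remaining].reshape(buf.shape[0], groups, threads).max(axis=2)
        buf[:, :groups] = partial
        remaining = groups
    return buf[:, 0]

rng = np.random.default_rng(1)
a = rng.integers(100, 1000, size=(32, 1024))
print(np.array_equal(multi_pass_row_max(a), a.max(axis=1)))
```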
And indeed, when I set cols = 32 and print dst[0,0], I get:
baseline: 994
loop 0, remaining: 32, block_dim: (1, 32), thread_dim: (32, 32)
custom: 994
But with a (32, 1024) matrix I get:
---------------------------------------------------------------------------
CUDARuntimeError Traceback (most recent call last)
<ipython-input-17-858a0ab67cd5> in <module>()
58 src, dst = dst, src
59 counter += 1
---> 60 print('custom:', src[0,0])
cupy/core/core.pyx in cupy.core.core.ndarray.__str__()
cupy/core/core.pyx in cupy.core.core.ndarray.get()
cupy/cuda/memory.pyx in cupy.cuda.memory.MemoryPointer.copy_to_host()
cupy/cuda/runtime.pyx in cupy.cuda.runtime.memcpy()
cupy/cuda/runtime.pyx in cupy.cuda.runtime.check_status()
CUDARuntimeError: cudaErrorIllegalAddress: an illegal memory access was encountered
My intuition says that somewhere in the kernel it is stepping outside of bounds. But I can't understand where that might be. How can I fix this code to get the expected results?
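One way to check that intuition without guessing is to replay the kernel's index arithmetic on the host for a given launch configuration and compare the largest linear index any thread can compute against the array size. This is a hypothetical debugging helper (max_linear_index is not part of cupy), mirroring the index expressions in the kernel above:

```python
def max_linear_index(grid, block, cols):
    # mirrors the kernel's arithmetic:
    #   row = blockDim.y * blockIdx.y + threadIdx.y
    #   col = blockDim.x * blockIdx.x + threadIdx.x
    #   i   = row * cols + col
    gx, gy = grid
    bx, by = block
    max_row = by * (gy - 1) + (by - 1)
    max_col = bx * (gx - 1) + (bx - 1)
    return max_row * cols + max_col

# first launch of the (32, 1024) case: grid (32, 32), block (32, 32)
print(max_linear_index((32, 32), (32, 32), 1024))  # largest i any thread computes
print(32 * 1024)                                   # number of elements in src
```

If the first number exceeds the second, some thread is reading or writing past the end of the array for that launch.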