
I am using reduction code essentially identical to the examples in the docs. The code below should return 2^3 + 2^3 = 16, but it instead returns 9. What did I do wrong?

import numpy
import pycuda.reduction as reduct
import pycuda.gpuarray as gpuarray
import pycuda.autoinit
from pycuda.compiler import SourceModule as module

newzeros = [{1,2,3},{4,5,6}]
gpuSum = reduct.ReductionKernel(numpy.uint64, neutral="0", reduce_expr="a+b", map_expr="1 << x[i]", arguments="int* x")
mylengths = gpuarray.to_gpu(numpy.array(map(len, newzeros), dtype="uint64"))
sumfalse = gpuSum(mylengths).get()
print sumfalse
Elliot Gorokhovsky

1 Answer


I just figured it out. The argument list used when defining the kernel should be `unsigned long *x`, not `int *x`. I was using 64-bit integers everywhere else, so the kernel reinterpreted the 64-bit data as 32-bit integers. Since the reduction still runs over n = 2 elements, it only reads the first two 32-bit words of the buffer, which on a little-endian machine are 3 and 0, giving 2^3 + 2^0 = 9 instead of 2^3 + 2^3 = 16.
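To see where the 9 comes from without a GPU, here is a CPU-side sketch using plain NumPy to mimic the type mismatch (assuming a little-endian machine, which CUDA devices are):

```python
import numpy as np

# The asker's data: the length of each set, stored as 64-bit integers.
lengths = np.array([3, 3], dtype=np.uint64)

# Declaring the kernel argument as "int *x" makes the kernel reinterpret
# the same bytes as 32-bit integers; view() does the same reinterpretation.
as_int32 = lengths.view(np.int32)
print(as_int32)  # on little-endian: [3 0 3 0]

# The reduction still runs over n = len(lengths) = 2 elements, so the
# kernel only sees the first two 32-bit words: 3 and 0.
seen = as_int32[:2]
result = int((np.uint64(1) << seen.astype(np.uint64)).sum())
print(result)  # 2**3 + 2**0 = 9, the wrong answer observed
```

Declaring the argument as `unsigned long *x` (or, more portably in CUDA C, `unsigned long long *x`) makes the kernel's element size match the `uint64` array, so each of the two elements is read as 3 and the sum is 16.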
