
I have a large number of operations X to perform on a large number of items Y. Each operation X is fairly trivial: essentially just evaluating a bunch of AND and OR logic.

Each Func(X, Y) is naturally very quick; however, the sheer number of combinations of X and Y makes the entire operation take a long time.

PLINQ makes it much faster, but it is still relatively slow.
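
For context, this is roughly the shape of the CPU-side PLINQ version (operations, items, and Evaluate are illustrative stand-ins, not my actual types):

// using System.Linq;
// Count, for every operation, how many items satisfy its AND/OR conditions.
var counts = operations
    .AsParallel()
    .Select(op => items.Count(item => op.Evaluate(item)))
    .ToArray();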

I have spent several days researching various frameworks (Alea, Cudafy, GPULinq) to get this working on the GPU; however, I am finding that the GPU is not well suited to all of these operations.

The main problem is that at certain points the GPU kernel has to compute the intersection or the union of two integer arrays. This produces an unknown number of values: up to 2 * Length for a union, or possibly 0 for an intersection.

I could get around this by always allocating 2 * Length, but Length itself is not a constant either.
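
To make that concrete, this is roughly the padded-buffer shape I mean (Cudafy-style syntax; IntersectRows, rowLength, and the sorted-rows assumption are illustrative, not my real code): each thread writes into its own worst-case-sized slice of a flat output array and reports how many values it actually produced in a separate counts array.

[Cudafy]
public static void IntersectRows(GThread thread,
    int[] a, int[] b, int rowLength, int rowCount,
    int[] output, int[] counts)
{
    int row = thread.blockIdx.x * thread.blockDim.x + thread.threadIdx.x;
    if (row >= rowCount) return;

    int offset = row * rowLength;      // worst case: every value survives
    int i = 0, j = 0, written = 0;

    // Naive intersection of two sorted rows of equal length.
    while (i < rowLength && j < rowLength)
    {
        int va = a[offset + i];
        int vb = b[offset + j];
        if (va == vb) { output[offset + written++] = va; i++; j++; }
        else if (va < vb) i++;
        else j++;
    }

    counts[row] = written;             // the host reads this to know the real size
}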

How can I return a variable-sized int array in any GPU framework?

Telavian

1 Answer


Isn't it just a case of using the syntax:

double[] x = gpu.Allocate<double>(size);   // size of array based upon a variable or runtime value

and then returning it from the [Cudafy] method?
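
For illustration, roughly what that host-side pattern looks like with CUDAfy.NET (UnionKernel and the input arrays are placeholders; the point is that Allocate takes a size computed at run time, but it is called from CPU code before the launch):

using Cudafy;
using Cudafy.Host;
using Cudafy.Translator;

CudafyModule km = CudafyTranslator.Cudafy();
GPGPU gpu = CudafyHost.GetDevice(eGPUType.Cuda);
gpu.LoadModule(km);

int[] hostA = { 1, 3, 5, 7 };
int[] hostB = { 2, 3, 7, 9 };
int worstCase = hostA.Length + hostB.Length;     // runtime value, not a compile-time constant

int[] devA = gpu.CopyToDevice(hostA);
int[] devB = gpu.CopyToDevice(hostB);
int[] devOut = gpu.Allocate<int>(worstCase);     // allocated from host code
int[] devCount = gpu.Allocate<int>(1);

gpu.Launch(1, 1).UnionKernel(devA, devB, devOut, devCount);

int[] count = new int[1];
gpu.CopyFromDevice(devCount, count);
int[] result = new int[count[0]];
gpu.CopyFromDevice(devOut, 0, result, 0, count[0]);   // copy back only what was written
gpu.FreeAll();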

Jack
  • Can't do that in code that is running on the GPU. You would have to know the size ahead of time, allocate it, and then pass it to the GPU. – Telavian Oct 19 '16 at 20:36