I would like to know a method to generate a Cartesian product using CUDA on the GPU.
Simple case:
We have two lists:
A = {0.0, 0.1, 0.2} and B = {0.0, 0.1, 0.2}
A x B = C = { {0.0, 0.0}, {0.0, 0.1}, {0.0, 0.2}, {0.1, 0.0}, {0.1, 0.1} ...}
How can I generate the list of lists C on the GPU? And how can this be done for N lists with M values each?
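Here is a rough sketch of what I imagine the two-list case could look like. The kernel name cartesianPair, the float2 output layout, and the launch configuration are just my own guesses, and error checking is omitted for brevity:

#include <cstdio>
#include <cuda_runtime.h>

// Each thread computes one pair of the Cartesian product A x B.
__global__ void cartesianPair(const float* A, int lenA,
                              const float* B, int lenB,
                              float2* C)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < lenA * lenB) {
        int i = idx / lenB;   // index into A
        int j = idx % lenB;   // index into B
        C[idx] = make_float2(A[i], B[j]);
    }
}

int main()
{
    const int lenA = 3, lenB = 3;
    float hA[lenA] = {0.0f, 0.1f, 0.2f};
    float hB[lenB] = {0.0f, 0.1f, 0.2f};

    float *dA, *dB;
    float2 *dC;
    cudaMalloc(&dA, lenA * sizeof(float));
    cudaMalloc(&dB, lenB * sizeof(float));
    cudaMalloc(&dC, lenA * lenB * sizeof(float2));
    cudaMemcpy(dA, hA, lenA * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, lenB * sizeof(float), cudaMemcpyHostToDevice);

    int total = lenA * lenB;
    cartesianPair<<<(total + 255) / 256, 256>>>(dA, lenA, dB, lenB, dC);

    float2 hC[lenA * lenB];
    cudaMemcpy(hC, dC, total * sizeof(float2), cudaMemcpyDeviceToHost);
    for (int k = 0; k < total; ++k)
        printf("{%.1f, %.1f}\n", hC[k].x, hC[k].y);

    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}

The idea is to flatten the output: thread idx computes the pair (A[idx / lenB], B[idx % lenB]), so B varies fastest. I am not sure whether this is a sensible approach on a GPU, which is why I am asking.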
The terminology I am using might be incorrect, so let me explain what I mean:
I am essentially trying to generate a truth table: a binary truth table would look like
A B
0 0
0 1
1 0
1 1
where A has the two values {0, 1} and B has {0, 1}. In my case A and B have more than two values; for starters, each has 31 values (0 through 30). For every value in set A there are 31 values in set B, and I need to enumerate all pairs and store them in memory.
Other than that, I need to extend the algorithm to N lists instead of 2 lists (A and B).
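Is something like the following mixed-radix decoding a reasonable way to generalize it? This is only a sketch under my own assumptions: all N lists have the same length M and are concatenated into one array, and the kernel name cartesianN and the memory layout are made up by me. Each thread decodes its row index as N base-M digits:

#include <cstdio>
#include <cuda_runtime.h>

__global__ void cartesianN(const float* lists, // concatenated: list k starts at lists + k*M
                           int N, int M,
                           float* C,           // output: total rows of N values each
                           long long total)    // total = M^N
{
    long long idx = blockIdx.x * (long long)blockDim.x + threadIdx.x;
    if (idx >= total) return;
    long long rem = idx;
    // Decode the row index as N base-M digits; the last list varies
    // fastest, matching the truth-table ordering above.
    for (int k = N - 1; k >= 0; --k) {
        int digit = (int)(rem % M);
        rem /= M;
        C[idx * N + k] = lists[(long long)k * M + digit];
    }
}

int main()
{
    const int N = 3, M = 2;                 // 3 binary lists -> 8 rows
    long long total = 1;
    for (int k = 0; k < N; ++k) total *= M; // total = M^N

    float hLists[N * M] = {0, 1,  0, 1,  0, 1};
    float *dLists, *dC;
    cudaMalloc(&dLists, N * M * sizeof(float));
    cudaMalloc(&dC, total * N * sizeof(float));
    cudaMemcpy(dLists, hLists, N * M * sizeof(float), cudaMemcpyHostToDevice);

    cartesianN<<<(unsigned)((total + 255) / 256), 256>>>(dLists, N, M, dC, total);

    float* hC = new float[total * N];
    cudaMemcpy(hC, dC, total * N * sizeof(float), cudaMemcpyDeviceToHost);
    for (long long r = 0; r < total; ++r) {
        for (int k = 0; k < N; ++k) printf("%g ", hC[r * N + k]);
        printf("\n");
    }
    delete[] hC;
    cudaFree(dLists); cudaFree(dC);
    return 0;
}

One thing I am unsure about is memory: the output has M^N rows, so with M = 31 this only fits on the GPU for small N. Is there a better way to structure this?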