
I am using uthash (http://uthash.sourceforge.net/) as the hash table implementation in my CUDA C program.

I have a bunch of keys, say allkeys[100]. What I would like to do is perform a parallel hash table lookup using those 100 keys and return a result array called results[100]. Basically, launch a grid with x-dimension 100, where each block performs one hash table lookup and stores the result in the results array.
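Roughly, the launch pattern I have in mind looks like this. This is only a sketch: hash_entry is a placeholder struct, and a plain linear scan stands in for the real hash lookup, just so the kernel shape is clear.

    /* Placeholder entry type; the real one is whatever struct uthash manages. */
    typedef struct {
        int key;
        int value;
    } hash_entry;

    /* Stand-in for my __device__ searchhashtable(): a plain linear scan,
       just so the launch pattern compiles. Not the actual hash lookup. */
    __device__ int searchhashtable(const hash_entry *table, int n, int key)
    {
        for (int i = 0; i < n; ++i)
            if (table[i].key == key)
                return table[i].value;
        return -1;                      /* not found */
    }

    __global__ void lookup_kernel(const hash_entry *table, int n,
                                  const int *keys, int *results)
    {
        int i = blockIdx.x;             /* gridDim.x == 100: one key per block */
        results[i] = searchhashtable(table, n, keys[i]);
    }

    /* Host side, once d_table, d_allkeys and d_results are on the device:
       lookup_kernel<<<100, 1>>>(d_table, n_entries, d_allkeys, d_results); */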

Therefore, what I have tried so far is: cudaMalloc the hash table in device memory (number of entries in the hash table × size of one struct defining a hash table entry, including its handle), then cudaMemcpy the host hash table to the device hash table.
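The copy step itself looks roughly like this (reusing the placeholder hash_entry from the sketch above). Note that this copies the structs byte-for-byte, so any pointers stored inside them would still point at host memory.

    /* Allocate the table on the device and copy the host table into it. */
    hash_entry *upload_table(const hash_entry *h_table, int num_entries)
    {
        hash_entry *d_table = NULL;
        size_t bytes = (size_t)num_entries * sizeof(hash_entry);
        cudaMalloc((void **)&d_table, bytes);
        cudaMemcpy(d_table, h_table, bytes, cudaMemcpyHostToDevice);
        return d_table;
    }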

However, in my __device__ searchhashtable(int key) function I get an error saying:

    error: calling a host function ("memcmp") from a __device__/__global__ function is not allowed

I went through the uthash.h implementation and can see that it uses the string.h library; in particular, it fails at the memcmp call.
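From what I can tell, a device-side equivalent of that byte comparison would have to look something like the sketch below. device_memcmp is just an illustrative name; this is not a drop-in patch for uthash.h, only the kind of replacement that would be needed in device code.

    /* Byte-wise comparison usable from device code, mirroring memcmp's
       return convention (<0, 0, >0). */
    __device__ int device_memcmp(const void *a, const void *b, size_t n)
    {
        const unsigned char *pa = (const unsigned char *)a;
        const unsigned char *pb = (const unsigned char *)b;
        for (size_t i = 0; i < n; ++i) {
            if (pa[i] != pb[i])
                return (int)pa[i] - (int)pb[i];
        }
        return 0;
    }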

What's the best way to handle this?

  • You have no choice but to reimplement an equivalent function yourself. The "best" way depends completely on your application and data. – talonmies Jul 25 '12 at 15:57
  • A CPU based hash algorithm won't work well on the GPU even if you do get it to compile. It's hard to get good performance with hash tables on the GPU because a naive implementation will have each thread reading from a different location in memory, breaking all coalescing. You should see if you can do the hash lookups on the CPU and pass an array with the resolved values to the GPU. In the array, you can pass duplicated entries as necessary so that the memory accesses on the GPU stay coalesced. – Roger Dahl Jul 25 '12 at 18:08
  • In CUDA by Example, there is a GPU hash table implementation. I have attempted to try that out; however, it looks like I get different answers when using that implementation compared to using UTHASH. Wondering if this is a collisions issue. The GPU implementation looks to be very naive. – dparkar Aug 03 '12 at 20:31
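Following the CPU-side suggestion in the comments above, here is a minimal sketch of resolving the lookups with uthash on the host and copying only the resolved values to the device. The struct and function names are placeholders, and -1 is an arbitrary "not found" marker.

    #include <cuda_runtime.h>
    #include "uthash.h"

    /* Placeholder uthash-managed entry with an int key and an int value. */
    typedef struct my_entry {
        int key;
        int value;
        UT_hash_handle hh;
    } my_entry;

    /* Resolve all 100 keys on the host, then copy the resolved values into a
       device buffer d_resolved that has already been cudaMalloc'd. */
    void resolve_and_upload(my_entry *table, int *allkeys, int *d_resolved)
    {
        int resolved[100];
        for (int i = 0; i < 100; ++i) {
            my_entry *e = NULL;
            HASH_FIND_INT(table, &allkeys[i], e);       /* host-side uthash lookup */
            resolved[i] = (e != NULL) ? e->value : -1;  /* -1 marks "not found" */
        }
        cudaMemcpy(d_resolved, resolved, sizeof(resolved), cudaMemcpyHostToDevice);
    }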

0 Answers