
I am a new RAPIDS learner. I installed the RAPIDS 23.02 framework on Ubuntu with a 6 GB GPU and 32 GB of RAM. When I run a program that uses only RAPIDS for acceleration, the GPU uses just 3 GB of memory (per nvidia-smi) and only 7.5 GB of RAM.

I have tried torch.multiprocessing, but it always causes a memory overload and the process shuts down. Is there a good reference example for PyCUDA multithreading? Roughly the pattern I tried is sketched below.
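
A simplified sketch of what I am attempting (the file name, column name, and worker count are placeholders, not my real code); each worker loads its own copy of the data onto the GPU, which is presumably why memory overloads:

```python
import torch.multiprocessing as mp
import cudf

def worker(rank, csv_path):
    # every process loads the full file onto the GPU -> memory overload with several workers
    df = cudf.read_csv(csv_path)
    result = df.groupby("key").sum()
    print(rank, len(result))

if __name__ == "__main__":
    mp.set_start_method("spawn", force=True)
    mp.spawn(worker, args=("data.csv",), nprocs=4)
```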

So, what other approaches could raise memory utilization enough to get two or three times the speed (within the GPU memory limit)?
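
One thing I found in the RAPIDS docs is the RMM pool allocator. Is something like the following the intended way to make better use of GPU memory? (Just a sketch; the pool size is my guess for a 6 GB card.)

```python
import rmm
import cudf

# Pre-allocate a memory pool so later cuDF allocations reuse GPU memory
# instead of going through many small cudaMalloc calls.
rmm.reinit(pool_allocator=True, initial_pool_size=4 * 1024**3)

df = cudf.read_csv("data.csv")  # subsequent allocations come from the pool
print(df.memory_usage(deep=True).sum())
```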

I also want to know whether an API for the shortest path between a SOURCE and a TARGET has been published in cuGraph 23.02 for Python. Sorry, I just can't find it in the documentation.
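
To clarify what I mean: I do see cugraph.sssp in the docs, and I can reconstruct a single source-to-target path from its predecessor column as sketched below, but I am asking whether there is a built-in source-to-target call instead (the graph data here is just a toy example):

```python
import cudf
import cugraph

edges = cudf.DataFrame({
    "src": [0, 0, 1, 2],
    "dst": [1, 2, 3, 3],
    "weight": [1.0, 4.0, 2.0, 1.0],
})
G = cugraph.Graph()
G.from_cudf_edgelist(edges, source="src", destination="dst", edge_attr="weight")

# sssp returns distance and predecessor for every vertex reachable from SOURCE
df = cugraph.sssp(G, source=0)

# walk predecessors back from TARGET to recover the actual path on the host
pred = df.to_pandas().set_index("vertex")["predecessor"]
target = 3
path = [target]
while path[-1] != 0:
    path.append(int(pred.loc[path[-1]]))
path.reverse()
print(path)
```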
