
Is there a way to create NDArrays in DL4J so that operations run on the GPU?

For example, in PyTorch:

import torch

cuda0 = torch.device('cuda:0')
x = torch.tensor([1., 2.], device=cuda0)

I cannot find a way to specify the backend (CPU or GPU) when my system is equipped with a GPU.

talonmies

1 Answer


All you have to do is include the right ND4J backend version in your pom.xml. In this case it's generally:

<dependency>
    <groupId>org.nd4j</groupId>
    <artifactId>nd4j-cuda-10.2</artifactId>
    <version>1.0.0-beta7</version>
</dependency>

1.0.0-beta7 is the most recent DL4J version as of this writing; make sure to double-check the latest version on Maven Central. This dependency targets CUDA 10.2.

See more here: https://deeplearning4j.konduit.ai/config/backends
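
Once the CUDA backend is on the classpath, array creation itself is unchanged; ND4J picks up the backend at runtime. A minimal sketch to verify which backend is active (the class name GpuCheck is just for illustration):

import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class GpuCheck {
    public static void main(String[] args) {
        // With nd4j-cuda-10.2 on the classpath, INDArray ops run on the GPU.
        INDArray x = Nd4j.create(new float[]{1f, 2f});

        // Print which backend ND4J picked up at runtime.
        System.out.println(Nd4j.getBackend().getClass().getName());
        System.out.println(x.add(1.0));
    }
}

With the CUDA dependency as the only backend on the classpath, this should report the CUDA (JCublas) backend rather than the native CPU one.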

Adam Gibson
  • I need to pass `INDArray.data().pointer()` to my C++ backend. I also need to create an `NDArray` on the GPU via the `NDArray` constructor in `libnd4j`. – Nitin Trivedi Jul 01 '20 at 05:32
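
For reference on the pointer part of that comment, here is a minimal sketch of reading the underlying JavaCPP pointer off an INDArray; handing it to a custom C++ backend is left out, since that side is application-specific:

import org.bytedeco.javacpp.Pointer;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class PointerSketch {
    public static void main(String[] args) {
        INDArray x = Nd4j.create(new float[]{1f, 2f});

        // JavaCPP pointer backing the array's data buffer; with the
        // CUDA backend this refers to device-side memory.
        Pointer p = x.data().pointer();
        System.out.println("address: " + p.address());
    }
}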