Suppose I have a PyTorch Variable on the GPU:
var = Variable(torch.rand((100,100,100))).cuda()
What's the best way to copy (not bridge) this variable to a NumPy array?
var.clone().data.cpu().numpy()
or
var.data.cpu().numpy().copy()
In a quick benchmark, .clone() was slightly faster than .copy(). However, .clone() + .numpy() will create a PyTorch Variable plus a NumPy bridge, while .copy() will create a NumPy bridge plus a new NumPy array.
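As a minimal sketch of both approaches (assuming a modern PyTorch where Variable is merged into Tensor, so .detach() replaces .data; the GPU step is guarded so it also runs on CPU-only machines), either way the resulting array is a true copy, which you can verify by mutating it:

```python
import torch

# In modern PyTorch, Variable is merged into Tensor; plain tensors behave the same.
var = torch.rand(100, 100, 100)
if torch.cuda.is_available():
    var = var.cuda()  # move to GPU when one is present

# Option 1: clone on-device, then move to CPU and bridge to NumPy.
arr1 = var.clone().detach().cpu().numpy()

# Option 2: move to CPU, bridge to NumPy, then take an independent copy.
arr2 = var.detach().cpu().numpy().copy()

# Both are true copies: mutating them does not touch the original tensor.
arr1[0, 0, 0] = -1.0
arr2[0, 0, 0] = -1.0
# torch.rand draws from [0, 1), so a -1 here would mean the write leaked back.
assert float(var[0, 0, 0]) >= 0.0
```

Note that if var still requires grad, calling .numpy() directly raises an error, which is why .detach() (or .data in the old Variable API) appears in both chains.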