I'm trying to get an in-depth understanding of how torch.from_numpy() works. Consider this snippet:
import numpy as np
import torch

arr = np.zeros((3, 3), dtype=np.float32)
t = torch.from_numpy(arr)
print("arr: {0}\nt: {1}\n".format(arr, t))

# modify the array in place; the tensor should see the change
arr[0, 0] = 1
print("arr: {0}\nt: {1}\n".format(arr, t))
print("id(arr): {0}\nid(t): {1}".format(id(arr), id(t)))
The output looks like this:
arr: [[0. 0. 0.]
 [0. 0. 0.]
 [0. 0. 0.]]
t: tensor([[0., 0., 0.],
        [0., 0., 0.],
        [0., 0., 0.]])

arr: [[1. 0. 0.]
 [0. 0. 0.]
 [0. 0. 0.]]
t: tensor([[1., 0., 0.],
        [0., 0., 0.],
        [0., 0., 0.]])

id(arr): 2360964353040
id(t): 2360964352984
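The sharing also appears to go the other way: writing through the tensor shows up in the array. A minimal check, reusing the same setup:

import numpy as np
import torch

arr = np.zeros((3, 3), dtype=np.float32)
t = torch.from_numpy(arr)

# write through the tensor this time; the change should be visible in arr
t[1, 1] = 2
print(arr[1, 1])  # prints 2.0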
This is part of the doc of torch.from_numpy():

from_numpy(ndarray) -> Tensor

Creates a Tensor from a numpy.ndarray. The returned tensor and ndarray share the same memory. Modifications to the tensor will be reflected in the ndarray and vice versa. The returned tensor is not resizable.
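If I read "not resizable" correctly, an in-place resize that needs more storage than the shared buffer provides should fail. A minimal sketch of that check (I'm assuming a RuntimeError is what gets raised):

import numpy as np
import torch

arr = np.zeros((3, 3), dtype=np.float32)
t = torch.from_numpy(arr)

try:
    t.resize_(4, 4)  # 16 elements would not fit in the shared 3x3 buffer
except RuntimeError as e:
    print("resize_ failed:", e)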
And this is taken from the doc of id():
Return the identity of an object.
This is guaranteed to be unique among simultaneously existing objects. (CPython uses the object's memory address.)
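My current guess is that id() identifies the Python wrapper objects (the ndarray and the Tensor), not the buffer they wrap. If that's right, the underlying data addresses should still match; t.data_ptr() and arr.__array_interface__['data'][0] look like the relevant ways to read them:

import numpy as np
import torch

arr = np.zeros((3, 3), dtype=np.float32)
t = torch.from_numpy(arr)

# addresses of the two Python wrapper objects -- always distinct
print(id(arr), id(t))

# addresses of the underlying data buffers -- these should be equal
# if the memory really is shared
print(arr.__array_interface__['data'][0])
print(t.data_ptr())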
So here comes the question: since the ndarray arr and the tensor t share the same memory, why do id(arr) and id(t) report different addresses?

Any ideas/suggestions?