Use torch.as_tensor instead of torch.load, and you won't have to create a buffer. See this question and this answer.
If you want the PyTorch tensor to be a copy of your numpy array, use torch.tensor(arr). If you want the torch.Tensor to share the same memory buffer, use torch.as_tensor(arr); PyTorch will then reuse the buffer if it can.
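A quick sketch to illustrate the difference (variable names are mine, not from the question):

```python
import numpy as np
import torch

arr = np.array([1.0, 2.0, 3.0])

# torch.tensor always copies the data
copied = torch.tensor(arr)

# torch.as_tensor reuses the numpy buffer when dtype and device allow it
shared = torch.as_tensor(arr)

# Mutate the numpy array after creating both tensors
arr[0] = 99.0
print(copied[0].item())  # 1.0  -- the copy is unaffected
print(shared[0].item())  # 99.0 -- the shared tensor sees the write
```

Note that the sharing only happens when no conversion is needed; if torch.as_tensor has to change dtype or device, it will copy too.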
If you really want to make a buffer from your numpy array, use the BytesIO class from io and initialize it with arr.tobytes(), like stream = io.BytesIO(arr.tobytes()). YMMV though; I just tried torch.load with a stream object built this way and torch complained:
import io
import numpy as np
import torch

a = np.array([3, 4, 5])
stream = io.BytesIO(a.tobytes())  # implements seek()
torch.load(stream)
---------------------------------------------------------------------------
UnpicklingError Traceback (most recent call last)
...
UnpicklingError: invalid load key, '\x03'.
If you want to get that to work, you have to serialize the data in the format torch.load actually expects: the pickle-based format written by torch.save, not numpy's raw bytes. Good luck.