I have to create N large matrices of size M x M, with M = 100'000, on a cluster. I can create them one by one. Usually I would first define a tensor
mat_all = torch.zeros((N,M,M))
and then I would fill mat_all as follows:
for i in range(N):
    tmp = create_matrix(M, M)
    mat_all[i, :, :] = tmp
where the function create_matrix creates a square matrix of size M.
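
For reference, here is a minimal self-contained version of that pattern, with a stand-in create_matrix (the real one builds the matrices on the cluster) and a small M so it actually runs:

import torch

def create_matrix(rows, cols):
    # stand-in for my real routine, which builds each matrix on the cluster
    return torch.randn(rows, cols)

N, M = 10, 1_000                    # small M here; in my case M = 100_000
mat_all = torch.zeros((N, M, M))
for i in range(N):
    mat_all[i, :, :] = create_matrix(M, M)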
My problem is that if I do this, I run into memory issues when creating the big tensor mat_all with torch.zeros. I do not have these issues when I create the matrices one by one with create_matrix.
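
To give a sense of the scale involved, here is a rough estimate of the footprint (assuming the default float32 dtype; N = 10 is just an illustrative value):

M = 100_000
N = 10                                  # illustrative value
bytes_per_matrix = M * M * 4            # float32 -> 4 bytes per element
print(bytes_per_matrix / 1e9)           # about 40 GB for one M x M matrix
print(N * bytes_per_matrix / 1e9)       # about 400 GB for the full (N, M, M) tensor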
I was wondering if there is a way to have a tensor like mat_all that holds the N matrices of size M x M, but in such a way that I do not run into memory issues.
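
To make the question concrete, something along the following lines is the kind of interface I am after; this is only a sketch using numpy.memmap (a file-backed array), which I have not tested at this size:

import numpy as np

M = 100_000
N = 10  # illustrative value

# file-backed array: written slices go to disk instead of all staying in RAM
mat_all = np.memmap("mat_all.dat", dtype=np.float32, mode="w+", shape=(N, M, M))

for i in range(N):
    tmp = create_matrix(M, M)        # assumed to return a CPU tensor of shape (M, M)
    mat_all[i, :, :] = tmp.numpy()   # copy the slice into the mapped file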