
I have to create N large matrices, of size M x M, with M = 100'000, on a cluster. I can create them one by one. Usually I would first define a tensor

mat_all = torch.zeros((N, M, M))

And then I would fill mat_all as follows:

for i in range(N):
    tmp = create_matrix(M, M)   # build one M x M matrix at a time
    mat_all[i, :, :] = tmp      # copy it into its slot in the big tensor

where the function create_matrix creates a square matrix of size M.
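
For concreteness, a minimal, hypothetical stand-in for create_matrix (the actual implementation is not shown in the post):

import torch

def create_matrix(nrows, ncols):
    # Hypothetical placeholder; the real create_matrix is not given above.
    return torch.randn(nrows, ncols)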

My problem is: if I do that, I run into memory issues when creating the big tensor mat_all with torch.zeros. I do not have these issues when I create the matrices one by one with create_matrix.
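
For scale, a quick back-of-the-envelope (assuming the default float32 dtype, 4 bytes per element, which the post does not state explicitly):

M = 100_000
per_matrix = M * M * 4        # 4e10 bytes, about 37.25 GiB for ONE matrix
print(per_matrix / 2**30)     # ~37.25; mat_all needs N times this in RAM

So even a modest N makes a single in-RAM allocation of shape (N, M, M) infeasible on most nodes, while each individual matrix may still fit.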

I was wondering if there is a way to have a tensor like mat_all that holds all N matrices of size M x M, but without running into these memory issues.
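
One common workaround (not from the post; a minimal sketch that assumes create_matrix returns a CPU array or tensor, and that roughly N * M * M * 4 bytes of disk space are available) is to back mat_all with a file via numpy.memmap, so only the slices being touched are resident in RAM:

import numpy as np
import torch

N, M = 4, 100_000      # illustrative values; N is not given in the post
path = "mat_all.dat"   # hypothetical file name for the backing store

# Disk-backed array: indexing works like a normal ndarray, but the data
# lives in the file and the OS pages it in and out as needed.
mat_all = np.memmap(path, dtype=np.float32, mode="w+", shape=(N, M, M))

for i in range(N):
    tmp = create_matrix(M, M)   # the asker's function (assumed CPU output)
    mat_all[i, :, :] = tmp      # written through to the file
    mat_all.flush()             # persist this slice before the next one

# A single slice can still be wrapped as a torch tensor without copying:
mat0 = torch.from_numpy(np.asarray(mat_all[0]))

Whether this is fast enough depends on the filesystem; on a cluster, a node-local scratch disk is usually a better place for the backing file than a networked filesystem.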

  • So by *creating the big tensor*, is your very first line the issue, since you are assigning elements of the tensor in a loop? And what do you mean by *memory issues*? `MemoryError`? Did you isolate the problem to these lines and not to others in a larger process? – Parfait Dec 26 '21 at 23:49
