I am trying to compute the exhaustive pairwise concatenation of the rows of a tensor. For example, given the tensor:
import torch

a = torch.randn(3, 512)
I want every ordered pair of its rows concatenated, i.e. concat(t1, t1), concat(t1, t2), concat(t1, t3), concat(t2, t1), concat(t2, t2), and so on, where t1, t2, t3 denote the rows of a.
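To make the goal concrete (continuing from a above; t1 and t2 here are just two rows pulled out for illustration):

t1, t2 = a[0], a[1]         # each row has shape (512,)
pair = torch.cat((t1, t2))  # one ordered pair, shape (1024,)
# the full result is all 3 * 3 = 9 ordered pairs, i.e. (9, 1024) when stacked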
As a naive solution, I used a nested for loop:
result = []
# split a into its rows, each of shape (1, 512)
ans = list(torch.split(a, 1, dim=0))
for t1 in ans:
    for t2 in ans:
        # each pair concatenates to shape (1, 1024)
        result.append(torch.cat((t1, t2), dim=1))
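If a single tensor is needed afterwards, the list can be collapsed (the shape comment assumes the (3, 512) example above):

out = torch.cat(result, dim=0)  # shape (9, 1024)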
The issue is that each epoch takes a very long time because this code is slow. I also tried the solution posted in the question PyTorch: How to implement attention for graph attention layer, but it gives a memory error:
# each row repeated a.shape[0] times in a row: (9, 512)
t1 = a.repeat(1, a.shape[0]).view(a.shape[0] * a.shape[0], -1)
# the whole tensor tiled a.shape[0] times: (9, 512)
t2 = a.repeat(a.shape[0], 1)
# all 9 ordered pairs at once: (9, 1024)
result = torch.cat((t1, t2), dim=1)
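One direction that might reduce the memory pressure (a sketch I have not verified at scale: unsqueeze/expand create views rather than the full repeat copies, so only the final concatenated tensor is materialized; the names n, d, left, right are just for illustration):

n, d = a.shape
left = a.unsqueeze(1).expand(n, n, d)   # view, no copy: (n, n, d)
right = a.unsqueeze(0).expand(n, n, d)  # view, no copy: (n, n, d)
# only this final (n * n, 2 * d) tensor is actually allocated
result = torch.cat((left, right), dim=2).view(n * n, 2 * d)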
I am sure there is a faster way, but I was unable to figure it out.