Given a 1D tensor:
tensor([1, 2, 3, 4, 5, 6.])
the expected result is:
tensor([-1., -2., -3., -4., -5., -1., -2., -3., -4., -1., -2., -3., -1., -2., -1.])
The closest built-in solution is to compute the pairwise 1-norm distances with torch.pdist. However, this returns the absolute values, not the signed differences:
import torch
x = torch.tensor([[1], [2], [3], [4], [5], [6.]])
torch.pdist(x, p=1)
# tensor([1., 2., 3., 4., 5., 1., 2., 3., 4., 1., 2., 3., 1., 2., 1.])
Another approach is broadcasting. However, this produces a full square matrix, including a zero diagonal and redundant lower-triangular values:
x = torch.tensor([1, 2, 3, 4, 5, 6.])
x[:, None] - x[None, :]
# tensor([[ 0., -1., -2., -3., -4., -5.],
# [ 1., 0., -1., -2., -3., -4.],
# [ 2., 1., 0., -1., -2., -3.],
# [ 3., 2., 1., 0., -1., -2.],
# [ 4., 3., 2., 1., 0., -1.],
# [ 5., 4., 3., 2., 1., 0.]])
In this case, a function like SciPy's squareform would help extract the condensed form, but I am not sure whether it would break the differentiability of the loss function that includes this step.
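One way to stay entirely in PyTorch is to index the strict upper triangle of the broadcast matrix with torch.triu_indices; since this is ordinary tensor indexing, it should keep autograd intact. A minimal sketch, assuming the same 1D tensor x as above:

import torch

x = torch.tensor([1, 2, 3, 4, 5, 6.], requires_grad=True)
diff = x[:, None] - x[None, :]                       # full (n, n) matrix of signed differences
i, j = torch.triu_indices(len(x), len(x), offset=1)  # strict upper-triangle indices, row-major order
result = diff[i, j]                                  # condensed form, analogous to squareform
# result values: -1., -2., -3., -4., -5., -1., -2., -3., -4., -1., -2., -3., -1., -2., -1.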
Yet another approach is to use a Python loop: subtract each element from the slice of all following elements, and finally concatenate the partial results into a single tensor. However, I suppose this is not worth trying because of its slowness, and it will probably interfere with gradient calculations.
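For reference, a minimal sketch of that loop-based approach, again assuming the same tensor x (torch.cat is itself differentiable, though the Python loop scales poorly with the tensor length):

import torch

x = torch.tensor([1, 2, 3, 4, 5, 6.])
# subtract each element from the slice of all elements that follow it
parts = [x[i] - x[i + 1:] for i in range(len(x) - 1)]
result = torch.cat(parts)
# tensor([-1., -2., -3., -4., -5., -1., -2., -3., -4., -1., -2., -3., -1., -2., -1.])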