1

I have to admit, I'm a bit confused by the scatter* and index* operations - I'm not sure any of them do exactly what I'm looking for, which is very simple:

Given some 2-D tensor

z = tensor([[1., 1., 1., 1.],
            [1., 1., 1., 1.],
            [1., 1., 1., 1.]])

And a list (or tensor?) of 2-d indexes:

inds = tensor([[0, 0],
               [1, 1],
               [1, 2]])

I want to add a scalar to z at those indexes (and do it efficiently):

znew = z.something_add(inds, 3)
->
znew = tensor([[4., 1., 1., 1.],
               [1., 4., 4., 1.],
               [1., 1., 1., 1.]])

If I have to I can make that scalar a tensor of whatever shape (where all elements = 3), but I'd rather not...

Colin
    In case anybody comes here to find a solution for overlapping indices, check this out: https://stackoverflow.com/a/65584479/3337089 – Nagabhushan S N Jan 05 '21 at 18:39
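
For readers who land on the overlapping-indices case mentioned in the comment above, here is a minimal sketch using index_put_ with accumulate=True, which sums contributions at duplicate indices (the duplicate pair below is made up for illustration; the linked answer may take a different route):

import torch

z = torch.ones(3, 4)
# Duplicate pair (1, 2) included purely to show the accumulate behaviour.
inds = torch.tensor([[0, 0], [1, 1], [1, 2], [1, 2]])
vals = torch.full((inds.shape[0],), 3.0)

# index_put_ takes a tuple of index tensors, one per dimension;
# with accumulate=True, values at duplicate indices are summed.
z.index_put_((inds[:, 0], inds[:, 1]), vals, accumulate=True)
# z is now:
# tensor([[4., 1., 1., 1.],
#         [1., 4., 7., 1.],
#         [1., 1., 1., 1.]])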

2 Answers

3

You must provide two lists to the indexing: the first with the row positions and the second with the column positions. In your example, it would be:

z[[0, 1, 1], [0, 1, 2]] += 3

torch.Tensor indexing follows NumPy. See https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#integer-array-indexing for more details.
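
For reference, a complete runnable version with the z from the question:

import torch

z = torch.ones(3, 4)          # same as the z in the question
z[[0, 1, 1], [0, 1, 2]] += 3  # rows [0, 1, 1] paired with columns [0, 1, 2]
print(z)
# tensor([[4., 1., 1., 1.],
#         [1., 4., 4., 1.],
#         [1., 1., 1., 1.]])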

Fábio Perez
  • I'm accepting yours because it works and you were first. I still have concerns though - will this be efficient? If both z and inds are gpu-resident tensors, can this all be done on the gpu? It seems like there should be some method on tensor that can do this, and do it efficiently... – Colin Sep 19 '19 at 21:02
  • I think it's supposed to be efficient, even if you are passing CPU lists. I'm not sure about huge index arrays, though. If your tensor is very big but you only care about some of the components, try [`torch.sparse`](https://pytorch.org/docs/stable/sparse.html). – Fábio Perez Sep 20 '19 at 13:17
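
Following up on the torch.sparse suggestion in the comment above, one possible sketch is to hold the updates in a sparse COO tensor (converted to dense here just to keep the example short; this is an illustration of the API, not a benchmark-backed recommendation):

import torch

z = torch.ones(3, 4)
inds = torch.tensor([[0, 0], [1, 1], [1, 2]])
vals = torch.full((inds.shape[0],), 3.0)

# sparse_coo_tensor expects indices of shape (ndim, nnz), hence the transpose.
updates = torch.sparse_coo_tensor(inds.t(), vals, z.shape)

# Converted to dense here for simplicity; for very large tensors you would
# try to stay in the sparse representation as long as possible.
z_new = z + updates.to_dense()
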
1

This code achieves what you want:

z_new = z.clone() # copy the tensor
z_new[inds[:, 0], inds[:, 1]] += 3 # modify selected indices of new tensor

In PyTorch, you can index each axis of a tensor with another tensor, so here the first column of inds selects the rows and the second column selects the columns.
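
Put together with the tensors from the question, for example:

import torch

z = torch.ones(3, 4)
inds = torch.tensor([[0, 0], [1, 1], [1, 2]])

z_new = z.clone()                   # leave the original z untouched
z_new[inds[:, 0], inds[:, 1]] += 3  # column 0 of inds: rows, column 1: columns
print(z_new)
# tensor([[4., 1., 1., 1.],
#         [1., 4., 4., 1.],
#         [1., 1., 1., 1.]])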