
I am confused about which types of operations are supported for automatic differentiation in tf. Concretely, are tensor indexing operations like the following supported?

...
# feat is the output of some conv layer; its shape is B*H*W*C

# case one
loss = feat[:,1:,1:,:] - feat[:,:-1,:-1,:]

# case two
feat[:,1:,1:,:] = feat[:,1:,1:,:]/2. # assign and replace part of the original values
loss = tf.reduce_sum(feat)
lhao0301

1 Answer


This isn't a direct answer, but as a clue: the automatic differentiation library autograd lists operations that are not supported (see "Non-differentiable functions"); for example, floor() and round() are not auto-differentiable.

You can also define your own operations, provided you can code the gradients yourself; see extend-autograd-by-defining-your-own.

I would guess tf is very similar to this.
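
For case one specifically, here is a minimal sketch (my own, not from the question) of how you could check it yourself in TF 2.x eager mode with tf.GradientTape; the shapes and random data are just placeholders for the conv-layer output:

import tensorflow as tf

# Does the gradient flow through the slicing in "case one"?
# feat is random data standing in for a conv-layer output of shape (B, H, W, C).
feat = tf.random.normal([2, 5, 5, 3])

with tf.GradientTape() as tape:
    tape.watch(feat)  # feat is a plain tensor (not a Variable), so watch it explicitly
    diff = feat[:, 1:, 1:, :] - feat[:, :-1, :-1, :]
    loss = tf.reduce_sum(diff)

grad = tape.gradient(loss, feat)
print(grad is None)   # False -- slicing produced a usable gradient
print(grad.shape)     # (2, 5, 5, 3)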

Sida Zhou
  • As for indexing operations, my first guess was that they're not auto-differentiable. But then, max-pooling is auto-differentiable, and max-pooling is just selecting a certain element, i.e. an indexing operation. So my second guess is that indexing operations are auto-differentiable, at least in theory. – Sida Zhou Oct 13 '20 at 02:01
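
Regarding case two: as far as I know, TF tensors do not support item assignment in the first place, so that line would fail before differentiation even comes into play. A minimal sketch (again my own, with placeholder shapes) of expressing the same update functionally, e.g. with tf.concat, which does stay differentiable:

import tensorflow as tf

feat = tf.random.normal([2, 5, 5, 3])

with tf.GradientTape() as tape:
    tape.watch(feat)
    # Halve the [:, 1:, 1:, :] block, then stitch the tensor back together
    # with concat instead of in-place assignment.
    top_rows = feat[:, :1, :, :]                      # untouched first row
    left_col = feat[:, 1:, :1, :]                     # untouched first column below row 0
    scaled   = feat[:, 1:, 1:, :] / 2.                # the modified block
    lower    = tf.concat([left_col, scaled], axis=2)  # reassemble along W
    new_feat = tf.concat([top_rows, lower], axis=1)   # reassemble along H
    loss = tf.reduce_sum(new_feat)

print(tape.gradient(loss, feat).shape)  # (2, 5, 5, 3)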