import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[1, 31, 5, 1])

def func(x):
    # ... convolution operations ...
    return output

convolutionFunction = func(x)
sess = tf.Session()
gradientConv1 = gradientConv1 + sess.run(tf.gradients(tf.square(reward - convolutionFunction), weightsConv1))

gradientConv1 is a numpy array of shape [2,2,1,32]; weightsConv1 is a tensor variable of shape [2,2,1,32].

I'm getting this error: "Placeholder should have a dtype of float and shape of [1,31,5,1]". It seems to be telling me that I have not passed a feed_dict to sess.run. Could you point me to the error? Also, is my way of differentiating with respect to each weight value correct?

reward is a scalar

sagar_acharya
  • TensorFlow differentiation is performed using autodiff, not a symbolic technique. You cannot get gradient information without first providing a 1x31x5x1 tensor of values to place in the placeholder. – nanofarad Oct 06 '18 at 19:18
  • So basically, do I have to provide the point at which I want to find the gradient? How can I give it here? – sagar_acharya Oct 06 '18 at 19:22
  • Correct, you can pass it by adding a parameter to `sess.run` as follows: `feed_dict={x: POINT}` where `POINT` is typically given as a Python/numpy array. – nanofarad Oct 06 '18 at 19:28
  • Thank you so much Andrey! :) – sagar_acharya Oct 06 '18 at 19:35

1 Answer

gradientConv1 = gradientConv1 + sess.run(tf.gradients(tf.square(reward - convolutionFunction), weightsConv1)[0], feed_dict={x: <valueOfPlaceholder>})

where `valueOfPlaceholder` is the point at which we wish to evaluate the function.
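
For concreteness, here is a minimal, self-contained sketch of the same fix. It assumes TensorFlow 1.x graph mode; the single conv filter bank, the truncated-normal initialisation and the random evaluation point are illustrative stand-ins for the asker's func and data, not part of the original post:

import numpy as np
import tensorflow as tf

# Hypothetical stand-ins for the asker's graph: a 2x2 filter bank of shape
# [2, 2, 1, 32] and a scalar reward. The feed_dict is the part that matters.
x = tf.placeholder(tf.float32, shape=[1, 31, 5, 1])
weightsConv1 = tf.Variable(tf.truncated_normal([2, 2, 1, 32], stddev=0.1))
convolutionFunction = tf.reduce_sum(
    tf.nn.conv2d(x, weightsConv1, strides=[1, 1, 1, 1], padding='SAME'))
reward = 1.0

# tf.gradients returns a list with one tensor per variable; take the first
# element so the fetched value is a single array of shape [2, 2, 1, 32].
gradOp = tf.gradients(tf.square(reward - convolutionFunction), weightsConv1)[0]

gradientConv1 = np.zeros([2, 2, 1, 32], dtype=np.float32)
point = np.random.rand(1, 31, 5, 1).astype(np.float32)  # point at which to evaluate

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Feeding the placeholder is what resolves the "Placeholder ... dtype float,
    # shape [1,31,5,1]" error: autodiff needs a concrete input to evaluate the gradient.
    gradientConv1 = gradientConv1 + sess.run(gradOp, feed_dict={x: point})

Evaluating the gradient at a different point just means calling sess.run again with a different array in feed_dict and accumulating into gradientConv1 as above.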

Thanks to Andrey Akhmetov for pointing this out!

sagar_acharya