
I am new to TensorFlow and I am building a network, but I am failing to compute/apply the gradients for it. I get the error:

ValueError: No gradients provided for any variable: ((None, &lt;tensorflow.python.ops.variables.Variable object at 0x1025436d0&gt;), ... (None, &lt;tensorflow.python.ops.variables.Variable object at 0x10800b590&gt;))

I tried using a TensorBoard graph to see if there was something that made it impossible to trace the graph and get the gradients, but I could not see anything.

Here's part of the code:

import tensorflow as tf

sess = tf.Session()

X = tf.placeholder(type, [batch_size, feature_size])
W = tf.Variable(tf.random_normal([feature_size, elements_size * dictionary_size]), name="W")
target_probabilties = tf.placeholder(type, [batch_size * elements_size, dictionary_size])

lstm = tf.nn.rnn_cell.BasicLSTMCell(lstm_hidden_size)
stacked_lstm = tf.nn.rnn_cell.MultiRNNCell([lstm] * number_of_layers)
initial_state = state = stacked_lstm.zero_state(batch_size, type)

output, state = stacked_lstm(X, state)

pred = tf.matmul(output, W)
pred = tf.reshape(pred, (batch_size * elements_size, dictionary_size))

# instead of calculating this, I will calculate the difference between the target_W and the current W
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(target_probabilties, pred)

cost = tf.reduce_mean(cross_entropy)

optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

sess.run(optimizer, feed_dict={X: my_input, target_probabilties: target_prob})

I would appreciate any help figuring this out.

Michel
  • Where is NanoporeTensor defined in your code? – Phillip Bock Aug 05 '16 at 06:07
  • I am sorry, I forgot to take that out when I wrote down the code here. It is not actually in this code here, but in the original one, it should not matter for this. I edited it already. – Michel Aug 05 '16 at 06:54
  • 
    Is this the real code? You do have that line `sess.run(optimizer, feed_dict={X:my_input, target_probabilties:target_prob})` inside a loop where you actually feed something into the my_input- and target_prob-placeholders, right? – Phillip Bock Aug 05 '16 at 07:11
  • This is not the real code. I do have sess.run inside a loop where I feed all the inputs. I simplified the code because I thought this way would be better to understand the problem; I did not see how the rest would be important. I could upload the code if it's actually needed. – Michel Aug 05 '16 at 07:22
  • No, all is good. I just missed the input loop. Anyway, I always use tf.nn.softmax_cross_entropy_with_logits() with the logits as the first argument and the labels as the second. Can you try this? – Phillip Bock Aug 05 '16 at 07:43
  • Oh my god. I cannot believe this was the problem. I spent so much time on this and it was such a small thing. Thank you very much. – Michel Aug 05 '16 at 08:24
  • Cool ;-) I will post this as the reply. Can you mark it as correct then? Thx :) – Phillip Bock Aug 05 '16 at 09:32

1 Answer


I always use tf.nn.softmax_cross_entropy_with_logits() with the logits as the first argument and the labels as the second. Can you try this?
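To see why the order matters, here is a plain NumPy sketch (an illustration, not TensorFlow's actual implementation) of what the op computes: a softmax over the logits, then the cross entropy against the labels. Swapping the two arguments applies the softmax to the labels instead and produces a different, meaningless loss:

```python
import numpy as np

def softmax_cross_entropy(labels, logits):
    # Numerically stable log-softmax over the last axis.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    log_softmax = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    # Per-row cross entropy between the target distribution and softmax(logits).
    return -(labels * log_softmax).sum(axis=-1)

labels = np.array([[0.0, 1.0]])   # one-hot target
logits = np.array([[2.0, 5.0]])   # raw scores that agree with the target

print(softmax_cross_entropy(labels, logits))  # correct order: small loss
print(softmax_cross_entropy(logits, labels))  # swapped: a much larger, meaningless value
```

Newer TensorFlow releases require this function's arguments to be passed by name (labels=..., logits=...) precisely because of this confusion; using the keyword form avoids the mistake entirely.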

Phillip Bock