I've encountered the same issue, and I've been trying to solve it with tensorflow==1.9.0.
Code:
import numpy as np
import tensorflow as tf

tf.reset_default_graph()

batch = 2
dim = 3
hidden = 4

# Every variable created via get_variable inside this scope should get the L2 regularizer.
with tf.variable_scope('test', regularizer=tf.contrib.layers.l2_regularizer(0.001)):
    lengths = tf.placeholder(dtype=tf.int32, shape=[batch])
    inputs = tf.placeholder(dtype=tf.float32, shape=[batch, None, dim])
    cell = tf.nn.rnn_cell.GRUCell(hidden)
    cell_state = cell.zero_state(batch, tf.float32)
    output, _ = tf.nn.dynamic_rnn(cell, inputs, lengths, initial_state=cell_state)

inputs_ = np.asarray([[[0, 0, 0], [1, 1, 1], [2, 2, 2], [3, 3, 3]],
                      [[6, 6, 6], [7, 7, 7], [8, 8, 8], [9, 9, 9]]],
                     dtype=np.int32)
lengths_ = np.asarray([3, 1], dtype=np.int32)

this_throws_error = tf.losses.get_regularization_loss()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    output_ = sess.run(output, {inputs: inputs_, lengths: lengths_})
    print(output_)
    print(sess.run(this_throws_error))
This is the result of running the code:
...
File "/Users/piero/Development/mlenv3/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_util.py", line 314, in CheckInputFromValidContext
raise ValueError(error_msg + " See info log for more details.")
ValueError: Cannot use 'test/rnn/gru_cell/gates/kernel/Regularizer/l2_regularizer' as input to 'total_regularization_loss' because 'test/rnn/gru_cell/gates/kernel/Regularizer/l2_regularizer' is in a while loop. See info log for more details.
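One way around the error on 1.9.0, which I haven't tested but sketch here for completeness, is to skip the collection mechanism and build the L2 term by hand from the scope's trainable variables, so that nothing created inside the while loop is ever consumed (you could then drop the regularizer argument from the scope entirely):

# Workaround sketch (an untested assumption, not from the scripts above):
# compute the penalty manually; 0.001 mirrors the scale passed to
# l2_regularizer, which computes scale * sum(w ** 2) / 2, the same as
# scale * tf.nn.l2_loss(w).
reg_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='test')
manual_l2 = 0.001 * tf.add_n([tf.nn.l2_loss(v) for v in reg_vars])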
Then I tried to put the dynamic_rnn call outside of the variable scope:
import numpy as np
import tensorflow as tf

tf.reset_default_graph()

batch = 2
dim = 3
hidden = 4

with tf.variable_scope('test', regularizer=tf.contrib.layers.l2_regularizer(0.001)):
    lengths = tf.placeholder(dtype=tf.int32, shape=[batch])
    inputs = tf.placeholder(dtype=tf.float32, shape=[batch, None, dim])
    cell = tf.nn.rnn_cell.GRUCell(hidden)
    cell_state = cell.zero_state(batch, tf.float32)

# dynamic_rnn is now called outside the regularized scope
output, _ = tf.nn.dynamic_rnn(cell, inputs, lengths, initial_state=cell_state)

inputs_ = np.asarray([[[0, 0, 0], [1, 1, 1], [2, 2, 2], [3, 3, 3]],
                      [[6, 6, 6], [7, 7, 7], [8, 8, 8], [9, 9, 9]]],
                     dtype=np.int32)
lengths_ = np.asarray([3, 1], dtype=np.int32)

this_throws_error = tf.losses.get_regularization_loss()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    output_ = sess.run(output, {inputs: inputs_, lengths: lengths_})
    print(output_)
    print(sess.run(this_throws_error))
In theory this should be fine, since the regularization applies to the weights of the RNN, and one would expect those variables to be created inside the scope when the cell is instantiated.
This is the output:
[[[ 0. 0. 0. 0. ]
[ 0.1526176 0.33048663 -0.02288104 -0.1016309 ]
[ 0.24402776 0.68280864 -0.04888818 -0.26671126]
[ 0. 0. 0. 0. ]]
[[ 0.01998052 0.82368904 -0.00891946 -0.38874635]
[ 0. 0. 0. 0. ]
[ 0. 0. 0. 0. ]
[ 0. 0. 0. 0. ]]]
0.0
So placing the dynamic_rnn call outside the variable scope works, in the sense that it does not raise errors, but the value of the loss is 0, suggesting that no weight from the RNN is actually being considered when computing the l2 loss.
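A quick way to check this (my addition here, not part of the script above) is to print the collection that get_regularization_loss sums over; when that collection is empty, the function just returns a zero constant, which matches the output:

# Diagnostic sketch: get_regularization_loss simply sums the
# REGULARIZATION_LOSSES collection, so an empty list explains the 0.0.
reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
print(reg_losses)  # expected: [] when dynamic_rnn runs outside the scope

A plausible explanation is that GRUCell creates its weights lazily, on the first call inside dynamic_rnn rather than when the cell object is constructed, so with the call outside the scope the variables never pass through the scope's regularizer.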
Then I tried with tensorflow==1.12.0.
This is the output for the first script, with dynamic_rnn inside the scope:
[[[ 0. 0. 0. 0. ]
[-0.17653276 0.06490126 0.02065791 -0.05175343]
[-0.413078 0.14486027 0.03922977 -0.1465032 ]
[ 0. 0. 0. 0. ]]
[[-0.5176822 0.03947531 0.00206934 -0.5542746 ]
[ 0. 0. 0. 0. ]
[ 0. 0. 0. 0. ]
[ 0. 0. 0. 0. ]]]
0.010403235
And this is the output with dynamic_rnn outside the scope:
[[[ 0. 0. 0. 0. ]
[ 0.04208181 0.03031874 -0.1749279 0.04617848]
[ 0.12169671 0.09322995 -0.29029205 0.08247502]
[ 0. 0. 0. 0. ]]
[[ 0.09673716 0.13300316 -0.02427006 0.00156245]
[ 0. 0. 0. 0. ]
[ 0. 0. 0. 0. ]
[ 0. 0. 0. 0. ]]]
0.0
The fact that the version with dynamic_rnn inside the scope returns a non-zero value suggests that the regularization is now working correctly, while in the other case the returned value of 0 suggests that it is still not behaving as expected.
So the bottom line is: this was a bug in tensorflow that was solved between version 1.9.0 and version 1.12.0.
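Given that, if you are on 1.12.0 or later the safe pattern seems to be keeping the dynamic_rnn call inside the regularized scope and folding the collected term into your objective. A minimal sketch, assuming the first script above; task_loss is a hypothetical stand-in for a real training loss:

# Minimal usage sketch (assumes 1.12+, with dynamic_rnn inside the scope).
task_loss = tf.reduce_mean(tf.square(output))  # hypothetical stand-in
# scope='test' restricts the sum to the regularizers registered in that scope.
total_loss = task_loss + tf.losses.get_regularization_loss(scope='test')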