The tf.assign() operator is the underlying mechanism that implements the Variable.assign() method. It takes a mutable tensor (with a tf.*_ref type) and a new value, and returns a mutable tensor that has been updated with the new value. The return value is provided to make it easier to order an assignment before a subsequent read, but this feature is not well documented. The following two examples will hopefully illustrate the difference:
import tensorflow as tf

sess = tf.Session()
v = tf.Variable(0)
new_v = v.assign(10)
output = v + 5  # `v` may be evaluated before or after the assignment.

sess.run(v.initializer)
result, _ = sess.run([output, new_v.op])
print(result)  # ==> 10 or 15, depending on the order of execution.
v = tf.Variable(0)
new_v = v.assign(10)
output = new_v + 5  # `new_v` is evaluated after the assignment.

sess.run(v.initializer)
result = sess.run(output)
print(result)  # ==> 15
In your code example, the dataflow dependencies enforce the order of execution [read counter] -> new_counter = tf.add(...) -> tf.assign(...) -> [read output of assign] -> result = tf.add(...), so the semantics are unambiguous. However, the separate read-modify-write steps used to update the counter are somewhat inefficient, and they can behave unexpectedly when multiple steps run concurrently. For example, several threads accessing the same variable could observe the counter moving backwards (if an older value is written back after a newer one).
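For reference, here is a sketch of that non-atomic pattern (reconstructed from the description above, so the exact names and constants are assumed):
counter = tf.Variable(0, name="counter")
# Three separate ops: read, add, write. Another thread's update can land
# between the read and the write, so an increment can be lost or an older
# value can be written back over a newer one.
new_counter = tf.add(counter, 1)
update = tf.assign(counter, new_counter)
result = tf.add(update, 10)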
I would recommend that you use Variable.assign_add() to update the counter, as follows:
counter = tf.Variable(0, name="counter")
one = tf.constant(1)
ten = tf.constant(10)
# assign_add ensures that the counter always moves forward.
updated_counter = counter.assign_add(one, use_locking=True)
result = tf.add(updated_counter, ten)
# ...
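To drive this graph, the usual TF 1.x session boilerplate applies (a sketch; the loop count and the printed values are just illustrative):
with tf.Session() as sess:
    sess.run(counter.initializer)
    for _ in range(3):
        # Each run atomically increments the counter and then adds ten.
        print(sess.run(result))  # ==> 11, 12, 13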