
I've read that I can see the contents of TF tensors by using tf.print inside my tf.function definition, but it doesn't work for me. My tf.__version__ is 2.5.0. I run the following function inside a Jupyter notebook:

tf.compat.v1.disable_eager_execution()

@tf.function
def tf_fun(inputs):
    x = tf.strings.lower(inputs)
    tf.print(x)
    return x
Without print:

tf_fun(inputs)
<tf.Tensor 'StatefulPartitionedCall_6:0' shape=(7,) dtype=string>

With print:

print(tf_fun(inputs))
Tensor("StatefulPartitionedCall_5:0", shape=(7,), dtype=string)

I want eager execution to be disabled because I use some functions from the tf.Transform module that work only in graph mode elsewhere in this notebook. How can I see the contents of the tensor to make sure that my function produces exactly what I want?

Another (less important) problem is that if I try to assign the returned value to a variable for further processing, tf.print prints something only on the first call to tf_fun (I know it has something to do with tracing, but I don't understand it and would like to know how to fix it).

Edit: after adding something from the tf.Transform module I got an error.

@tf.function
def transform_product1(inputs, top_k_products):
    product = tf.strings.lower(inputs)
    product = tf.strings.reduce_join(product, -1, separator=' ')
    product = tft.vocabulary(product, top_k=top_k_products)
    return product

prod = transform_product1(inputs,4)
sess = tf.compat.v1.Session() 
print(sess.run(prod))
InvalidArgumentError: You must feed a value for placeholder tensor 'PartitionedCall_7/vocabulary/temporary_analyzer_output_1/Placeholder' with dtype string
     [[{{node vocabulary/temporary_analyzer_output_1/Placeholder}}]]
  • With disabling eager execution you need to run a session to trigger graph. Like this: `a=tf_fun(inputs) sess=tf.compat.v1.Session() print(sess.run(a))`. Without disabling eager-mode and just with @tf.function decorator you can use `tf.print` with no problem. – Kaveh Aug 17 '21 at 08:57
  • @Kaveh Thanks! Unfortunately I have further problems - I described them in edit – Brzoskwinia Aug 17 '21 at 09:37

1 Answer


You can also try force_tf_compat_v1=True to run the legacy tf.Transform behavior, and try printing the function's return value by calling the function outside tf.function. Alternatively, for the part where you use tf.Transform, you can wrap it in a tf.Graph and evaluate it with a session, as below.

import tensorflow as tf

g = tf.Graph()
with g.as_default():
  # Define operations and tensors in `g`.
  c = tf.constant(30.0)

# Evaluate tensors from `g` by running them in a session bound to that graph.
with tf.compat.v1.Session(graph=g) as sess:
  print(sess.run(c))
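For the tf.print part of the question: if eager execution is left enabled in a given cell, tf.print inside a tf.function fires on every call, not just the first trace. A minimal sketch, assuming TF 2.x with eager execution enabled and a made-up input tensor:

import tensorflow as tf

@tf.function
def tf_fun(inputs):
    x = tf.strings.lower(inputs)
    tf.print(x)  # a graph-side op, so it runs on every call, not only while tracing
    return x

# Hypothetical example input; any string tensor works.
result = tf_fun(tf.constant(["Apple", "BANANA"]))
print(result.numpy())

Unlike Python's built-in print (which only runs while the function is being traced), tf.print is compiled into the graph, which is why it keeps printing on repeated calls.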