
I'm not sure if this is a TensorFlow bug or a misunderstanding on my part about what this function is supposed to do, but I can't get tf.py_function to return an EagerTensor while in graph mode. Consequently, calling .numpy() on the output of this function fails.

The issue can be reproduced by taking the exact example from the official documentation (https://www.tensorflow.org/api_docs/python/tf/py_function) and disabling eager execution:

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

def log_huber(x, m):
  if tf.abs(x) <= m:
    return x**2
  else:
    return m**2 * (1 - 2 * tf.math.log(m) + tf.math.log(x**2))

x = tf.constant(1.0)
m = tf.constant(2.0)

with tf.GradientTape() as t:
  t.watch([x, m])
  y = tf.py_function(func=log_huber, inp=[x, m], Tout=tf.float32)

dy_dx = t.gradient(y, x)
assert dy_dx.numpy() == 2.0

This generates the following error:

Traceback (most recent call last):
  File "<input>", line 17, in <module>
  File "C:\Users\...\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\framework\ops.py", line 446, in __getattr__
    self.__getattribute__(name)
AttributeError: 'Tensor' object has no attribute 'numpy'

Version info

I am running Python 3.8 and TensorFlow 2.9.1.

Any help would be greatly appreciated!

42bsk

1 Answer


Solution 1 (with eager execution):

In TensorFlow 2, eager execution is enabled by default.

I ran the exact code from the TensorFlow documentation without any problem (the assertion passes), using Colab with TensorFlow 2.8.2 and Python 3.7.13.

If you still have problems, you could try calling tf.config.run_functions_eagerly(True), but it should work even without it.
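
For example, a quick sanity check (a minimal sketch, not part of the original answer):

import tensorflow as tf

# Should print True in TF2 unless eager execution was explicitly disabled.
print(tf.executing_eagerly())

# Optionally force tf.function-decorated code to run eagerly (handy for debugging).
tf.config.run_functions_eagerly(True)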

Solution 2 (without eager execution):

If you want to keep eager execution disabled, you can work with sessions (see the tf.compat.v1.Session documentation). Instead of calling .numpy(), call .eval() on your Tensor and wrap everything in a session. That's it.

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

def log_huber(x, m):
  if tf.abs(x) <= m:
    return x**2
  else:
    return m**2 * (1 - 2 * tf.math.log(m) + tf.math.log(x**2))

x = tf.constant(1.0)
m = tf.constant(2.0)

# Launch the graph in a session.
sess = tf.compat.v1.Session()

with tf.GradientTape() as t:
  t.watch([x, m])
  y = tf.py_function(func=log_huber, inp=[x, m], Tout=tf.float32)

with sess.as_default():
  dy_dx = t.gradient(y, x)
  assert dy_dx.eval() == 2.0
  print(dy_dx.eval())

sess.close()
ClaudiaR
  • Thanks for the reply, but my question is about getting it to run with eager execution *disabled*. Sorry if this wasn't clear in the initial post. If you copy-paste the example from the tensorflow docs without adding tf.compat.v1.disable_eager_execution(), it runs fine, of course. But the point of py_function is to execute a function eagerly while in graph mode. – 42bsk Jul 31 '22 at 13:39
  • Sorry, my bad, I misunderstood. You should be able to execute without eager mode with the second solution. I updated the answer @42bsk – ClaudiaR Jul 31 '22 at 14:01
  • No problem, and thanks again! Unfortunately eval() does not work for my problem, which is a whole other story and I'll save that for a different post... So, are you saying that tf.py_function does not execute eagerly when eager execution is disabled? I thought that was the whole point of the function: from the doc: "Wraps a python function into a TensorFlow op that executes it eagerly." Have I misunderstood what this means? – 42bsk Aug 01 '22 at 00:42
  • @42bsk I think that `tf.py_function` only executes the wrapped code, that is the code inside the function, eagerly. This allows you to use Python constructs inside, but not on the output of the wrapped function – ClaudiaR Aug 01 '22 at 04:31
  • I upvoted your answer since I appreciate anyone trying to help, but unfortunately this did not solve my issue. I tried moving all non-tensorflow code inside the py_function wrapper, including printing/saving intermediate output to a file, but when global eager execution is disabled, the non-tensorflow code is simply skipped and not executed at all. Despite what is written in the docs -- "You can also use tf.py_function to debug your models at runtime using Python tools" -- breakpoints inside the wrapped function are also skipped when eager execution is disabled. – 42bsk Aug 05 '22 at 15:05
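
To illustrate the behavior discussed in these comments, here is a minimal sketch (not from the original post or answer; the helper name debug_fn is made up): in graph mode the Python code inside tf.py_function runs only when the op is actually executed, e.g. via sess.run, and the op's output is a symbolic Tensor rather than an EagerTensor.

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

def debug_fn(x):
  # The input should arrive here as an EagerTensor, but this Python code
  # only runs when the op itself is executed, not when the graph is built.
  print("inside py_function:", x.numpy())
  return x * 2

x = tf.constant(3.0)
y = tf.py_function(func=debug_fn, inp=[x], Tout=tf.float32)

print(type(y))  # a symbolic graph Tensor, so y.numpy() is unavailable here

with tf.compat.v1.Session() as sess:
  print(sess.run(y))  # executing the op is what triggers the print inside debug_fn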