
I'm running tf2.0 in a conda environment, and would like to display a tensor in a figure.

plt.imshow(tmp)
TypeError: Image data of dtype object cannot be converted to float

tmp.dtype
tf.float32

So I tried converting it to a numpy array, but...

print(tmp.numpy())
AttributeError: 'Tensor' object has no attribute 'numpy'

tmp.eval()
ValueError: Cannot evaluate tensor using `eval()`: No default session is registered. Use `with sess.as_default()` or pass an explicit session to `eval(session=sess)`

I've read elsewhere that this is because I need an active session or eager execution. Eager execution should be enabled by default in tf2.0, but...

print(tf.__version__)
2.0.0-alpha0

tf.executing_eagerly()
False

tf.enable_eager_execution()
AttributeError: module 'tensorflow' has no attribute 'enable_eager_execution'

tf.compat.v1.enable_eager_execution()
None

tf.executing_eagerly()
False

sess = tf.Session()
AttributeError: module 'tensorflow' has no attribute 'Session'

I tried upgrading to 2.0.0b1, but the results were exactly the same (apart from the value of tf.__version__).

Edit:

According to this answer, the problem is probably that I am trying to debug a function inside a tf.data.Dataset.map() call, which works with static graphs. So perhaps the question becomes: "how do I debug these functions?"
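One way to debug such a function (a sketch; the preprocess function below is my own placeholder, not from the question) is to pull a single element out of the dataset eagerly and call the function on it directly, instead of tracing it through map():

```python
import tensorflow as tf

# Hypothetical preprocessing function of the kind you might pass to map()
def preprocess(x):
    return x * 2.0

ds = tf.data.Dataset.from_tensor_slices([1.0, 2.0, 3.0])

# Iterating the dataset in Python is eager, so the function's inputs and
# outputs are EagerTensors and .numpy(), print(), pdb etc. all work:
sample = next(iter(ds))
result = preprocess(sample)
print(result.numpy())  # 2.0
```

Once the function behaves correctly on a single eager element, you can hand it back to ds.map().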

craq

1 Answer


The critical insight for me was that calling tf.data.Dataset.map() traces the mapped function into a graph, and that graph is executed later as part of the data pipeline. So it is more about code generation, and eager execution doesn't apply. Besides the lack of eager execution, building a graph has other restrictions, including that all inputs and outputs must be tensors. Tensors don't support item assignment operations such as T[0] += 1.
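A minimal sketch of the item-assignment restriction (even in eager mode), along with the graph-friendly workaround of building a new tensor instead of mutating the old one:

```python
import tensorflow as tf

t = tf.constant([1.0, 2.0, 3.0])

item_assignment_failed = False
try:
    t[0] += 1  # tensors are immutable: this raises TypeError
except TypeError:
    item_assignment_failed = True
print(item_assignment_failed)  # True

# The graph-friendly alternative is to construct a new tensor:
updated = tf.tensor_scatter_nd_update(t, indices=[[0]], updates=[t[0] + 1.0])
print(updated.numpy())  # [2. 2. 3.]
```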

Item assignment is a fairly common use case, so there is a straightforward solution: tf.py_function (the successor of tf.py_func). The function wrapped by py_function runs eagerly, so inside it you can convert tensors to numpy arrays and make use of other numpy functions which have not yet been included in the tensorflow library.
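A short sketch of how this looks inside a map() call (the increment_first function is my own illustration): the wrapped function receives eager tensors, so .numpy() works and numpy item assignment is allowed.

```python
import tensorflow as tf

# Hypothetical map function: inside tf.py_function the code runs eagerly
def increment_first(x):
    a = x.numpy().copy()  # .numpy() works here; copy so we can mutate
    a[0] += 1             # numpy arrays DO support item assignment
    return a

ds = tf.data.Dataset.from_tensor_slices([[1.0, 2.0], [3.0, 4.0]])
ds = ds.map(lambda x: tf.py_function(func=increment_first,
                                     inp=[x], Tout=tf.float32))

print([t.numpy().tolist() for t in ds])  # [[2.0, 2.0], [4.0, 4.0]]
```

Note that py_function loses static shape information, so you may need to call set_shape() on the result if later pipeline stages require it.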

As usual, there is a trade-off: a py_function is interpreted on the fly by the python interpreter, so it won't be as fast as pre-compiled tensor operations. More importantly, the Python global interpreter lock serialises py_function calls, so they can limit the parallelisation of the data pipeline.

There's a helpful explanation and demonstration of a py_function in the documentation: https://www.tensorflow.org/beta/guide/data
