
I am experimenting with the quantization of a neural network in TensorFlow 1.1.

According to the documentation, the tanh operation supports floating point inputs as well as fixed point inputs of type qint32. However, I can't get this to work:

import tensorflow as tf
from tensorflow.python.ops.gen_array_ops import quantize_v2

sess = tf.InteractiveSession()

x = tf.constant([1., 2., 3.], dtype=tf.float32)

# quantize_v2 returns a tuple (output, output_min, output_max), hence x_quant[0] below
x_quant = quantize_v2(x, min_range=0., max_range=4., T=tf.qint32)

y_quant = tf.nn.tanh(x_quant[0])

The code yields an error message:

TypeError: Value passed to parameter 'x' has DataType qint32 not in list of 
allowed values: float16, float32, float64, complex64, complex128

Is there a way around this, or is it just a bug in the docs?

rerx
1 Answer


It's probably a bug in the docs. According to the backend function _tanh in gen_math_ops.py:

def _tanh(x, name=None):
  r"""Computes hyperbolic tangent of `x` element-wise.

  Args:
    x: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`.
    name: A name for the operation (optional).

Since quantization support is still quite new, a quantized version of _tanh may still be a work in progress.
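
In the meantime, one possible workaround is to dequantize before the activation, apply tanh in float, and re-quantize the result. This is only a minimal sketch, not something from the official docs: the re-quantization range of (-1, 1) and the use of tf.dequantize here are assumptions for illustration.

import tensorflow as tf
from tensorflow.python.ops.gen_array_ops import quantize_v2

sess = tf.InteractiveSession()

x = tf.constant([1., 2., 3.], dtype=tf.float32)

# Quantize to qint32; quantize_v2 returns (output, output_min, output_max).
x_quant, x_min, x_max = quantize_v2(x, min_range=0., max_range=4., T=tf.qint32)

# Dequantize back to float32 and apply tanh in floating point.
x_float = tf.dequantize(x_quant, min_range=x_min, max_range=x_max)
y_float = tf.nn.tanh(x_float)

# tanh outputs lie in (-1, 1), so use that as the re-quantization range (assumption).
y_quant, y_min, y_max = quantize_v2(y_float, min_range=-1., max_range=1., T=tf.qint32)

print(sess.run(y_quant))

Note that this keeps tanh itself in floating point, so it sidesteps the missing quantized kernel rather than providing a truly quantized tanh.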

YLJ
  • After looking through the file history, I can confirm that this is a bug in the docs that was introduced back in July 2015, before we'd released and when the quantization definition wasn't completely nailed down. – Pete Warden Jun 01 '17 at 20:30
  • Thanks for clarifying this! So what would be the most straightforward way to quantize inference for a network that uses a tanh activation function? – rerx Jun 02 '17 at 07:49