
I created a tf.estimator model with a tf.data input pipeline in Python and saved it in the tf.saved_model format with TF 2.1. Because tfjs-node does not support the int64 or float64 dtypes, it is unable to load the model.

In TensorBoard, I observed that some input-pipeline Python variables are automatically declared as 64-bit types.

[TensorBoard screenshot: input-pipeline constants such as batch_size and epochs shown with int64 dtypes]

For example, batch_size and epochs above. How can I avoid this problem and load the tf.estimator model in tfjs-node without a conversion step?

To reproduce:
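A minimal sketch of the kind of setup described above (the feature name `x`, the toy `LinearRegressor`, shapes, and export path are illustrative assumptions, not the original code):

```python
import numpy as np
import tensorflow as tf

def input_fn():
    # Plain Python ints such as batch_size and epochs are traced into
    # the tf.data graph as int64 scalars, which is where the 64-bit
    # dtypes seen in TensorBoard come from.
    batch_size = 32
    epochs = 10
    xs = np.random.rand(100, 1).astype(np.float32)
    ys = (2.0 * xs + 1.0).astype(np.float32)
    ds = tf.data.Dataset.from_tensor_slices(({"x": xs}, ys))
    return ds.repeat(epochs).batch(batch_size)

estimator = tf.estimator.LinearRegressor(
    feature_columns=[tf.feature_column.numeric_column("x", shape=[1])])
estimator.train(input_fn)

# Export in SavedModel format for loading with tfjs-node.
def serving_input_fn():
    x = tf.compat.v1.placeholder(tf.float32, shape=[None, 1], name="x")
    return tf.estimator.export.ServingInputReceiver({"x": x}, {"x": x})

estimator.export_saved_model("export/estimator_model", serving_input_fn)
```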

  • One possible workaround is to train the model using the Keras API and save it in SavedModel format; loading that in tfjs-node did not give any problems (see the sketch below). – Nitin Feb 25 '20 at 07:23
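
For reference, a minimal sketch of that Keras route (toy data, shapes, and the export path are illustrative assumptions). Keras defaults to float32 throughout, so the exported SavedModel avoids the 64-bit dtypes:

```python
import numpy as np
import tensorflow as tf

xs = np.random.rand(100, 1).astype(np.float32)
ys = (2.0 * xs + 1.0).astype(np.float32)

# Keras keeps inputs/outputs at float32 by default, so the exported
# SavedModel has no int64/float64 input or output tensors.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
model.compile(optimizer="adam", loss="mse")
model.fit(xs, ys, batch_size=32, epochs=10, verbose=0)

tf.saved_model.save(model, "export/keras_model")  # assumed path
```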

1 Answer


Since INT64 and FLOAT64 are not supported in tfjs-node, the model cannot be loaded and executed directly if its input/output tensor dtypes are INT64/FLOAT64. One workaround is to wrap the model with a tensor-casting function (a sketch follows the steps below):

  1. In Python, load the original model and create a new model whose input tensors are INT32 or FLOAT32.
  2. Inside the new model, cast the input tensors to INT64/FLOAT64.
  3. Feed the cast tensors to the original model and return its output (casting the output dtypes back if needed).
  4. Export the new model in TF SavedModel format.
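
A minimal sketch of such a wrapper, assuming the original SavedModel takes a single FLOAT64 input named `x` and returns FLOAT64 outputs (the paths, input name, shape, and dtypes are assumptions to adapt):

```python
import tensorflow as tf

class CastingWrapper(tf.Module):
    """Exposes a float32 signature around a float64 SavedModel."""

    def __init__(self, model_dir):
        super().__init__()
        # Keep a reference to the loaded model so its variables are
        # tracked and re-saved along with the wrapper.
        self.model = tf.saved_model.load(model_dir)
        self.infer = self.model.signatures["serving_default"]

    @tf.function(input_signature=[
        tf.TensorSpec(shape=[None, 1], dtype=tf.float32, name="x")])
    def serve(self, x):
        # Cast the tfjs-friendly input up to what the model expects...
        outputs = self.infer(x=tf.cast(x, tf.float64))
        # ...and cast every output back down to float32.
        return {name: tf.cast(t, tf.float32) for name, t in outputs.items()}

wrapper = CastingWrapper("export/estimator_model")  # assumed path
tf.saved_model.save(wrapper, "export/wrapped_model",
                    signatures={"serving_default": wrapper.serve})
```

Keeping the loaded model as an attribute of the module is what allows its variables to be re-exported with the new float32 signature.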

With this wrapping, the model's input/output dtypes are supported in tfjs-node, though the casts may cost some numerical precision.

  • Thank you Kangyi. Will try this workaround to unblock myself. Any plans to handle this automatically inside tfjs-node's `loadSavedModel` API? – Nitin Mar 26 '20 at 21:36
  • It has not been planned yet, because it would require the node binding to parse the model's inputs/outputs to decide whether or not to cast the datatype, which is not supported yet. Long term it is a good feature to add, as it would obviously unblock many users whose models use unsupported datatypes. We will discuss it when planning the future roadmap. – Kangyi Zhang Mar 31 '20 at 18:22