I've been using AutoML Vision Edge for some image classification tasks, with great results when exporting the models in TFLite format. However, I just tried exporting the saved_model.pb file and running it with TensorFlow 2.0, and I seem to be running into some issues.

Code snippet:

import numpy as np
import tensorflow as tf
import cv2

from tensorflow import keras

my_model = tf.keras.models.load_model('saved_model')
print(my_model)
print(my_model.summary())

'saved_model' is the directory containing my downloaded saved_model.pb file. Here's what I'm seeing:

2019-10-18 23:29:08.801647: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-10-18 23:29:08.829017: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7ffc2d717510 executing computations on platform Host. Devices:
2019-10-18 23:29:08.829038: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Host, Default Version
Traceback (most recent call last):
  File "classify_in_out_tf2.py", line 81, in <module>
    print(my_model.summary())
AttributeError: 'AutoTrackable' object has no attribute 'summary'

I'm not sure if it's an issue with how I'm exporting the model, or with my code to load the model, or if these models aren't compatible with TensorFlow 2.0, or some combination.

Any help would be greatly appreciated!

Matt Schwartz
  • Just to be sure, did you use the upgrade script [1] or did you make the changes manually? [1]: https://www.tensorflow.org/guide/upgrade – Gurkomal Oct 22 '19 at 18:36
  • @Gurkomal the model was generated using Google's AutoML tool and exported as a saved model according to this documentation: https://cloud.google.com/vision/automl/docs/export-edge I'm not fully familiar with the TF 2 upgrade process...do you know if it's possible to simply upgrade the exported saved model, or would I need to update the actual model code? – Matt Schwartz Oct 23 '19 at 01:45
  • 1
    fwiw, I reached out to the AutoML team, and they said the service isn't designed to export saved models that work outside the docker container set up currently. if anyone knows how to take the exported saved model and modify it to work outside the docker container, that would be very helpful. thanks! – Matt Schwartz Nov 06 '19 at 21:18

1 Answer


I've got my saved_model.pb working outside of the Docker container (for object detection, not classification, but they should be similar; change the outputs, and maybe the inputs, for TF 1.14). Here is how:

TensorFlow 1.14.0:

image encoded as bytes

import cv2
import tensorflow as tf
# Read the image and re-encode it as JPEG bytes
img = cv2.imread(filepath)
flag, bts = cv2.imencode('.jpg', img)
inp = [bts[:, 0].tobytes()]
with tf.Session(graph=tf.Graph()) as sess:
    # Load the SavedModel into this session under the 'serve' tag
    tf.saved_model.loader.load(sess, ['serve'], 'directory_of_saved_model')
    # Fetch the detection outputs, feeding the encoded JPEG bytes
    out = sess.run([sess.graph.get_tensor_by_name('num_detections:0'),
                    sess.graph.get_tensor_by_name('detection_scores:0'),
                    sess.graph.get_tensor_by_name('detection_boxes:0'),
                    sess.graph.get_tensor_by_name('detection_classes:0')],
                   feed_dict={'encoded_image_string_tensor:0': inp})
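
The four fetched tensors follow the usual TF Object Detection API layout (a batch of counts, scores, normalized boxes, and class ids). Here's a minimal sketch of unpacking `out`, assuming the AutoML export keeps that layout (the 0.5 score threshold is an arbitrary choice):

num_detections, scores, boxes, classes = out
for i in range(int(num_detections[0])):  # outputs have a leading batch dim of 1
    if scores[0][i] < 0.5:  # arbitrary confidence cutoff
        continue
    ymin, xmin, ymax, xmax = boxes[0][i]  # box corners, normalized to [0, 1]
    print('class %d, score %.2f, box (%.2f, %.2f, %.2f, %.2f)'
          % (int(classes[0][i]), scores[0][i], ymin, xmin, ymax, xmax))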

image as numpy array

import cv2
import tensorflow as tf
import numpy as np
with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, ['serve'], 'directory_of_saved_model')
    # Read and preprocess an image.
    img = cv2.imread(filepath)
    # Run the model, feeding the raw pixels into the internal tensor
    # found with netron (see the note below) and adding a batch dimension
    out = sess.run([sess.graph.get_tensor_by_name('num_detections:0'),
                    sess.graph.get_tensor_by_name('detection_scores:0'),
                    sess.graph.get_tensor_by_name('detection_boxes:0'),
                    sess.graph.get_tensor_by_name('detection_classes:0')],
                   feed_dict={'map/TensorArrayStack/TensorArrayGatherV3:0': img[np.newaxis, :, :, :]})

I used netron to find my input.
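
If you'd rather not hunt through the graph in netron, the same information can be dumped programmatically. A minimal sketch in TF 1.14, assuming the export has a 'serving_default' signature (the usual default key, but yours may be named differently):

import tensorflow as tf

with tf.Session(graph=tf.Graph()) as sess:
    # loader.load returns the MetaGraphDef, which carries the signature map
    meta_graph_def = tf.saved_model.loader.load(sess, ['serve'], 'directory_of_saved_model')
    # 'serving_default' is the usual signature key; yours may differ
    sig = meta_graph_def.signature_def['serving_default']
    print('inputs:', {k: v.name for k, v in sig.inputs.items()})
    print('outputs:', {k: v.name for k, v in sig.outputs.items()})

The saved_model_cli tool that ships with TensorFlow prints the same signature information from the shell: `saved_model_cli show --dir directory_of_saved_model --all`.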

TensorFlow 2.0:

import cv2
import tensorflow as tf
# Re-encode the image as JPEG bytes, as in the TF 1.14 example
img = cv2.imread('path_to_image_file')
flag, bts = cv2.imencode('.jpg', img)
inp = [bts[:, 0].tobytes()]
# Load the SavedModel and grab its serving signature
loaded = tf.saved_model.load(export_dir='directory_of_saved_model')
infer = loaded.signatures["serving_default"]
out = infer(key=tf.constant('something_unique'), image_bytes=tf.constant(inp))
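
Here `out` comes back as a dict of eager tensors keyed by output name. A short sketch of inspecting the signature and pulling numpy arrays back out, assuming the output names match the TF 1.14 example:

# What the signature expects and produces
print(infer.structured_input_signature)
print(infer.structured_outputs)
# Convert one output back to a numpy array; 'detection_scores' is an
# assumed key, taken from the TF 1.14 output names above
scores = out['detection_scores'].numpy()
print(scores)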
shortcipher3
  • thanks for the thorough response - cool to hear others trying to do this! two quick follow-ups: 1) when looking at the graph in netron, it's pretty unwieldy; any suggestions on finding the right input/output sections? 2) do you know a way to run the TF 2.0 version without the tobytes() conversion? – Matt Schwartz Nov 08 '19 at 04:23
  • figured it might help to just share the model file: https://github.com/matt-virgo/TF_saved_model_test If you can help me make sense of what's happening in the graph, that would be a huge help. Thanks! – Matt Schwartz Nov 08 '19 at 17:02
  • 1) In the process of the TF 2.0 example I found a simpler way to find the inputs/outputs: from TensorFlow 2.0, call `print(infer.inputs)` and `print(infer.outputs)`. In your case this gives `Placeholder:0` and `Placeholder_1:0` as inputs; the first is the image, the second is a key name and is only required if you want the key out – shortcipher3 Nov 09 '19 at 02:36
  • 2) I don't know how to use a numpy array in TF 2.0; I haven't figured out how to specify an alternate input. The input should be the same as in the TF 1.14 example; if you figure it out, let me know – shortcipher3 Nov 09 '19 at 02:36