
Typically, to use a TF graph, it is necessary to convert raw data to numerical values. I refer to this process as a pre-processing step. For example, if the raw data is a sentence, one way to do this is to tokenize the sentence and map each word to a unique number. This pre-processing creates a sequence of numbers for each sentence, which will be the input of the model.

We also need to post-process the output of the model to interpret it, for example converting a sequence of numbers generated by the model back to words and then building a sentence.
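For concreteness, here is a minimal plain-Python sketch of these two steps done outside the graph (the tiny vocabulary is hypothetical):

# Toy pre/post-processing done outside the TF graph (hypothetical vocabulary).
word_to_id = {"<unk>": 0, "hello": 1, "world": 2}
id_to_word = {i: w for w, i in word_to_id.items()}

def preprocess(sentence):
    # Tokenize on whitespace and map each word to its id (0 for unknown words).
    return [word_to_id.get(w, 0) for w in sentence.lower().split()]

def postprocess(ids):
    # Map each id back to a word and rebuild a sentence.
    return " ".join(id_to_word.get(i, "<unk>") for i in ids)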

TF Serving is a technology recently introduced by Google to serve a TF model. My question is:

Where should pre-processing and post-processing be executed when a TF model is served using TensorFlow Serving?

Should I encapsulate the pre-processing and post-processing steps in my TF graph (e.g. using py_func or map_fn), or is there another TensorFlow technology that I am not aware of?

MajidL
  • py_func cannot be a solution, as mentioned [here](https://www.tensorflow.org/api_docs/python/tf/py_func): The tf.py_func() operation has the following known limitations: The body of the function (i.e. func) will not be serialized in a GraphDef. Therefore, you should not use this function if you need to serialize your model and restore it in a different environment. – MajidL Sep 29 '17 at 19:04
  • Did you ever resolve this? It doesn't appear as if `tf.transform` can support sophisticated tokenization yet. – Luke Nov 17 '17 at 18:56
  • From my understanding, there is no easy way to handle this issue. Ideally, you want to use TensorFlow ops (adding the necessary ones if needed) to implement the pre/post-processing steps and use tf.transform to ship these ops along with your TF graph. However, as you may guess, adding new TF ops is not a trivial task, and this adds a lot of limitations to implementing pre/post-processing steps. You can of course always do pre/post-processing outside the graph, but this is not an ideal solution. – MajidL Nov 21 '17 at 22:30
  • This is my number one problem with using tensorflow-serving right now. I made an [issue](https://github.com/tensorflow/serving/issues/663) in tensorflow-serving on this topic. – Luke Nov 22 '17 at 23:30

1 Answer


I'm running into the same issue here. Even though I'm not 100% sure yet how to use the wordDict variable (I guess you use one too, to map words to their ids), the main pre-processing and post-processing functions are defined here:

https://www.tensorflow.org/programmers_guide/saved_model

as export_outputs and serving_input_receiver_fn.

  • export_outputs

It needs to be defined in the EstimatorSpec if you are using estimators. Here is an example for a classification algorithm:

  # Inside the Estimator's model_fn. `export_output` is assumed here to refer to
  # tf.estimator.export, where ClassificationOutput lives.
  predicted_classes = tf.argmax(logits, 1)
  # Ship the class-name strings with the graph so serving can return labels.
  categories_tensor = tf.convert_to_tensor(CATEGORIES, tf.string)
  export_outputs = { "categories": export_output.ClassificationOutput(classes=categories_tensor) }
  if mode == tf.estimator.ModeKeys.PREDICT:
    return tf.estimator.EstimatorSpec(
        mode=mode,
        predictions={
            'class': predicted_classes,
            'prob': tf.nn.softmax(logits)
        },
        export_outputs=export_outputs)
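Post-processing can be kept in the graph in a similar way. As a hedged sketch (not part of the snippet above), the predicted class id can be mapped back to its label string with tf.gather and exported via PredictOutput instead of ClassificationOutput:

  # Sketch only: turn the predicted class id back into a human-readable label
  # inside the graph, so TF Serving returns strings rather than raw ids.
  predicted_label = tf.gather(categories_tensor, predicted_classes)
  export_outputs = {
      "prediction": tf.estimator.export.PredictOutput({"label": predicted_label})
  }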
  • serving_input_receiver_fn

It needs to be defined before exporting the trained estimator model. It assumes the input is a raw string and parses your input from there; you can write your own function, but I'm unsure whether you can use external variables. Here is a simple example for a classification algorithm:

def serving_input_receiver_fn():
    # The served model expects serialized tf.Example protos with a "words"
    # feature of four int64 word ids, i.e. tokenization and id mapping happen
    # on the client side before the request is sent.
    feature_spec = { "words": tf.FixedLenFeature(dtype=tf.int64, shape=[4]) }
    return tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)()

  export_dir = classifier.export_savedmodel(export_dir_base=args.job_dir,
                                            serving_input_receiver_fn=serving_input_receiver_fn)
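If you want the pre-processing itself inside the served graph (so clients can send raw text), one option, sketched here under the assumption of a TF 1.x estimator and a hypothetical Python list VOCAB of known words, is a serving_input_receiver_fn built from TF string ops and an in-graph lookup table:

def raw_text_serving_input_receiver_fn():
    # Accept raw sentences as the serving input.
    sentences = tf.placeholder(dtype=tf.string, shape=[None], name='sentences')
    # Whitespace tokenization with TF ops (only simple tokenization is easy here).
    tokens = tf.sparse_tensor_to_dense(tf.string_split(sentences), default_value='')
    # Map tokens to integer ids with a lookup table that is saved with the graph.
    # VOCAB is a hypothetical Python list of vocabulary words.
    table = tf.contrib.lookup.index_table_from_tensor(
        mapping=tf.constant(VOCAB), default_value=0)
    word_ids = table.lookup(tokens)
    return tf.estimator.export.ServingInputReceiver(
        features={"words": word_ids},
        receiver_tensors={"sentences": sentences})

Note that this sketch produces variable-length "words" features, unlike the fixed shape=[4] in the parsing example above, so the model_fn would need to handle that (e.g. by padding).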

hope it helps.

andresbravog
  • Thanks for the solution. From what I understood, this approach saves the pre-processing and post-processing steps in the graph, which means these steps are constrained to use only TensorFlow ops. This is a big constraint and makes it hard in some applications. For example, for text tokenization, one simple approach is to use a regex to split the text into tokens; converting this to TensorFlow ops is not an easy task. – MajidL Sep 29 '17 at 17:23
  • True, but it seems to be the way to go with text-processing algorithms, as described here https://medium.com/towards-data-science/how-to-do-text-classification-using-tensorflow-word-embeddings-and-cnn-edae13b3e575 by Google. – andresbravog Oct 02 '17 at 12:14
  • Thank you for the link, it is an interesting article. – MajidL Oct 03 '17 at 14:01
  • However, as far as I understood, the code assumes that the pre-processing has been done outside of TensorFlow. Check this code, for example: `tf.constant('Some title'.split())`, which tokenizes the input text in plain Python. – MajidL Oct 03 '17 at 17:43
  • I was able to put a solution together with this code. The processing is done in the first lines of the `cnn_model` function, the exported model contains the word dict in `exports/Servo/{id}/assets`, and it works as expected with text input. – andresbravog Oct 04 '17 at 05:58
  • Thanks a lot. I think that I have all the ingredients now; I just need to put them together. One important aspect that is only mentioned in the blog post is [tf.transform](https://github.com/tensorflow/transform). For my problem, I have to use tf.transform for tokenization, as the current implementation cannot handle a more sophisticated tokenizer (e.g. separating punctuation). – MajidL Oct 05 '17 at 21:11