I am struggling with my TensorFlow model. I trained it using a tf.PaddingFIFOQueue input pipeline and then mainly followed this tutorial: https://blog.metaflow.fr/tensorflow-how-to-freeze-a-model-and-serve-it-with-a-python-api-d4f3596b3adc#.dykqbzqek to freeze the graph with its variables and later load it into a library for inference.
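For reference, the freezing step I use is essentially the one from that tutorial; here is a rough sketch of it (model_dir and the output node names are specific to my setup, so treat them as placeholders):

import tensorflow as tf

def freeze_graph(model_dir, output_node_names):
    # Find the latest checkpoint and decide where to write the frozen graph
    checkpoint = tf.train.get_checkpoint_state(model_dir).model_checkpoint_path
    output_graph = model_dir + "/frozen_model.pb"

    with tf.Session(graph=tf.Graph()) as sess:
        # Restore the graph structure and the weights from the checkpoint
        saver = tf.train.import_meta_graph(checkpoint + ".meta", clear_devices=True)
        saver.restore(sess, checkpoint)

        # Convert all variables to constants, keeping only the nodes needed
        # to compute the listed outputs
        output_graph_def = tf.graph_util.convert_variables_to_constants(
            sess,
            tf.get_default_graph().as_graph_def(),
            output_node_names.split(","))

        with tf.gfile.GFile(output_graph, "wb") as f:
            f.write(output_graph_def.SerializeToString())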
My problem is that I don't really know how to run the model for prediction once it is loaded. When the input is just a placeholder, it is enough to fetch the input and output tensors and then run the model:
import tensorflow as tf

# We load the frozen graph
graph_path = ...
graph = load_graph(graph_path)

# We get the input and output tensors by name
# (the exact names depend on how the graph was built and frozen)
x = graph.get_tensor_by_name('prefix/Placeholder/inputs_placeholder:0')
y = graph.get_tensor_by_name('prefix/Accuracy/predictions:0')

# We launch a Session
with tf.Session(graph=graph) as sess:
    # Note: we didn't initialize/restore anything, everything is stored in the graph_def
    y_out = sess.run(y, feed_dict={
        x: [[3, 5, 7, 4, 5, 1, 1, 1, 1, 1]]  # < 45
    })
    print(y_out)  # [[ False ]] Yay, it works!
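The load_graph helper I use is basically the one from the tutorial; roughly like this (assuming a TF 1.x frozen .pb file and the "prefix" import name from the tutorial):

import tensorflow as tf

def load_graph(frozen_graph_filename):
    # Read the serialized GraphDef from the frozen .pb file
    with tf.gfile.GFile(frozen_graph_filename, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    # Import the GraphDef into a fresh Graph; "prefix" is prepended to every op name
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def, name="prefix")
    return graph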
In this example it all looks straightforward, but for the use case with an input pipeline I couldn't figure out how to make it work, and I didn't find anything related to it either. If someone could give me a hint on how this should be done, or on how people normally use TensorFlow in production, that would be really helpful. My current guess is sketched below.
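My best guess so far is that at inference time I would have to bypass the queue and feed data through a placeholder, for example by remapping the queue's dequeue output with the input_map argument of tf.import_graph_def. Something like the sketch below, but I'm not sure this is the right approach (the tensor name "queue/dequeue:0", the placeholder shape, and the dtype are made up here, since I don't know the correct ones for my graph):

import tensorflow as tf

def load_graph_with_placeholder(frozen_graph_filename):
    # Read the serialized GraphDef from the frozen .pb file
    with tf.gfile.GFile(frozen_graph_filename, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    with tf.Graph().as_default() as graph:
        # Placeholder meant to replace the queue's dequeue output at inference time.
        # Shape and dtype must match what the queue produced during training
        # (the values below are just placeholders).
        inputs = tf.placeholder(tf.float32, shape=[None, None, 40], name="inputs")

        # input_map substitutes the named tensor from the frozen graph
        # with our placeholder ("queue/dequeue:0" is only a guess)
        tf.import_graph_def(graph_def,
                            input_map={"queue/dequeue:0": inputs},
                            name="prefix")
    return graph, inputs

Is something along these lines how this is normally done, or is there a more standard way to serve a model that was trained with a queue-based input pipeline?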