I have a model built in TensorFlow that is already trained and accurate. How do I run it inside a project? I can load the model, but I can't figure out how to feed it a single image that my software generates.

Also, if this were a transfer-learning project, would I have to define and build the model before loading the weights?
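(For context, here is a minimal sketch of the distinction I'm asking about, as I understand it: `tf.keras.models.load_model` restores architecture and weights together, whereas `model.load_weights` only restores weights, so the identical architecture must be rebuilt first. The base network, input size, and class count below are illustrative, not from any real project.)

```python
import tensorflow as tf

def build_model():
    """Rebuild the same architecture the weights were trained with.

    Illustrative example: a MobileNetV2 base (random init, weights=None so
    nothing is downloaded) with a small classification head on top.
    """
    base = tf.keras.applications.MobileNetV2(
        include_top=False, weights=None,
        input_shape=(96, 96, 3), pooling="avg",
    )
    outputs = tf.keras.layers.Dense(5, activation="softmax")(base.output)
    return tf.keras.Model(inputs=base.input, outputs=outputs)

model = build_model()
# Weights-only checkpoints need the model defined first, then:
# model.load_weights("fine_tuned.weights.h5")  # hypothetical checkpoint path
#
# By contrast, a full SavedModel/.keras file needs no rebuilding:
# model = tf.keras.models.load_model("path/to/saved_model")
```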

All the tutorials I've found cover setting this up in the cloud or with a local server, which I would like to avoid. I'm tempted to save the data to disk and then run inference on it, but that is much slower.

Update: the environment I'm building this for is a Google Colab Jupyter notebook. The idea is to require no installation by users, which is why it must be self-contained.
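(To make the question concrete, here is roughly what I'm trying to do, sketched with a tiny stand-in model and a random array in place of my generated image; the shapes are illustrative. My understanding is that Keras models predict on batches, so a single image needs a leading batch dimension.)

```python
import numpy as np
import tensorflow as tf

# Stand-in for a trained model; in practice this would be
# tf.keras.models.load_model("path/to/model") instead.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Stand-in for the single image my software generates.
image = np.random.rand(28, 28, 1).astype("float32")

# Add a batch dimension: (28, 28, 1) -> (1, 28, 28, 1).
batch = np.expand_dims(image, axis=0)

preds = model.predict(batch, verbose=0)        # shape (1, 10)
predicted_class = int(np.argmax(preds[0]))
```

Is this the right in-process approach, or is a serving layer still needed?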

  • You will have to take a look at both [Tensorflow Serving](https://www.tensorflow.org/tfx/guide/serving) and Docker. – tornikeo Sep 13 '20 at 20:31
  • I have been looking but why is this difficult to find info on? Is it not a normal thing to run a model for inferencing in a software program? – Charles Curt Sep 14 '20 at 15:21
  • It's not hard, per se. It just requires knowledge of systems outside of the Python language. As an example, if you want to run inference in the browser, you'll have to learn tensorflow.js. – tornikeo Sep 14 '20 at 17:25
  • @CharlesCurt, please see this end-to-end tutorial, https://colab.sandbox.google.com/github/tensorflow/tfx/blob/master/docs/tutorials/serving/rest_simple.ipynb, which demonstrates how to perform inference on an image with a TensorFlow SavedModel using TensorFlow Serving, in the Google Colab environment. –  Nov 01 '20 at 15:11

0 Answers