
I have just started experimenting with deep learning and computer vision. I came across this awesome tutorial, set up the TensorFlow environment using Docker, and trained it on my own set of objects, and it gave good accuracy when I tested it out.

Now I want to make this more real-time. For example, instead of giving an image of an object as the input, I want to use a webcam and have it recognize objects with the help of TensorFlow. Can you guys point me to the right place to start with this?

Sivaprasanna Sethuraman

1 Answer


You may want to look at TensorFlow Serving so that you can decouple compute from sensors (and distribute the computation), or at the C++ API. Beyond that, TensorFlow was written with an emphasis on throughput rather than latency, so batch samples as much as you can. You don't need to run TensorFlow on every frame, so input from a webcam should definitely be within the realm of possibility. Making the network smaller and buying better hardware are also popular options.
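To make the batching and frame-skipping advice concrete, here is a minimal sketch of the idea: instead of running the network on every webcam frame, keep only every N-th frame and run inference once a full batch has accumulated. The `FrameBatcher` class and `run_inference` function are hypothetical names, and `run_inference` is a stand-in for a real TensorFlow session call (e.g. `sess.run(...)` on your loaded graph); the frame source would be a webcam capture loop in practice.

```python
# Sketch of the "don't run TensorFlow on every frame, and batch samples" idea.
# FrameBatcher is a hypothetical helper, not part of TensorFlow.

class FrameBatcher:
    def __init__(self, every_n=5, batch_size=4):
        self.every_n = every_n          # only keep every N-th webcam frame
        self.batch_size = batch_size    # run inference on batches, not singles
        self._frame_idx = 0
        self._pending = []

    def add_frame(self, frame):
        """Feed one frame; return a full batch when one is ready, else None."""
        keep = self._frame_idx % self.every_n == 0
        self._frame_idx += 1
        if keep:
            self._pending.append(frame)
        if len(self._pending) == self.batch_size:
            batch, self._pending = self._pending, []
            return batch
        return None


def run_inference(batch):
    # Stand-in for a real TensorFlow call, e.g.:
    #   sess.run(logits, feed_dict={images_placeholder: batch})
    return ["label-%d" % i for i in range(len(batch))]


if __name__ == "__main__":
    batcher = FrameBatcher(every_n=5, batch_size=4)
    results = []
    for i in range(100):  # pretend these are 100 webcam frames
        batch = batcher.add_frame("frame-%d" % i)
        if batch is not None:
            results.append(run_inference(batch))
    # 100 frames, every 5th kept -> 20 samples -> 5 batches of 4
    print(len(results))  # -> 5
```

In a real loop you would read frames with something like OpenCV's `cv2.VideoCapture(0).read()`, and the throughput-over-latency tradeoff the answer mentions shows up directly in `batch_size`: larger batches use the hardware better but delay each result.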

drpng
  • I thought TensorFlow Serving might be required. But for that we need to export our model first, right? How can I export the model that I retrained using TensorFlow (following the tutorial I mentioned in the question)? – Sivaprasanna Sethuraman Nov 13 '16 at 08:34
  • Yes, you can just export that, then the tensorflow server will be able to run the model on your input tensors. – drpng Nov 13 '16 at 16:13
  • That's the question. How can I export the model? I got a .pb file as an output. What should I do? – Sivaprasanna Sethuraman Nov 14 '16 at 14:06
  • You should have gotten a checkpoint, if following [the instructions](https://tensorflow.github.io/serving/serving_basic). – drpng Nov 14 '16 at 16:13