
I'm trying to build an app that does image recognition using the phone camera. I've seen a lot of videos where, using the camera, the app identifies where a person is, or which emotions they are showing, and so on, in real time.

I need to build an app like this. I know it's not an easy task, but I'd like to know which technologies can be used to achieve this in a mobile app. Is it TensorFlow? Are there libraries that help achieve this? Or do I need to build a full machine learning / AI app from scratch?

Sorry to make such a general question but I need some insights.

Regards

Faabass

1 Answer


If you are targeting the iOS platform, you could use the starter kit here, which has step-by-step instructions: https://developer.ibm.com/patterns/build-an-ios-game-powered-by-core-ml-and-watson-visual-recognition/

https://github.com/IBM/rainbow is the repo it references.

You train your vision model on the IBM Cloud using Watson Visual Recognition, which just needs example images to learn from. Then you download the model into your iOS app and deploy with Xcode. It will "scan" the live camera feed for the classes defined in your model.
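To give a feel for the on-device side, here is a minimal sketch of classifying camera frames with a downloaded Core ML model via Apple's Vision framework. The model class name (`RainbowClassifier`) is hypothetical; Xcode generates a class matching whatever your `.mlmodel` file is called, and the frame callback is assumed to come from an `AVCaptureVideoDataOutput` delegate.

```swift
import CoreML
import Vision

final class FrameClassifier {
    private let request: VNCoreMLRequest

    init() throws {
        // "RainbowClassifier" is a placeholder for the class Xcode
        // generates from your downloaded .mlmodel file.
        let mlModel = try RainbowClassifier(configuration: MLModelConfiguration()).model
        let vnModel = try VNCoreMLModel(for: mlModel)
        request = VNCoreMLRequest(model: vnModel) { request, _ in
            // Report the top classification for the frame.
            guard let results = request.results as? [VNClassificationObservation],
                  let top = results.first else { return }
            print("Detected \(top.identifier) (confidence \(top.confidence))")
        }
        request.imageCropAndScaleOption = .centerCrop
    }

    // Call this from your AVCaptureVideoDataOutputSampleBufferDelegate
    // for each camera frame's pixel buffer.
    func classify(pixelBuffer: CVPixelBuffer) {
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
        try? handler.perform([request])
    }
}
```

Since inference runs entirely through Core ML on the device, no network call is needed once the model has been downloaded.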

I see you tagged TensorFlow (which is not part of this starter kit), but if you're open to other technologies, I think this would be very helpful.

Matt Hill
  • Is this solution going to work offline, or do I need to send the images to IBM to be analyzed and then get a response back? – Faabass Apr 04 '20 at 16:36
  • It works offline. The model is trained on the IBM Cloud; then you download it and it works solely on the device. – Matt Hill Apr 04 '20 at 20:21