My goal would be to create an app that can have multiple users, where each user account must be secured with facial identification inside the app. I know I might not have the concept right for TensorFlow, but is there a way in Android that we can train the app to identify someone's face and determine which user it belongs to? I'm under the impression that we have to create a training model beforehand and apply it in the app, but for my goal the app will have to train dynamically to identify who its users are. Thanks in advance.
1 Answer
I'm not sure if this is the right way to do this. I know that it can be achieved with Eigenfaces, but I never tried it, so maybe you want to take that into consideration too.
Coming back to your idea: I don't know what the odds of success are, but I do know a few places where you'll meet a lot of challenges:
- Dataset. For each face that you want to recognise you will need a lot of images from different angles and as varied as possible (with glasses, different haircuts, beard, makeup, different lighting conditions, etc.). If you fail to provide a detailed dataset, two things might happen: either a face that should be recognised is not recognised, or a face that shouldn't be recognised ends up being recognised. A dataset like this is hard to create because, in the best case, you will only have a few photos of the user who registers the face. With those photos you can generate new photos in different conditions (see the sketch after this list), but this cannot be done on mobile.
- Assuming that you have a decent dataset, now you have to train the network. Here you have two options: build your model from the ground up (not such a good idea) or use a model provided by Google and retrain only the final layer of the network (also covered in the sketch after this list). As far as I know, TensorFlow doesn't have an option to do the training on mobile (it would be too expensive for the system), so you'll have to train the model somewhere else and then download it to the device. TensorFlow has a model, MobileNet, that is designed to be used on mobile devices; it is a good starting point for your network, with good accuracy and low use of system resources. You can also try Inception, but that model is designed for accuracy, has a much longer training time, and spends more time and resources while evaluating an image.
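To make the two points above more concrete, here is a minimal server-side sketch in Python/Keras, assuming a folder with a handful of photos per registered user: it augments those photos (rotations, shifts, brightness changes) and retrains only a small final layer on top of a frozen, pre-trained MobileNet. The folder layout, image size, number of users, and training settings are all placeholders, not something taken from the question.

```python
# Minimal sketch: augment a small set of user photos and retrain only
# the final layer on top of a pre-trained MobileNet (server-side).
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.image import ImageDataGenerator

NUM_USERS = 5          # hypothetical: one class per registered face
IMG_SIZE = (224, 224)  # MobileNet's default input size

# Augmentation: generate rotated/shifted/brightness-varied versions of
# the few photos each user provides, instead of asking for hundreds.
train_gen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=20,
    width_shift_range=0.1,
    height_shift_range=0.1,
    brightness_range=(0.6, 1.4),
    horizontal_flip=True,
)
train_data = train_gen.flow_from_directory(
    "faces/train",           # assumed layout: one sub-folder per user
    target_size=IMG_SIZE,
    batch_size=16,
    class_mode="categorical",
)

# Transfer learning: keep MobileNet's convolutional base frozen and
# train only a small classification head on top of it.
base = tf.keras.applications.MobileNet(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet"
)
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(NUM_USERS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_data, epochs=10)
model.save("face_model.h5")
```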
The end scenario for your app looks like this: a user registers his face by taking a few photos that are sent to your server. You then have to retrain the network each time a new face is added and download the updated model inside your app. From there, things are easy: take a photo of the user and hope that their face is handled properly.
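As a rough illustration of the "download the model inside your app" step, the retrained model can be converted to TensorFlow Lite on the server and served to the app as a small file. The sketch below assumes the Keras model saved in the previous example; the file names are placeholders.

```python
# Minimal sketch: convert the retrained Keras model to a TensorFlow Lite
# file that the Android app can download and run on device.
import tensorflow as tf

model = tf.keras.models.load_model("face_model.h5")
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # smaller file for mobile
tflite_model = converter.convert()

with open("face_model.tflite", "wb") as f:
    f.write(tflite_model)

# The app would then fetch face_model.tflite and run it with the TFLite
# Interpreter on each photo taken of the user.
```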
Maybe you want to have a look at some codelabs about TensorFlow that teach you how to train the model and run it on Android.

- I understand that the training on mobile will be really expensive for the device itself. I never thought of sending it over to the server and doing the training there (thanks for the idea). Now the first problem would be how we get a bunch of photos from the user. Users can be easily annoyed if an app asks too much from them. Thank you for your response, it helped out a lot really. – MetaSnarf Feb 03 '18 at 05:35
- Have a look at this video regarding face unlock in Android: https://www.youtube.com/watch?v=PYVMIHONbMc . I guess that you can show a similar face shape and ask the user to rotate their head in order to make the scan. While they do the rotation movement, you can take a few photos at different angles, which should provide a starting point without annoying the user. – Iulian Popescu Feb 04 '18 at 18:08