Questions tagged [firebase-mlkit]

[DEPRECATED, use `firebase-machine-learning` or `google-mlkit` instead] ML Kit for Firebase is Google's machine learning toolkit for Android and iOS that provides on-device, cloud, and custom model serving capabilities using TensorFlow Lite.

UPDATE: Since June 2020, Google has offered ML Kit's on-device APIs through a new standalone SDK. Cloud APIs, AutoML Vision Edge, and custom model deployment continue to be available via Firebase Machine Learning.


643 questions
0
votes
1 answer

iOS Firebase ML Kit Simple Audio Recognition "Failed to create a TFLite interpreter for the given model"

I have been trying to implement the Simple Audio Recognition TensorFlow sample on iOS using Firebase's ML Kit. I have successfully trained the model and converted it into a TFLite file. The model takes the audio (WAV) file path as input ([String])…
Kautham Krishna
  • 967
  • 1
  • 14
  • 33
0
votes
0 answers

ArrayList returns an empty list

While using a List, the list has values inside the function, but outside the function's scope it comes back empty. public List face_list = new ArrayList<>(); public void mlkit(FirebaseVisionImage image) { … (a sketch of the usual asynchronous fix follows below)
Chinmay Shah
  • 247
  • 2
  • 10
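
The empty list in this question is usually an asynchrony issue rather than an ML Kit bug: detectInImage() returns a Task, and the result list is only populated inside its success listener, after mlkit() has already returned. Below is a minimal Java sketch of the callback pattern, assuming the firebase-ml-vision face detector; the FaceScanner class and FaceListCallback interface are hypothetical names used for illustration.

```java
import com.google.firebase.ml.vision.FirebaseVision;
import com.google.firebase.ml.vision.common.FirebaseVisionImage;
import com.google.firebase.ml.vision.face.FirebaseVisionFace;

import java.util.ArrayList;
import java.util.List;

public class FaceScanner {

    // Hypothetical callback so callers receive the faces only after detection finishes.
    public interface FaceListCallback {
        void onFaces(List<FirebaseVisionFace> faces);
    }

    public void mlkit(FirebaseVisionImage image, FaceListCallback callback) {
        FirebaseVision.getInstance()
                .getVisionFaceDetector()
                .detectInImage(image)
                .addOnSuccessListener(faces ->
                        // The list is only valid here; reading a field right after
                        // mlkit() returns still sees the initial empty ArrayList.
                        callback.onFaces(new ArrayList<>(faces)))
                .addOnFailureListener(e -> callback.onFaces(new ArrayList<>()));
    }
}
```

Anything that needs the detected faces (updating a field, drawing an overlay) should happen inside the callback, not at the call site immediately after mlkit() returns.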
0
votes
1 answer

NativeScript Core cameraplus with ML Kit: problem taking photos

I'm using NativeScript Core OCR with the cameraplus plugin and ML Kit from Firebase. I have this code for the view:
CVO
  • 702
  • 1
  • 13
  • 31
0
votes
2 answers

Error "uninitialized classifier or invalid context" in tensoflow app demo

I built and ran the TensorFlow app demo from https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/java/demo. I replaced the model "mobilenet_quant_v1_224.tflite" with my custom model "optimized_graph.tflite" and the label file "labels.txt" with my custom… (a sketch for checking the replacement model follows below)
mducc
  • 743
  • 1
  • 9
  • 32
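
The "uninitialized classifier or invalid context" error in that demo commonly appears when the swapped-in model's input shape, data type, or quantization does not match what the demo's classifier code assumes. Below is a minimal Java sketch for inspecting the replacement model, assuming the org.tensorflow.lite Interpreter API is on the classpath; ModelInspector and logTensorInfo are hypothetical names, and the [1, 224, 224, 3] UINT8 expectation mentioned in the comment reflects the demo's bundled quantized MobileNet, not the asker's model.

```java
import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.Tensor;

import java.io.File;
import java.util.Arrays;

public class ModelInspector {

    /** Prints the replacement model's tensor shapes and types so the demo's
     *  image buffers and label list can be matched to them. */
    public static void logTensorInfo(File modelFile) {
        Interpreter interpreter = new Interpreter(modelFile);
        Tensor input = interpreter.getInputTensor(0);
        Tensor output = interpreter.getOutputTensor(0);

        // A float model fed through the demo's quantized (UINT8) code path, or an
        // input shape other than the expected [1, 224, 224, 3], is a common cause
        // of "uninitialized classifier or invalid context".
        System.out.println("input shape:  " + Arrays.toString(input.shape())
                + ", type: " + input.dataType());
        System.out.println("output shape: " + Arrays.toString(output.shape())
                + ", type: " + output.dataType());

        interpreter.close();
    }
}
```

The output tensor's last dimension should also match the number of lines in the replacement labels.txt.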
0
votes
1 answer

NativeScript Core: convert ImageAsset into ImageSource for ML Kit and the camera interface

I'm doing an online version of text recognition with NativeScript Core and Firebase ML Kit using the nativescript-camera plugin (I don't know if there's a better plugin for this). At the moment I have a button with this event: exports.onCapture =…
CVO
  • 702
  • 1
  • 13
  • 31
0
votes
1 answer

How to retrieve Firebase Task results in a separate UI Activity

I am currently experimenting with ML Kit and the local Firebase framework for detecting and analysing faces. I have a gallery activity where the user can choose an image and is directed to another activity where the selected image is displayed and… (a sketch of passing results through an Intent follows below)
bennyOoO
  • 358
  • 2
  • 11
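
Since a Task and its FirebaseVisionFace results cannot be handed to another activity directly, one workable pattern is to extract plain, parcelable values inside the success listener and pass those through the Intent. Below is a minimal Java sketch, assuming the firebase-ml-vision face detector; GalleryActivity, detectAndForward, ResultActivity, and the extra keys are hypothetical names for illustration.

```java
import android.app.Activity;
import android.content.Intent;
import android.graphics.Rect;

import com.google.firebase.ml.vision.FirebaseVision;
import com.google.firebase.ml.vision.common.FirebaseVisionImage;

public class GalleryActivity extends Activity {

    private void detectAndForward(FirebaseVisionImage image) {
        FirebaseVision.getInstance()
                .getVisionFaceDetector()
                .detectInImage(image)
                .addOnSuccessListener(faces -> {
                    // Extract simple, parcelable values inside the listener; the Task
                    // and the FirebaseVisionFace objects themselves cannot ride in an Intent.
                    int faceCount = faces.size();
                    Rect firstBox = faces.isEmpty() ? null : faces.get(0).getBoundingBox();

                    Intent intent = new Intent(this, ResultActivity.class);
                    intent.putExtra("face_count", faceCount);
                    intent.putExtra("first_face_box", firstBox);
                    startActivity(intent);
                });
    }

    // Hypothetical destination activity; a real one would read the extras in onCreate().
    public static class ResultActivity extends Activity {
    }
}
```

If the second screen also needs the image itself, pass its Uri through the Intent rather than the decoded bitmap.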
0
votes
1 answer

Building a compact subset of the Vision model for ML Kit

The Vision model in ML Kit's Cloud API has all the labels I need; in fact I need only 100 of its 10,000 labels. Can we retrain a compact version of that model to detect only those 100 labels and deploy it on an Android device so the app can run without…
0
votes
1 answer

Error when adding Firebase to my iOS project without CocoaPods

I am trying to add Firebase to my project and especially their barcode reader frameworks (Vision). I have followed these instructions: https://www.mokacoding.com/blog/setting-up-firebase-without-cocoapods/ I have added the folder that contains what…
scourGINHO
  • 699
  • 2
  • 12
  • 31
0
votes
0 answers

Using Firebase/MLVisionLabelModel in iOS project generates console warnings

I have been recently messing around with the Firebase Beta ML Kit in my iOS project, and it's been quite an enjoyable experience so far. However, when I include the 'Firebase/MLVisionLabelModel' pod, I get tons of warnings that look like…
Jad Ghadry
  • 245
  • 2
  • 25
0
votes
0 answers

How to map face contour points onto a real-time face, using the points we get from Firebase ML Kit for iOS

How do I map the 2D Vision points array to create a face map after getting the points from faceDetector? faceDetector.process(visionImage) { features, error in guard error == nil, let features = features, !features.isEmpty else { return … (a sketch of the coordinate mapping follows below)
Sumeet.Jain
  • 1,533
  • 9
  • 26
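
The question is about iOS, but the mapping step is the same idea on both platforms: each contour point comes back in the coordinate space of the processed image, so it has to be scaled (and, for a front camera, mirrored) into the preview view's coordinate space before drawing. Since the snippets in this listing use Java, here is a minimal sketch of that transform with the equivalent Android firebase-ml-vision contour API; ContourMapper and drawContours are hypothetical names.

```java
import android.graphics.Canvas;
import android.graphics.Paint;

import com.google.firebase.ml.vision.common.FirebaseVisionPoint;
import com.google.firebase.ml.vision.face.FirebaseVisionFace;
import com.google.firebase.ml.vision.face.FirebaseVisionFaceContour;

public class ContourMapper {

    /** Draws every contour point of one face onto an overlay canvas, scaling
     *  from image coordinates to the overlay's coordinates. */
    public static void drawContours(Canvas canvas, FirebaseVisionFace face,
                                    int imageWidth, int imageHeight, Paint paint) {
        float scaleX = canvas.getWidth() / (float) imageWidth;
        float scaleY = canvas.getHeight() / (float) imageHeight;

        FirebaseVisionFaceContour contour = face.getContour(FirebaseVisionFaceContour.ALL_POINTS);
        for (FirebaseVisionPoint point : contour.getPoints()) {
            float x = point.getX() * scaleX;
            // For a front-facing camera the x coordinate is usually mirrored:
            // x = canvas.getWidth() - x;
            canvas.drawCircle(x, point.getY() * scaleY, 4f, paint);
        }
    }
}
```

Contour points are only returned when the detector options request contours (contour mode set to all contours).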
0
votes
1 answer

ML Kit FaceDetectionProcessor not detecting the ear landmarks

Issue details: I tried the ML Kit face detection sample app from here but was not able to receive landmark data for the ears while running the LiveDataPreviewActivity. The call face.getLandmark always returns null for… (a sketch of the detector options follows below)
Marian
  • 1
  • 3
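
Landmarks are only returned when the detector options ask for them, and individual landmarks can still be null when that part of the face is not visible in the frame. Below is a minimal Java sketch of options that request all landmarks, using the firebase-ml-vision API the quickstart is built on; FaceDetectorFactory and createWithLandmarks are hypothetical names.

```java
import com.google.firebase.ml.vision.FirebaseVision;
import com.google.firebase.ml.vision.face.FirebaseVisionFaceDetector;
import com.google.firebase.ml.vision.face.FirebaseVisionFaceDetectorOptions;

public class FaceDetectorFactory {

    public static FirebaseVisionFaceDetector createWithLandmarks() {
        // ALL_LANDMARKS must be requested explicitly; even then,
        // face.getLandmark(FirebaseVisionFaceLandmark.LEFT_EAR) can still return null
        // when the ear is occluded or the face is small in the frame.
        FirebaseVisionFaceDetectorOptions options =
                new FirebaseVisionFaceDetectorOptions.Builder()
                        .setPerformanceMode(FirebaseVisionFaceDetectorOptions.ACCURATE)
                        .setLandmarkMode(FirebaseVisionFaceDetectorOptions.ALL_LANDMARKS)
                        .build();
        return FirebaseVision.getInstance().getVisionFaceDetector(options);
    }
}
```

Ears in particular are often occluded by hair or cropped out of the frame, so a null ear landmark can still occur with these options.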
0
votes
0 answers

Firebase ML Vision not working well with images captured using the device camera

I am referring to the sample code (https://codelabs.developers.google.com/codelabs/mlkit-ios/#0) given here to detect text in an image. If I run this code with my invoice image (a scanned document) it works fine. But when I capture an image of the invoice…
Pooja M. Bohora
  • 1,311
  • 1
  • 14
  • 42
0
votes
1 answer

How to check if a scanned document contains an address

I need to scan documents and check whether they contain specific data. To put it "simply", assume I need to find out whether a scanned invoice contains a specific address. The address to search for could be written in different ways compared to how it's written… (a sketch of one matching approach follows below)
Not Important
  • 762
  • 6
  • 22
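
ML Kit's text recognizer only returns the raw text; deciding whether an address is "in there" is ordinary string matching afterwards. Below is a minimal Java sketch of one tolerant approach, normalizing both strings and requiring most of the address's tokens to appear in the recognized text; AddressMatcher, the regex, and the 0.8 threshold are all illustrative choices, not anything prescribed by ML Kit.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Locale;
import java.util.Set;

public class AddressMatcher {

    /** Lower-cases, strips punctuation, and collapses whitespace so "St." and "st" compare equal. */
    private static Set<String> tokens(String text) {
        String normalized = text.toLowerCase(Locale.ROOT).replaceAll("[^a-z0-9 ]", " ");
        return new HashSet<>(Arrays.asList(normalized.trim().split("\\s+")));
    }

    /** Returns true when most of the target address's tokens appear in the recognized text. */
    public static boolean containsAddress(String recognizedText, String targetAddress) {
        Set<String> textTokens = tokens(recognizedText);
        Set<String> addressTokens = tokens(targetAddress);
        long hits = addressTokens.stream().filter(textTokens::contains).count();
        // 0.8 is an arbitrary threshold for this sketch; tune it against real invoices.
        return !addressTokens.isEmpty() && hits / (double) addressTokens.size() >= 0.8;
    }
}
```

For heavier OCR noise, a per-token edit-distance comparison would be more forgiving than exact token containment.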
0
votes
1 answer

Is there a way to take only the highest-confidence result in VisionLabelDetector?

To achieve this, I was thinking about taking only one result, the top one. I checked the documentation and VisionCloudDetectorOptions has a maxResults variable, so if I set it to 1 my goal is met, but this only works with the cloud-based… (a sketch of a client-side alternative follows below)
Quang Huy
  • 59
  • 11
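
When maxResults is unavailable (the question notes it only applies to the cloud detector options), the usual workaround is to take the maximum-confidence label on the client. The question concerns the iOS VisionLabelDetector, but since the snippets in this listing use Java, here is a minimal sketch with the equivalent Android firebase-ml-vision on-device labeler; TopLabelFinder and printTopLabel are hypothetical names.

```java
import com.google.firebase.ml.vision.FirebaseVision;
import com.google.firebase.ml.vision.common.FirebaseVisionImage;
import com.google.firebase.ml.vision.label.FirebaseVisionImageLabel;

import java.util.Comparator;

public class TopLabelFinder {

    public static void printTopLabel(FirebaseVisionImage image) {
        FirebaseVision.getInstance()
                .getOnDeviceImageLabeler()
                .processImage(image)
                .addOnSuccessListener(labels ->
                        // Keep only the single highest-confidence label on the client,
                        // since the on-device detector has no maxResults setting.
                        labels.stream()
                                .max(Comparator.comparingDouble(FirebaseVisionImageLabel::getConfidence))
                                .ifPresent(top -> System.out.println(
                                        top.getText() + " @ " + top.getConfidence())));
    }
}
```

The same take-the-max idea applies on iOS by sorting the returned labels by their confidence property.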
0
votes
3 answers

Android: Getting Error:Execution failed for task ':app:processDebugGoogleServices' after adding a new dependency

Full error trace: Error:Execution failed for task ':app:processDebugGoogleServices'. > Please fix the version conflict either by updating the version of the google-services plugin (information about the latest version is available at…