I want to use Firebase ML Kit (for text recognition) in my iOS app. I downloaded and tested the sample app, and it works. But in my own project, where I capture an image with the iPhone camera and pass it to the ML Kit text-recognition call, ML Kit returns no result. Can you please tell me why it works with the default images but fails to produce a result with a camera-captured image?
-
What's the error? – Saurabh Jain Jul 13 '18 at 06:35
-
Getting no error, but it isn't recognising any text. When I capture an image and run ML Kit's text recognition on it, it recognises nothing. If I save that same image into my project folder and run text recognition on the saved copy, it shows the recognised text. – thousand Jul 13 '18 at 08:33
-
I also saved the image to my iPhone's camera roll and then picked it from the photo library, and text recognition worked perfectly with that. But I can't figure out why it doesn't work with the image taken directly from the camera. Does Firebase ML Kit require a specific format, image size, or anything else to recognise the text? – thousand Jul 13 '18 at 10:03
-
Okay, it's done. :) Found the error and fixed it! – thousand Jul 16 '18 at 01:17
-
Wow, Congrats.. Happy coding! – Saurabh Jain Jul 16 '18 at 11:59
-
What was the issue you were facing? – anoop4real Jul 17 '18 at 10:24
-
This can be an orientation issue. Fix: https://stackoverflow.com/a/58013149/1522584 – Abhijith Sep 19 '19 at 14:21
2 Answers
I was also facing issues with text detection using ML Kit on iOS. I had integrated the SDKs as per the documentation, but text was detected wrongly and the results were bad; I was taking photos with an iPhone 6s.
I then realised that some preprocessing of the image is really required. I finally found the relevant code in the Google samples: implement the method named below from the sample, which resizes and scales the image according to the image view. After adding that code, detection started working properly.
(PS: I am not sure whether I can post the code here, as it is in Google's repo, so I'm giving the method name and the link instead.)
private func updateImageView(with image: UIImage)
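The preprocessing described above can be sketched roughly as follows. This is my own minimal illustration of the idea, not the actual code from Google's sample; the extension name `scaledToFit` and the exact maths are assumptions:

```swift
import UIKit

// Sketch of the kind of preprocessing the sample's updateImageView(with:) does:
// redraw the captured photo at a size that fits the image view before handing
// it to ML Kit. The helper name here is illustrative.
extension UIImage {
    /// Returns a copy of the image scaled (aspect-fit) into the given bounds.
    func scaledToFit(_ bounds: CGSize) -> UIImage {
        // Pick the smaller ratio so the whole image fits inside the bounds.
        let ratio = min(bounds.width / size.width, bounds.height / size.height)
        let target = CGSize(width: size.width * ratio, height: size.height * ratio)
        // Use ...WithOptions with the original scale to preserve resolution.
        UIGraphicsBeginImageContextWithOptions(target, false, scale)
        defer { UIGraphicsEndImageContext() }
        draw(in: CGRect(origin: .zero, size: target))
        return UIGraphicsGetImageFromCurrentImageContext() ?? self
    }
}
```

Usage would be something like `let processed = capturedImage.scaledToFit(imageView.bounds.size)` before creating the `VisionImage`.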

It can be an image orientation issue. Fix the orientation before creating the VisionImage. This worked for me:
let fixedImage = pickedImage.fixImageOrientation()
Add this extension:
extension UIImage {
    /// Redraws the image so its pixel data matches the .up orientation,
    /// which avoids orientation-related detection failures.
    func fixImageOrientation() -> UIImage {
        // ...WithOptions with the original scale preserves resolution;
        // plain UIGraphicsBeginImageContext renders at scale 1.0.
        UIGraphicsBeginImageContextWithOptions(size, false, scale)
        defer { UIGraphicsEndImageContext() }
        draw(at: .zero)
        return UIGraphicsGetImageFromCurrentImageContext() ?? self
    }
}
Another solution is to provide the orientation as metadata to the vision image, as shown in this link.
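The metadata route looks roughly like this. This is a sketch assuming the Firebase `FirebaseMLVision` SDK of that era; the helper function name is my own, and the EXIF-style mapping from `UIImage.Orientation` to `VisionDetectorImageOrientation` follows the pattern shown in the Firebase docs:

```swift
import UIKit
import FirebaseMLVision

// Instead of redrawing the pixels, tell the detector how they are oriented.
// `pickedImage` is assumed to be the photo captured from the camera.
func makeVisionImage(from pickedImage: UIImage) -> VisionImage {
    let metadata = VisionImageMetadata()
    switch pickedImage.imageOrientation {
    case .up:            metadata.orientation = .topLeft
    case .down:          metadata.orientation = .bottomRight
    case .left:          metadata.orientation = .leftBottom
    case .right:         metadata.orientation = .rightTop
    case .upMirrored:    metadata.orientation = .topRight
    case .downMirrored:  metadata.orientation = .bottomLeft
    case .leftMirrored:  metadata.orientation = .leftTop
    case .rightMirrored: metadata.orientation = .rightBottom
    @unknown default:    metadata.orientation = .topLeft
    }
    let visionImage = VisionImage(image: pickedImage)
    visionImage.metadata = metadata
    return visionImage
}
```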
