
I have an app which uses Tesseract for OCR. Until now I offered a manual cropping option: the user cropped the image taken with the camera, and I passed the cropped image to Tesseract. Now, in iOS 8, there is CIDetector; using it I detect a rectangle and pass the result to Tesseract.

**Problem**

The problem is that when I pass this cropped image to Tesseract, it does not read the text properly.

I believe the reason for the inaccuracy is the resolution/scale of the cropped image.

There are a couple of things I am unclear about:

  1. The cropped image is a CIImage, which I convert to a UIImage. When I check the size of that image it is very low (320×468), which was not the case in my previous implementation, where the camera image used to be more than 3000×2000 pixels. Is the image losing its scale during the conversion from CIImage to UIImage? (See the sketch after this list.)

  2. Or is the problem that I am picking the image differently and not taking a picture with the camera?
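For context, a minimal sketch of the two conversion paths I am comparing (the name `renderedUIImage` is just for illustration):

```swift
import CoreImage
import UIKit

// Sketch only: compare a direct wrap with an explicit render through a
// CIContext. A CGImage-backed UIImage reports the true pixel dimensions,
// unlike UIImage(ciImage:), which carries no rendered bitmap of its own.
func renderedUIImage(from ciImage: CIImage) -> UIImage? {
    let context = CIContext() // expensive to create; reuse it in real code
    guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else {
        return nil
    }
    return UIImage(cgImage: cgImage) // size here matches the CIImage's extent
}

// Usage:
// let direct = UIImage(ciImage: croppedCIImage)         // size may mislead
// let rendered = renderedUIImage(from: croppedCIImage)  // size == extent
```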

I have followed this link for live detection: Link

Abhinandan Sahgal

1 Answer


The detector mentioned in the article does not return a rectangle; it returns four corner points, which you need to run through the CIFilter "CIPerspectiveCorrection". The output of that filter can then be passed to Tesseract.
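A rough sketch of that pipeline in Swift (the name `perspectiveCorrected` is mine; error handling is omitted):

```swift
import CoreImage

// Sketch: detect a rectangle and straighten it with CIPerspectiveCorrection.
// The resulting CIImage is what you would render and hand to Tesseract.
func perspectiveCorrected(_ image: CIImage) -> CIImage? {
    let detector = CIDetector(ofType: CIDetectorTypeRectangle,
                              context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    guard let feature = detector?.features(in: image).first as? CIRectangleFeature else {
        return nil // no rectangle found
    }
    // CIPerspectiveCorrection wants the four corner points, not a CGRect.
    return image.applyingFilter("CIPerspectiveCorrection", parameters: [
        "inputTopLeft":     CIVector(cgPoint: feature.topLeft),
        "inputTopRight":    CIVector(cgPoint: feature.topRight),
        "inputBottomLeft":  CIVector(cgPoint: feature.bottomLeft),
        "inputBottomRight": CIVector(cgPoint: feature.bottomRight)
    ])
}
```

Also render the corrected CIImage through a CIContext rather than wrapping it directly with UIImage(ciImage:); that keeps the full pixel resolution, which should help with the low-size issue you describe.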

Codger