
I'm currently developing a computer-vision application for blind people, and we have now decided to move our work to mobile apps.

Since OpenCV 2.4, building it for iOS has been quite simple thanks to the two provided scripts.

The main problem I'm facing is processing time. I have read that most image processing on the iPhone is done with OpenGL, so I was wondering whether it is possible to build OpenCV with OpenGL support when targeting iOS.

The processing time for even very simple operations in OpenCV (on the iPhone) is too long for real-time apps, especially apps dedicated to blind people, who need rapid feedback about their surroundings.
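For reference, this is roughly how I measure a single call on the device; a minimal sketch, where the Canny thresholds and the grayscale input frame are just placeholders, not our actual pipeline:

    // Objective-C++ (.mm) sketch: time one OpenCV call on the device.
    #include <cstdio>
    #include <opencv2/core/core.hpp>
    #include <opencv2/imgproc/imgproc.hpp>

    double timeCanny(const cv::Mat &gray)   // 8-bit single-channel frame
    {
        cv::Mat edges;
        int64 start = cv::getTickCount();
        cv::Canny(gray, edges, 50, 150);    // one "very simple" operation
        double ms = (cv::getTickCount() - start) * 1000.0 / cv::getTickFrequency();
        std::printf("Canny took %.1f ms\n", ms);
        return ms;
    }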

Could someone help me?


1 Answer


It is not possible to build OpenCV with OpenGL support for iOS (or OS X); it is unconditionally disabled by OpenCV's build scripts.

In any case, OpenGL is not used for acceleration inside OpenCV, so you would not get any speedup even after editing the sources.

– Andrey Kamaev
  • Uhm... unfortunately that is what I suspected... So, can you suggest other ways to process the images? I suppose I would have to code Canny, Hough, and the cascade classifiers myself... – Altair Jones Jun 16 '12 at 17:23
  • Mmmm... sounds terrifying :) Do all computer-vision apps for iOS implement the CV functions and operators themselves? – Altair Jones Jun 16 '12 at 17:23
  • I recently had to do 'computer vision' on iOS. I did some prototyping on a PC with OpenCV, then extracted portions of OpenCV's source code, stripping out the generality that was useless for the algorithms I used, and added GCD parallelization (a rough sketch of that idea follows below). I got a nice speedup without rewriting too many wheels. – rotoglup Jun 16 '12 at 17:28
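For what it's worth, here is a minimal sketch of the GCD approach described in the last comment: split the image into horizontal stripes and process them in parallel with dispatch_apply. The per-pixel threshold kernel is only a hypothetical stand-in for whatever routine you extract from OpenCV; compile it as Objective-C++ (.mm).

    // Hypothetical sketch: per-row work parallelized across cores with GCD.
    #include <algorithm>
    #include <dispatch/dispatch.h>
    #include <opencv2/core/core.hpp>

    void parallelThreshold(const cv::Mat &src, cv::Mat &dst, uchar level)
    {
        CV_Assert(src.type() == CV_8UC1);
        dst.create(src.size(), src.type());

        const size_t stripes = 8;  // number of parallel chunks; tune for the device
        const int rowsPerStripe = (src.rows + (int)stripes - 1) / (int)stripes;

        // dispatch_apply blocks until all stripes are done, so writing into
        // dst from the block is safe here.
        dispatch_apply(stripes,
                       dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0),
                       ^(size_t i) {
            const int rowStart = (int)i * rowsPerStripe;
            const int rowEnd   = std::min(rowStart + rowsPerStripe, src.rows);
            for (int y = rowStart; y < rowEnd; ++y) {
                const uchar *s = src.ptr<uchar>(y);
                uchar *d = dst.ptr<uchar>(y);
                for (int x = 0; x < src.cols; ++x)
                    d[x] = s[x] > level ? 255 : 0;   // stand-in per-pixel kernel
            }
        });
    }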