I'm searching for a library similar to ARToolKit. It needs to support processing a single image and searching that image for a marker. If it finds one, I want it to return the camera angle / viewpoint based on the marker. I couldn't find anything via Google; does something like this exist?
-
How do you know you are a real geek? The whole world’s attention is in South Africa witnessing the kick-off of the final match of the biggest footballing event, and you are posting questions on StackOverflow at that very minute. I know, I know. You broke your vuvuzela. – Jul 11 '10 at 18:34
-
@Josto: Are you talking about that sports event that seems to take place at the moment? That's about football? Good to know when talking with people somewhere else than SO. – Nikolai Ruhe Jul 11 '10 at 19:26
-
What keeps you from using ARToolKit? – Nikolai Ruhe Jul 11 '10 at 19:26
-
@Nikolai: It doesn't support processing a single image and the project is kinda dead. – Robin Jul 11 '10 at 19:54
1 Answer
I'm answering this so anyone stumbling across this problem / question doesn't have to do the same research I did.
Apparently, processing a single image doesn't quite fit the definition of augmented reality. There is another keyword for this which I have already forgotten (sorry), but if you want to use Google, don't focus on AR-related software only.
To solve my issue I used two approaches. The first was to use ARToolKit together with GStreamer and ffmpeg: I turned my single image into one second of video with ffmpeg, exported the ARTOOLKIT_CONFIG string and then let ARToolKit process that video. This wasn't great, as it is very limited and I can't get the rendered image back without parsing ARToolKit's OpenGL output.
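For reference, a rough sketch of that workaround is below. The filenames, the exact ffmpeg flags and especially the GStreamer pipeline string are assumptions on my part and depend on the ffmpeg version and ARToolKit video module you have, so treat it as illustrative only:

```cpp
#include <cstdlib>  // std::system, setenv

int main()
{
    // Loop a single still image into one second of video (25 fps) with ffmpeg.
    // "marker.png" / "marker.avi" are placeholders; flags may differ between versions.
    std::system("ffmpeg -loop 1 -i marker.png -t 1 -r 25 marker.avi");

    // Point ARToolKit's video module at that file. ARToolKit reads its video
    // configuration from the ARTOOLKIT_CONFIG environment variable; the pipeline
    // string below is only illustrative and depends on the GStreamer module you built.
    setenv("ARTOOLKIT_CONFIG",
           "filesrc location=marker.avi ! decodebin ! ffmpegcolorspace ! "
           "video/x-raw-rgb,bpp=24 ! identity name=artoolkit ! fakesink",
           1);

    // ... from here on, run the usual ARToolKit detection loop on that "video" ...
    return 0;
}
```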
My second approach, however, satisfied me very much: I used the OpenCV library to detect a marker. An example of something like that can be found here: http://dasl.mem.drexel.edu/~noahKuntz/openCVTut10.html I then exported the recognized camera parameters and transformed them to fit the matrices used by the Irrlicht engine (I also tried Ogre, but Irrlicht seemed nicer to me), and then rendered my object onto the image with it. I can then obtain the final result with Irrlicht's transformDataToImage() function.
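To give an idea of the camera-parameter step, here is a minimal sketch of how the marker pose could be computed with OpenCV's C++ API and packed into a 4x4 matrix for the renderer. The marker corner coordinates, the camera intrinsics and the axis flip for Irrlicht's left-handed system are placeholder assumptions, not my exact code:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    // Four corners of the physical marker in its own coordinate system
    // (an 80 mm square centered at the origin) -- adjust to your marker.
    std::vector<cv::Point3f> objectPoints = {
        {-40.f,  40.f, 0.f}, { 40.f,  40.f, 0.f},
        { 40.f, -40.f, 0.f}, {-40.f, -40.f, 0.f}
    };

    // The same corners as found in the image, in the same order.
    // In reality these come from your marker detection step.
    std::vector<cv::Point2f> imagePoints = {
        {312.f, 190.f}, {430.f, 200.f}, {425.f, 310.f}, {305.f, 300.f}
    };

    // Camera intrinsics from calibration (placeholder values).
    cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) <<
        800, 0, 320,
        0, 800, 240,
        0, 0, 1);
    cv::Mat distCoeffs = cv::Mat::zeros(5, 1, CV_64F);

    // Estimate the marker pose relative to the camera.
    cv::Mat rvec, tvec;
    cv::solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);

    // Convert the rotation vector into a 3x3 rotation matrix.
    cv::Mat R;
    cv::Rodrigues(rvec, R);

    // Pack rotation + translation into a row-major [R|t] 4x4 matrix. OpenCV is
    // right-handed (y down); Irrlicht is left-handed (y up), so one axis has to
    // be negated -- the y row below is only an example, verify for your scene.
    float m[16] = {0};
    for (int r = 0; r < 3; ++r)
        for (int c = 0; c < 3; ++c)
            m[r * 4 + c] = static_cast<float>(R.at<double>(r, c));
    m[3]  = static_cast<float>(tvec.at<double>(0));
    m[7]  = static_cast<float>(tvec.at<double>(1));
    m[11] = static_cast<float>(tvec.at<double>(2));
    m[15] = 1.f;
    for (int c = 0; c < 4; ++c)
        m[1 * 4 + c] = -m[1 * 4 + c];  // example axis flip

    // Note: Irrlicht's matrix4 keeps the translation in the bottom row, so this
    // layout has to be transposed (or built that way directly) before loading it
    // as the engine's view transform.
    return 0;
}
```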
The only downside is that I need an X server running to get the rendered picture, but I can live with that.
PS: Don't try this with the plain square marker found in the tutorial I mentioned above. It isn't possible to detect the exact rotation of such a marker, for the obvious reason that a square is rotationally symmetric, so its orientation is ambiguous in steps of 90°.

-
I would really appreciate it if you could point out how you transformed the camera parameters to fit Irrlicht's matrices. How would you construct the projective matrix (used in `camera->setProjectionMatrix()`) considering the fact that Irrlicht uses a left-handed coordinate system? Thanks. – Dragos Stanciu Mar 17 '13 at 19:40