
I am building a React Native app that uses OpenCV under the hood. What I want to do is project an image onto a frame detected in the phone's camera feed. I have the four corner points to pass to cv::getPerspectiveTransform and could simply use cv::warpPerspective to overlay the image.
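To make that concrete, here is a minimal sketch of the warp-based overlay (the function name, variable names, and the transparent-border compositing trick are placeholders of my own, not settled code):

```cpp
#include <opencv2/imgproc.hpp>
#include <vector>

// Sketch: composite `overlayImage` onto `cameraFrame` at the four detected
// corner points. Names here are placeholders.
cv::Mat overlayOntoFrame(const cv::Mat &cameraFrame,
                         const cv::Mat &overlayImage,
                         const std::vector<cv::Point2f> &frameCorners)
{
    // Corners of the overlay image in its own pixel coordinates,
    // in the same order as the detected frame corners.
    std::vector<cv::Point2f> srcCorners = {
        {0.f, 0.f},
        {static_cast<float>(overlayImage.cols), 0.f},
        {static_cast<float>(overlayImage.cols), static_cast<float>(overlayImage.rows)},
        {0.f, static_cast<float>(overlayImage.rows)}};

    // 3x3 homography mapping the overlay onto the detected frame.
    cv::Mat H = cv::getPerspectiveTransform(srcCorners, frameCorners);

    // Warp the overlay into the camera frame; BORDER_TRANSPARENT leaves the
    // destination pixels outside the warped region untouched, so the camera
    // frame shows through around the overlay.
    cv::Mat result = cameraFrame.clone();
    cv::warpPerspective(overlayImage, result, H, cameraFrame.size(),
                        cv::INTER_LINEAR, cv::BORDER_TRANSPARENT);
    return result;
}
```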

However, I would much rather get the transformation data and overlay the image in React Native land using the StyleSheet transform property (I also want to do some general logic there). From my understanding, I can get 3D translation and rotation data through some decomposition of the transform, but that requires a camera matrix, and I don't want my users to have to go through the calibration steps.
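For reference, the decomposition route I mean looks roughly like the sketch below: cv::decomposeHomographyMat needs an intrinsic matrix K, and here K is just guessed (focal length ≈ image width, principal point at the image center) rather than coming from a real calibration, which is exactly the part I'd like to avoid:

```cpp
#include <opencv2/calib3d.hpp>
#include <vector>

// Sketch only: recover candidate rotations/translations from the homography H,
// using a guessed intrinsic matrix instead of a calibrated one.
void decomposePose(const cv::Mat &H, int imageWidth, int imageHeight)
{
    // Rough intrinsics guess: focal length ~= image width in pixels,
    // principal point at the image center. How good the recovered pose is
    // depends on how close this guess is to the real camera.
    double f = static_cast<double>(imageWidth);
    cv::Mat K = (cv::Mat_<double>(3, 3) << f, 0, imageWidth / 2.0,
                                           0, f, imageHeight / 2.0,
                                           0, 0, 1);

    std::vector<cv::Mat> rotations, translations, normals;
    int solutions = cv::decomposeHomographyMat(H, K, rotations, translations, normals);

    // Up to four candidate (R, t, n) solutions come back; the physically valid
    // one still has to be selected (e.g. the plane normal pointing towards the camera).
    for (int i = 0; i < solutions; ++i) {
        cv::Mat rvec;
        cv::Rodrigues(rotations[i], rvec);  // rotation matrix -> axis-angle
        // rvec and translations[i] would be the values handed to the UI layer.
    }
}
```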

Is there some way of getting this information that would prevent my users from having to do the calibration themselves? Or am I doomed to make it a fun little step in the whole process?


Note: For clarification, I am asking how to get the data, not how to send that data to React.

MrGVSV
  • For warping a 2D image, it's just a 3x3 transform matrix; I don't think any camera calibration is needed. Could you provide more specific code, your environment, and anything else that might help in understanding the problem? – flankechen Aug 27 '20 at 06:52
  • @flankechen I'm trying to get individual components from the transform matrix (i.e. scale, rotation, etc.) so that I can pass it into StyleSheet like `transform: [{rotationX:#},...]`. I'm not sure what code would be best to show since I'm asking more about general alternatives rather than code examples. But I can add anything that might help. – MrGVSV Aug 27 '20 at 17:23
  • For the 2D case, no camera matrix is needed; all of the information is in the 3x3 transform matrix. Did you mean the rotation as an Euler angle, plus scale and translation? You can get those from the 3x3 matrix directly (see the sketch just below these comments): https://math.stackexchange.com/questions/13150/extracting-rotation-scale-values-from-2d-transformation-matrix/13165 – flankechen Aug 28 '20 at 02:54
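For completeness, a minimal sketch of the 2D decomposition linked above, applied to the affine part of the 3x3 matrix (the perspective terms in the last row are dropped, so this is only an approximation for a full homography):

```cpp
#include <opencv2/core.hpp>
#include <cmath>

// Translation, rotation and scale recovered from the affine part of a
// 3x3 transform, following the linked math.stackexchange answer.
struct Transform2D {
    double tx, ty;     // translation in pixels
    double rotation;   // rotation in radians
    double scaleX, scaleY;
};

Transform2D decomposeAffinePart(const cv::Mat &H)
{
    // Normalize so the bottom-right element is 1
    // (getPerspectiveTransform returns a CV_64F matrix).
    cv::Mat M = H / H.at<double>(2, 2);

    double a = M.at<double>(0, 0), c = M.at<double>(0, 1);
    double b = M.at<double>(1, 0), d = M.at<double>(1, 1);

    Transform2D t;
    t.tx = M.at<double>(0, 2);
    t.ty = M.at<double>(1, 2);
    t.rotation = std::atan2(b, a);
    t.scaleX = std::sqrt(a * a + b * b);
    t.scaleY = (a * d - b * c) / t.scaleX;  // signed; any shear is ignored
    return t;
}
```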

0 Answers