
I am a hobbyist, so excuse my question if it is too basic.

For a test I am currently trying to recreate, with OpenCV, a camera that I created in 3D, looking at a plane with 4 variable points. The camera's transform values are:

Tx 53, Ty 28, Tz 69

Rx -5, Ry 42, Rz 0

with a focal length of 100.

  1. My first question: in Maya the up axis is Y; is this also the case when calculating with OpenCV, or is the up axis Z?

The look through the camera is the following:

[Image: look through sourceCamera]

The image I want to create the camera from is called "cameraView.jpg" and looks like this:

[Image: the camera view, rendered at 2048x2048]

This is the code I want to recreate the camera with:


import cv2
import numpy as np
import math

def focalLength_to_camera_matrix(focalLength, image_size):
    # Build the intrinsic matrix K, assuming the principal point
    # sits at the image centre
    w, h = image_size[0], image_size[1]
    K = np.array([
        [focalLength, 0, w/2],
        [0, focalLength, h/2],
        [0, 0, 1],
    ])
    return K

def rot_params_rv(rvecs):
    # Convert a Rodrigues rotation vector to Euler angles in degrees
    R = cv2.Rodrigues(rvecs)[0]
    roll = 180*math.atan2(-R[2][1], R[2][2])/math.pi
    pitch = 180*math.asin(R[2][0])/math.pi
    yaw = 180*math.atan2(-R[1][0], R[0][0])/math.pi
    rot_params = [roll, pitch, yaw]
    return rot_params
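
# NOTE: these formulas assume one specific Euler convention, and the rvec
# from solvePnP describes the world-to-camera rotation rather than the
# camera's own orientation in the world; both points matter when comparing
# the output against Maya (see the answer below).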

# Read Image
im = cv2.imread("assets/cameraView.jpg")
size = im.shape

imageWidth = size[1]
imageHeight = size[0]
imageSize = [imageWidth, imageHeight]

points_2D = np.array([
                            (750.393882, 583.560379),     
                            (1409.44155, 593.845944),   
                            (788.196876, 1289.485585),      
                            (1136.729733, 1317.203244)     
                        ], dtype="double")


points_3D = np.array([
                            (-4.220791, 25.050909, 9.404016),     
                            (4.163141, 25.163363, 9.5773660),     
                            (-2.268313, 18.471558, 10.948839), 
                            (2.109119, 18.56548, 10.945459)      
                        ])


focal_length = 100

cameraMatrix = focalLength_to_camera_matrix(focal_length, imageSize)
distCoeffs = np.zeros((5,1))

success, rvecs, tvecs = cv2.solvePnP(points_3D, points_2D, cameraMatrix, distCoeffs, flags=cv2.SOLVEPNP_ITERATIVE)

rot_vals = rot_params_rv(rvecs)

print("Transformation Vectors")
print (tvecs)
print("")
print("Rotation Vectors")
print (rvecs)
print("")
print("Rotation Values")
print (rot_vals)
print("")

I am still confused about how to get the correct rotation and translation values from the vectors returned by cv2.solvePnP. I looked up the problem and found the rot_params_rv(rvecs) function that somebody posted here, but it is not giving me the correct camera position.

  2. So my second question is how to get the correct rotation and translation values from the rotation and translation vectors.

Am I missing a step?

When I put the values into a new camera in my 3D application, it looks like this:

[Image: the green camera is the position I get from solvePnP (probably from the wrong vector transform); the camera on the right is the correct position.]

  3. My third question is where I could take a look into the solvePnP function (ideally in Python), because right now it is a bit of a black box to me, I am afraid.

Thank you very much for helping me.

  • ok, I have found the solution for the camera world position: `rmat = cv2.Rodrigues(rvecs)[0]; camera_position = -np.matrix(rmat).T * np.matrix(tvecs)` will give the correct world position of the camera – nasun Oct 07 '22 at 10:51
  • also the focal length seems to have to be the focal length in pixels, not mm. so to get that we could do: `def focalMM_to_focalPixel(focalMM, pixelPitch): f = focalMM / pixelPitch; return f` – nasun Oct 07 '22 at 10:53
  • pixelPitch for my Maya camera seems to be 0.01171874865566988, but please don't ask me why – nasun Oct 07 '22 at 10:53
  • nooooo! matrix multiplication is _not_ asterisk (`*`). it is either `np.dot` or `np.matmul` or the infix `@` operator – Christoph Rackwitz Oct 07 '22 at 15:47
  • interesting. thanks for sharing. though I am getting the exact same result using `*` or `@` – nasun Oct 07 '22 at 16:18
  • Thank god, I found out how to get the Euler transformations for Maya: `r = Rotation.from_rotvec([rvecs[0][0], rvecs[1][0], rvecs[2][0]]); rot = r.as_euler('xyz', degrees=True); rx = round(180-rot[0], 5); ry = round(rot[1], 5); rz = round(rot[2], 5)` – nasun Oct 07 '22 at 17:44
  • in Maya all you need to do is: `proc relativeEulerRotation(string $object, float $rotations[]){ setAttr ($object + ".rotateZ") 0; setAttr ($object + ".rotateX") 0; setAttr ($object + ".rotateY") 0; rotate -r -ws -fo 0 0 $rotations[2] $object; rotate -r -ws -fo 0 $rotations[1] 0 $object; rotate -r -ws -fo $rotations[0] 0 0 $object; }` where your rotations are the Euler rotations for a camera that starts at 0 0 0 – nasun Oct 07 '22 at 17:45
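
Pulling these comment fixes together: solvePnP returns the world-to-camera transform, so the camera's world position is `-R.T @ t`; the focal length passed into the intrinsic matrix must be in pixels; and, on the `*` vs `@` point, `np.matrix` overloads `*` as matrix multiplication, which is why both operators gave the same result here (for plain ndarrays, `@` is the one to use). A minimal sketch, assuming the pixel pitch is the Maya film-back height divided by the render height (24 mm / 2048 px = 0.01171875, which closely matches the value above):

import cv2
import numpy as np

def camera_world_position(rvec, tvec):
    # solvePnP gives x_cam = R @ x_world + t, so the camera centre
    # in world space is C = -R^T @ t
    R = cv2.Rodrigues(rvec)[0]
    return -R.T @ tvec

def focal_mm_to_pixels(focal_mm, film_back_mm=24.0, resolution_px=2048):
    # assumption: pixel pitch = physical film-back size / resolution
    return focal_mm / (film_back_mm / resolution_px)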

1 Answer


OK, I found the answers myself, and I will post my solution for anybody who is also stuck:

import cv2
import numpy as np
import math
from scipy.spatial.transform import Rotation

def focalMM_to_focalPixel(focalMM, pixelPitch):
    # convert a physical focal length (mm) to one measured in pixels
    f = focalMM / pixelPitch
    return f

# Read Image
im = cv2.imread("assets/cameraView.jpg")
size = im.shape

imageWidth = size[1]
imageHeight = size[0]
imageSize = [imageWidth, imageHeight]

points_2D = np.array([
                            (750.393882, 583.560379),     
                            (1409.44155, 593.845944),   
                            (788.196876, 1289.485585),      
                            (1136.729733, 1317.203244)     
                        ], dtype="double")


points_3D = np.array([
                            (-4.220791, 25.050909, 9.404016),     
                            (4.163141, 25.163363, 9.5773660),     
                            (-2.268313, 18.471558, 10.948839), 
                            (2.109119, 18.56548, 10.945459)      
                        ])

focalLengthMM = 100
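# assumption: pixelPitch is the Maya film-back height divided by the image
# height, i.e. 24 mm / 2048 px = 0.01171875, which closely matches this value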
pixelPitch = 0.01171874865566988

fLength = focalMM_to_focalPixel( focalLengthMM, pixelPitch )
print("focalLengthPixel", fLength)

K = np.array([(fLength, 0, imageWidth/2),     
               (0, fLength, imageHeight/2),     
               (0, 0, 1)])
distCoeffs = np.zeros((5,1))

success, rvecs, tvecs = cv2.solvePnP(points_3D, points_2D, K, distCoeffs, flags=cv2.SOLVEPNP_ITERATIVE)

rmat = cv2.Rodrigues(rvecs)[0]
# solvePnP returns the world-to-camera transform; the camera centre in
# world space is C = -R^T @ t
camera_position = -rmat.T @ tvecs

# Test the solvePnP result by reprojecting the 3D points into the image
projPoints = cv2.projectPoints(points_3D, rvecs, tvecs, K, distCoeffs)[0]

# draw the measured 2D points in green and the reprojected points in blue (BGR)
for p in points_2D:
    cv2.circle(im, (int(p[0]), int(p[1])), 3, (0, 255, 0), -1)

for p in projPoints:
    cv2.circle(im, (int(p[0][0]), int(p[0][1])), 3, (255, 0, 0), -1)

cv2.imshow("image", im)
cv2.waitKey(0)

r = Rotation.from_rotvec([rvecs[0][0], rvecs[1][0], rvecs[2][0]])
rot = r.as_euler('xyz', degrees=True)

tx = camera_position[0][0]
ty = camera_position[1][0]
tz = camera_position[2][0]

# the 180 degree flip around X appears to account for OpenCV's camera frame
# (+Y down, +Z forward) versus Maya's (+Y up, camera looking down -Z)
rx = round(180 - rot[0], 5)
ry = round(rot[1], 5)
rz = round(rot[2], 5)

print("camera translation:", tx, ty, tz)
print("camera rotation:", rx, ry, rz)
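
To apply these values in Maya, the relativeEulerRotation MEL procedure from the comments above zeroes the camera and then applies the Z, Y and X rotations as relative world-space rotations. A rough maya.cmds equivalent of that proc, untested and only meaningful inside a Maya session:

import maya.cmds as cmds

def relative_euler_rotation(obj, rotations):
    # zero the camera, then apply Z, Y, X as relative world-space rotations,
    # mirroring the MEL proc (rotate -r -ws -fo)
    cmds.setAttr(obj + ".rotateZ", 0)
    cmds.setAttr(obj + ".rotateX", 0)
    cmds.setAttr(obj + ".rotateY", 0)
    cmds.rotate(0, 0, rotations[2], obj, relative=True, worldSpace=True, forceOrderXYZ=True)
    cmds.rotate(0, rotations[1], 0, obj, relative=True, worldSpace=True, forceOrderXYZ=True)
    cmds.rotate(rotations[0], 0, 0, obj, relative=True, worldSpace=True, forceOrderXYZ=True)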