
I have an object with some sensors on it whose 3D locations are known and fixed in relation to each other. Let's call this object the "detector". I also have the detected locations of a few of these sensors in 3D world space. Problem: how do I get an estimated pose (position and rotation) of the "detector" in 3D world space?

I tried looking into the PnP problem, FLANN and ORB matching, and kNN for the outliers, but it seems like they all expect a camera position of some sort. There is no camera involved here, and all I want is the pose of the "detector". Considering that OpenCV is a "vision" library, do I even need OpenCV for this?

edit: Not all sensors might be detected, as indicated here by the light-green dots. [figure: 3D pose estimation]

  • "I have nothing to do with a camera" - You're question is not clear. By what means do you want to get a 3D pose estimation (if it's not by a camera)? – Elouarn Laine May 16 '17 at 12:20
  • I want to fit the cube with the red dots in the cube with the green dots. There is no camera involved. – Hacky May 16 '17 at 12:27
  • If you can provide the normals of each point, then I recommend you have a look at the module named [surface_matching](https://github.com/opencv/opencv_contrib/tree/master/modules/surface_matching) and more specifically the [ppf_match_3d](http://docs.opencv.org/3.2.0/db/d25/classcv_1_1ppf__match__3d_1_1PPF3DDetector.html) class. – Elouarn Laine May 16 '17 at 12:36
  • I have looked into that (point cloud matching), but it does not take the IDs into account. I also cannot provide normals; it's just a point. – Hacky May 16 '17 at 13:49

2 Answers


Sorry this is kinda old but never too late to do object tracking :)

OpenCV's solvePnPRansac should work for you. You don't need to supply an initial pose; just pass empty Mats for rvec and tvec to hold your results.

Also, because there is no camera, just use the identity matrix for the camera matrix and zeros (or an empty array) for the distortion coefficients.

The first time you call PnP with empty rvec and tvec, make sure to set useExtrinsicGuess = false. Save your results if they are good, then feed them in for the next frame with useExtrinsicGuess = true so the solver can optimize more quickly: https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#solvepnpransac
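A minimal, untested sketch of those calls. All point values are made up for illustration; note that solvePnP expects 2D image points, so the detections are given here as 2D coordinates under the identity "camera":

import numpy as np
import cv2

# Known 3D sensor positions in the detector's local frame (made-up layout)
model_points = np.array([[0.0, 0.0, 0.0],
                         [0.1, 0.0, 0.0],
                         [0.0, 0.1, 0.0],
                         [0.0, 0.0, 0.1],
                         [0.1, 0.1, 0.1]], dtype=np.float32)

# Detected positions of those same sensors, in the same ID order (made up)
detected_points = np.array([[0.00, 0.00],
                            [0.09, 0.01],
                            [0.01, 0.10],
                            [0.00, 0.01],
                            [0.10, 0.11]], dtype=np.float32)

camera_matrix = np.eye(3, dtype=np.float32)  # identity: no real camera
dist_coeffs = np.zeros(5, dtype=np.float32)  # no lens distortion

# First call: no prior pose, let RANSAC estimate it from scratch
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    model_points, detected_points, camera_matrix, dist_coeffs,
    useExtrinsicGuess=False)

# Later frames: feed the previous pose back in so the solver converges faster
if ok:
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        model_points, detected_points, camera_matrix, dist_coeffs,
        rvec, tvec, useExtrinsicGuess=True)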

  • cool! I knew that opencv had something for this! The ExtrinsicGuess is a nice bonus if not much changes between frames. Thanks Kyle – Hacky Dec 12 '18 at 13:14

You definitely do not need OpenCV to estimate the position of your object in space.

This is a simple optimization problem where you need to minimize a distance to the model.

First, you need to create a model of your object's attitude in space.

def Detector(x, y, z, alpha, beta, gamma):

which should return a list or array of the positions of all your points, with their IDs, in 3D space. You could even create a class for each of these sensor points, and a class for the whole object that has those sensors as attributes.
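Here is a minimal sketch of what such a model could look like. The Euler-angle convention (rotations about z, then y, then x, in radians) and the sensor layout are made-up assumptions for illustration:

import numpy as np

# Known sensor positions in the detector's local frame, keyed by sensor ID
# (made-up layout for illustration)
SENSOR_LAYOUT = {1: np.array([0.0, 0.0, 0.0]),
                 2: np.array([0.1, 0.0, 0.0]),
                 3: np.array([0.0, 0.1, 0.0])}

def rotation_matrix(alpha, beta, gamma):
    # Rotation about z, then y, then x (one possible convention)
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rz = np.array([[ca, -sa, 0], [sa, ca, 0], [0, 0, 1]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rx = np.array([[1, 0, 0], [0, cg, -sg], [0, sg, cg]])
    return Rx @ Ry @ Rz

def Detector(x, y, z, alpha, beta, gamma):
    # Return {sensor ID: world position} for the given attitude
    R = rotation_matrix(alpha, beta, gamma)
    t = np.array([x, y, z])
    return {sid: R @ p + t for sid, p in SENSOR_LAYOUT.items()}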

Then you need to build an optimization algorithm for fitting your model on the detected data. The algorithm should use the attitude x, y, z, alpha, beta, gamma as variables.

For the objective function, you can use something like the sum of the distances between corresponding IDs.

Let's say you have a 3-point object that you want to fit onto 3 data points:

import numpy as np

# Model: the object's sensor positions for the current pose estimate
m1 = [x1, y1, z1]
m2 = [x2, y2, z2]
m3 = [x3, y3, z3]

# Data: the detected sensor positions
p1 = [xp1, yp1, zp1]
p2 = [xp2, yp2, zp2]
p3 = [xp3, yp3, zp3]

def distance(pt1, pt2):
    # Euclidean (L2) distance between two 3D points
    return np.sqrt((pt1[0]-pt2[0])**2 + (pt1[1]-pt2[1])**2 + (pt1[2]-pt2[2])**2)

# You already know you want to relate "1"s, "2"s and "3"s
obj_function = distance(m1, p1) + distance(m2, p2) + distance(m3, p3)

Now you need to dig into optimization libraries to find the best algorithm to use, depending on how fast you need your optimization to be. Since your points in space are rigidly connected to each other, this should not be too difficult; scipy.optimize can do it.
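As an illustration, here is an untested sketch with scipy.optimize.minimize, assuming the Detector model sketched earlier and a hypothetical detections dict mapping sensor IDs to detected world positions; summing only over detected IDs naturally handles the missing (light-green) sensors:

import numpy as np
from scipy.optimize import minimize

def objective(params, detections):
    x, y, z, alpha, beta, gamma = params
    model = Detector(x, y, z, alpha, beta, gamma)
    # Sum of model-to-data distances, over the detected sensors only
    return sum(np.linalg.norm(model[sid] - np.asarray(pos))
               for sid, pos in detections.items())

# Made-up detections: sensor ID -> detected 3D world position
detections = {1: [0.02, 0.01, 0.00],
              2: [0.11, 0.02, 0.01],
              3: [0.01, 0.12, 0.00]}

result = minimize(objective, np.zeros(6), args=(detections,),
                  method='Nelder-Mead')
x, y, z, alpha, beta, gamma = result.x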

To reduce the dimensionality of your problem, try taking one of the detected points as a reference (as if this measurement were to be trusted) and then find the minimum of obj_function for this position (there are only 3 parameters left to optimize, corresponding to the orientation), then iterate over each of the points you have. Once you have the optimum, you can look for a better position for this sensor around it and see whether you can reduce the distance further.

  • Thank you, Quentin. You have very neatly laid out the way for me to go, but the essential part, where the actual matching is done, still has to be built. I have been thinking about making it easier by first fitting one of the points and then rotating from there. It still looks like a lot of work though... Maybe in many iterations I can get it somewhat right, but isn't OpenCV very good at things like this? I guess the FLANN matcher does things like this? – Hacky May 16 '17 at 13:23
  • I have never used OpenCV with a camera, but it seems to me that in this case OpenCV would work a lot like a black box. I am absolutely sure there is an "attitude detection algorithm" implemented in it somewhere, but what you want to do is surely easier to find elsewhere. – Quentin Brzustowski May 16 '17 at 14:04
  • I'll try your suggestion. Thanks for your time! – Hacky May 16 '17 at 20:49
  • I understand the objective: it sums the total error and tells me how much the current iteration is off. I already had classes for all the elements, like the detector, the sensors and the detected sensor readings. I just don't see which optimize function to use, and how that function knows it is dealing with the position and attitude of a parent object. – Hacky May 18 '17 at 10:12
  • Do you have any requirements on performance? Mainly, how much time can it take? – Quentin Brzustowski May 18 '17 at 12:14
  • Let's assume you have chosen point A as a reference for your object model, so you only need to compute the 3 angles. Start with `init_detector = Detector(x, y, z, 0, 0, 0)` and `min_dist = distance(init_detector, object)`, then use a coarse discretization of the space to keep the computation short: loop over `alpha`, `beta` and `gamma` in steps of 10 degrees, build `model_obj = Detector(x, y, z, alpha, beta, gamma)`, compute `dist = distance(model_obj, object)`, and keep the pose whenever `dist < min_dist` (see the sketch after these comments). – Quentin Brzustowski May 18 '17 at 12:22
  • You can write the optimization by yourself, without using one from the library – Quentin Brzustowski May 18 '17 at 12:24
  • Nice, Quentin. That looks like a cool brute-force method from which I can optimize further, maybe in a few iterations like you mentioned. – Hacky May 19 '17 at 09:09
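A sketch of that coarse grid search, reusing the Detector model and the detections dict assumed in the answer above; the fixed position (x, y, z) stands in for the trusted reference point:

import numpy as np

def total_dist(model, detections):
    # Sum of model-to-data distances over the detected sensors
    return sum(np.linalg.norm(model[sid] - np.asarray(pos))
               for sid, pos in detections.items())

x, y, z = 0.0, 0.0, 0.0  # position taken from the trusted reference point
min_dist, best_angles = np.inf, None

# Coarse 10-degree grid over the three orientation angles
for alpha in np.radians(np.arange(0, 360, 10)):
    for beta in np.radians(np.arange(0, 360, 10)):
        for gamma in np.radians(np.arange(0, 360, 10)):
            d = total_dist(Detector(x, y, z, alpha, beta, gamma), detections)
            if d < min_dist:
                min_dist, best_angles = d, (alpha, beta, gamma)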