
I have two point clouds and am trying to scale them to the same size. My first approach was simply to divide the square roots of the eigenvalues:

pcl::PCA<pcl::PointNormal> pca;
pca.setInputCloud(model_cloud_ptr);
Eigen::Vector3f ev_M = pca.getEigenValues();

pca.setInputCloud(segmented_cloud_ptr);
Eigen::Vector3f ev_S = pca.getEigenValues();

// ratio of spreads (sqrt of the largest eigenvalue) between model and segmented cloud
double s = sqrt(ev_M[0]) / sqrt(ev_S[0]);

This lets me scale my model cloud to approximately the same size as my segmented cloud, but the result is far from perfect; it is only a rough estimate. I also tried TransformationEstimationSVDScale and SampleConsensusModelRegistration, as in this tutorial, but then I get the message that the number of source points/indices differs from the number of target points/indices (those estimators expect one-to-one correspondences, so both inputs must contain the same number of points).

What would be the best approach to scale the clouds to the same size when they contain different numbers of points?

Edit: I tried what @dspeyer proposed, but it gives me a scaling factor of almost 1.0:

pcl::PCA<pcl::PointNormal> pca;
pca.setInputCloud(model_cloud_ptr);
Eigen::Matrix3f ev_M = pca.getEigenVectors();
Eigen::Vector3f ev_M1 = ev_M.col(0);
Eigen::Vector3f ev_M2 = ev_M.col(1);

auto dist_M1 = ev_M1.maxCoeff()-ev_M1.minCoeff();
auto dist_M2 = ev_M2.maxCoeff()-ev_M2.minCoeff();  
auto distM_max = std::max(dist_M1, dist_M2);

pca.setInputCloud(segmented_cloud_ptr);
Eigen::Matrix3f ev_S = pca.getEigenVectors();
Eigen::Vector3f ev_S1 = ev_S.col(0);
Eigen::Vector3f ev_S2 = ev_S.col(1);

auto dist_S1 = ev_S1.maxCoeff()-ev_S1.minCoeff();
auto dist_S2 = ev_S2.maxCoeff()-ev_S2.minCoeff();
auto distS_max = std::max(dist_S1, dist_S2);

double s = distS_max / distM_max;
progNewbie
  • I guess you already tried to simply calculate variances of both clouds and equalize them – Damien Dec 21 '19 at 13:33
  • I guess the eigenvalue method will work if the clouds are properly aligned and somewhat similar in shape. How about calculating the extrema of both clouds, then add 8 corner points as in (min,min,min) (max,min, min).. etc to each of them? – StarShine Dec 23 '19 at 14:02
  • Do you have a precise definition of "size"? Something like "same bounding box after PCAing down to 2d" or "68% of points within same radius ball"? – dspeyer Dec 25 '19 at 08:59
  • *The result is really not that perfect*. A figure would help us understand in what sense it is not perfect – Damien Dec 25 '19 at 09:24
  • @dspeyer your first definition should fit my problem. – progNewbie Dec 25 '19 at 19:06
  • @Damien I will update my question tomorrow evening with a figure showing it. – progNewbie Dec 25 '19 at 19:07
  • @StarShine I will try doing this. Thanks for this input. – progNewbie Dec 25 '19 at 22:05

2 Answers


Seems like you should be able to:

  • Project everything onto the first two eigenvectors
  • Take the min and max for each
  • Compute max - min for each eigenvector/dataset pair
  • Take the max of the two ranges for each cloud (usually, but not always, the first eigenvector; when it isn't, you'll want to rotate the final display)
  • Use the ratio of those maxes as the scaling constant (see the sketch below)
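
A rough sketch of these steps with pcl::PCA (untested; the helper name maxPrincipalRange is made up for illustration, and it assumes both clouds use pcl::PointNormal as in the question). pca.project() maps each point into the cloud's eigenvector basis, which is the projection step the edit in the question is missing:

#include <pcl/common/pca.h>     // pcl::PCA
#include <pcl/common/common.h>  // pcl::getMinMax3D
#include <algorithm>            // std::max

// Extent of a cloud along its first two principal axes.
double maxPrincipalRange(const pcl::PointCloud<pcl::PointNormal>::Ptr& cloud)
{
    pcl::PCA<pcl::PointNormal> pca;
    pca.setInputCloud(cloud);

    // Project every point into this cloud's eigenvector basis.
    pcl::PointCloud<pcl::PointNormal> projected;
    pca.project(*cloud, projected);

    pcl::PointNormal min_pt, max_pt;
    pcl::getMinMax3D(projected, min_pt, max_pt);

    double range1 = max_pt.x - min_pt.x;  // extent along the first eigenvector
    double range2 = max_pt.y - min_pt.y;  // extent along the second eigenvector
    return std::max(range1, range2);
}

// Scaling constant between the two clouds:
// double s = maxPrincipalRange(segmented_cloud_ptr) / maxPrincipalRange(model_cloud_ptr);
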
dspeyer
  • What exactly do you mean by 'dataset pair'? Would it be possible to provide a simple example for the whole algorithm so I can better understand the process? I really appreciate your help. – progNewbie Dec 26 '19 at 13:12
  • There are two datasets ("model" and "segmented") and from each of them you take the first two eigenvectors. This gives four sets of projections (modelE1, modelE2, segmentedE1, segmentedE2). For each of these, compute max-min. – dspeyer Dec 27 '19 at 18:50
  • Thanks. And "take the max of the two ranges"? When I subtract max-min for every set, I would get four scalars. What do you mean with 'two ranges'? – progNewbie Dec 27 '19 at 19:18
  • for each cloud, compute max(range[that cloud][eigenvector1], range[that cloud][eigenvector2]) – dspeyer Dec 27 '19 at 23:37
  • When doing this, I get somehow almost the same values for both clouds, resulting in a scaling factor very near to 1. See my edit in my question for my code. – progNewbie Dec 28 '19 at 19:31
  • Are you sure you're projecting the clouds into the eigenvectors? I don't know this library, but I didn't see that in your code. (Take the norm after projecting, if that wasn't obvious) – dspeyer Dec 29 '19 at 21:57

I would suggest using the eigenvectors of each cloud to identify each one's primary axis of variation, and then scaling the clouds based on each one's variation along that axis. In my example I used an oriented bounding box (max/min in eigenspace), but the mean value or standard deviation along the primary axis (the x axis in eigenspace) could be better metrics depending on the application.

I left some debug flags in the function in case they are helpful to you, but gave them the defaults I expect you will use. I tested with per-axis stretching and arbitrary rotations of the sample and golden clouds, and the function handles all of that just fine.

One caveat of this method: if the warping varies by axis AND causes one axis to overtake another as the primary axis of variation, this function could scale the clouds improperly. I am not sure whether this edge case is relevant to you; as long as the scaling between your clouds is uniform, it should never occur.

Debug flags: debugOverlay leaves both input clouds scaled and in their respective eigen orientations (which makes comparison easier). primaryAxisOnly, if true, uses only the primary axis of variation for scaling; if false, all 3 axes of variation are scaled independently.

Function:

#include <pcl/common/pca.h>        // pcl::PCA
#include <pcl/common/common.h>     // pcl::getMinMax3D
#include <pcl/common/transforms.h> // pcl::transformPointCloud

void rescaleClouds(pcl::PointCloud<pcl::PointXYZ>::Ptr& goldenCloud, pcl::PointCloud<pcl::PointXYZ>::Ptr& sampleCloud, bool debugOverlay = false, bool primaryAxisOnly = true)
{
    //analyze golden cloud
    pcl::PCA<pcl::PointXYZ> pcaGolden;
    pcaGolden.setInputCloud(goldenCloud);
    Eigen::Matrix3f goldenEVs_Dir = pcaGolden.getEigenVectors();
    Eigen::Vector4f goldenMidPt = pcaGolden.getMean();
    Eigen::Matrix4f goldenTransform = Eigen::Matrix4f::Identity();
    goldenTransform.block<3, 3>(0, 0) = goldenEVs_Dir;
    goldenTransform.block<4, 1>(0, 3) = goldenMidPt;
    pcl::PointCloud<pcl::PointXYZ>::Ptr orientedGolden(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::transformPointCloud(*goldenCloud, *orientedGolden, goldenTransform.inverse());
    pcl::PointXYZ goldenMin, goldenMax;
    pcl::getMinMax3D(*orientedGolden, goldenMin, goldenMax);

    //analyze sample cloud
    pcl::PCA<pcl::PointXYZ> pcaSample;
    pcaSample.setInputCloud(sampleCloud);
    Eigen::Matrix3f sampleEVs_Dir = pcaSample.getEigenVectors();
    Eigen::Vector4f sampleMidPt = pcaSample.getMean();
    Eigen::Matrix4f sampleTransform = Eigen::Matrix4f::Identity();
    sampleTransform.block<3, 3>(0, 0) = sampleEVs_Dir;
    sampleTransform.block<4, 1>(0, 3) = sampleMidPt;
    pcl::PointCloud<pcl::PointXYZ>::Ptr orientedSample(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::transformPointCloud(*sampleCloud, *orientedSample, sampleTransform.inverse());
    pcl::PointXYZ sampleMin, sampleMax;
    pcl::getMinMax3D(*orientedSample, sampleMin, sampleMax);

    //apply scaling to oriented sample cloud 
    double xScale = (sampleMax.x - sampleMin.x) / (goldenMax.x - goldenMin.x);
    double yScale = (sampleMax.y - sampleMin.y) / (goldenMax.y - goldenMin.y);
    double zScale = (sampleMax.z - sampleMin.z) / (goldenMax.z - goldenMin.z);

    if (primaryAxisOnly) { std::cout << "scale: " << xScale << std::endl; }
    else { std::cout << "xScale: " << xScale << " yScale: " << yScale << " zScale: " << zScale << std::endl; }


    for (int i = 0; i < orientedSample->points.size(); i++)
    {
        if (primaryAxisOnly)
        {
            orientedSample->points[i].x = orientedSample->points[i].x / xScale;
            orientedSample->points[i].y = orientedSample->points[i].y / xScale;
            orientedSample->points[i].z = orientedSample->points[i].z / xScale;
        }
        else
        {
            orientedSample->points[i].x = orientedSample->points[i].x / xScale;
            orientedSample->points[i].y = orientedSample->points[i].y / yScale;
            orientedSample->points[i].z = orientedSample->points[i].z / zScale;
        }
    }
    //depending on your next step, it may be reasonable to leave this cloud at its eigen orientation, but this transformation will allow this function to scale in place.

    if (debugOverlay)
    {
        goldenCloud = orientedGolden;
        sampleCloud = orientedSample;
    }
    else
    {
        pcl::transformPointCloud(*orientedSample, *sampleCloud, sampleTransform);
    }
}

Test Code (you will need your own clouds and visualizers):

pcl::PointCloud<pcl::PointXYZ>::Ptr golden(new pcl::PointCloud<pcl::PointXYZ>);
fileIO::loadFromPCD(golden, "CT_Scan_Nov_7_fullSpine.pcd");
CloudVis::simpleVis(golden);

double xStretch = 1.75;
double yStretch = 1.65;
double zStretch = 1.5;
pcl::PointCloud<pcl::PointXYZ>::Ptr stretched(new pcl::PointCloud<pcl::PointXYZ>);
for (int i = 0; i < golden->points.size(); i++)
{
    pcl::PointXYZ pt = golden->points[i];
    stretched->points.push_back(pcl::PointXYZ(pt.x * xStretch, pt.y * yStretch, pt.z * zStretch));
}
Eigen::Affine3f arbRotation = Eigen::Affine3f::Identity();
arbRotation.rotate(Eigen::AngleAxisf(M_PI / 4.0, Eigen::Vector3f::UnitY()));
pcl::transformPointCloud(*stretched, *stretched, arbRotation);

CloudVis::rgbClusterVis(golden, stretched);

rescaleClouds(golden, stretched,true,false);
CloudVis::rgbClusterVis(golden, stretched);
Sneaky Polar Bear
  • Wow awesome. Thank you so much for your effort. You helped me big time. Also for improving my understanding in this whole topic. – progNewbie Jan 14 '20 at 11:47
  • I got one more question on this: you take `xScale` to scale the cloud in the end. Why is that? Shouldn't I take the maximum of xScale, yScale, zScale and scale with that? (for axis-aligned) – progNewbie Jan 14 '20 at 12:00
  • My concern is with objects or targets that have 2 or more similar axes of variation. In this case, if the axes get misaligned due to warping, the per-axis scaling may exacerbate the issue (by mistakenly scaling the wrong two axes against one another). Considering only the primary axis of variation may still associate the wrong target axes (for a highly symmetric object), but it will never distort the object. – Sneaky Polar Bear Jan 14 '20 at 15:42
  • So suppose you had an old house phone as your target. One axis (the long axis) is clearly primary and will almost certainly get matched. The other two axes may be pretty close in terms of variation (but let's say x is slightly greater than y). Now, if your capture method warps the cloud slightly such that on your unscaled target y is slightly greater than x in variation, the scaling function could line x up with y and y up with x and scale them as such, which would be wrong and would exacerbate the warping. – Sneaky Polar Bear Jan 14 '20 at 15:45
  • If you just used x in this phone example, there is basically no chance that another axis would overcome the long axis as the primary axis of variation, so up to an extreme alteration of your target cloud it would still be scaled *pretty well* to your golden – Sneaky Polar Bear Jan 14 '20 at 15:46
  • Thank you! I understand the problem! I have one more question about the procedure. I don't fully understand how the transformation matrix is composed. You put the eigenvectors in the first 3x3 part of the matrix and in the last column you put the mean of the cloud points, then invert it. Why is that? Don't I just need to use the eigenvector with the biggest eigenvalue (which should be the principal or primary direction, right?) and divide it by the same thing from the other cloud to calculate the scale? – progNewbie Feb 18 '20 at 15:49
  • And in the end it scales `orientedSample`, but isn't this cloud transformed by `sampleTransform`? Why can I do this? Why don't I have to use sampleCloud and scale that? – progNewbie Feb 18 '20 at 15:51
  • The transformation matrix is composed of a rotation matrix in the upper left (a 3x3 which in this case is built from the eigenvectors, which are always mutually orthogonal), and the right column is the translation (in this case set to the midpoint of the cloud). I invert it so that I can transform the cloud into its eigen space (i.e. midpoint at the origin and primary axes of variation aligned with the Cartesian axes). This allows simple min/max binning of x, y and z, which generates an oriented bounding box (see the sketch after these comments). – Sneaky Polar Bear Feb 18 '20 at 16:20
  • You could scale the sample cloud directly if you wanted to apply off-axis scaling (i.e. scale along the eigenvectors). But I find it more intuitive to scale in eigen space and then move it back into its original Cartesian space. Again, the reason for scaling while at the origin is so that you can just scale along the Cartesian axes (similar to the reason for doing the bounding box while in eigen orientation). – Sneaky Polar Bear Feb 18 '20 at 16:23
  • Thank you! I understand the composed matrix now, and I totally get why I should scale the cloud while it is still in eigenspace. But to have sampleCloud back in its original position (but scaled), I need to undo the transformation into eigenspace done earlier, correct? Another question: why is `x` the primary axis? I just want to scale the object on the primary axis, and it works fine using `xScale`; I just don't understand why x is always the primary axis. – progNewbie Feb 18 '20 at 16:41
  • It's because the first column in a rotation matrix is the x axis of the frame. So by putting the eigenvector with the largest eigenvalue into that first column, scaling on x in eigenspace will scale along the primary axis of variation. – Sneaky Polar Bear Feb 18 '20 at 20:18
  • Why is the first column in the rotation matrix the eigenvector with the largest eigenvalue? Are they sorted when calling `getEigenVectors()`? – progNewbie Feb 19 '20 at 00:28
  • Thanks, that makes sense then. – progNewbie Feb 19 '20 at 14:29
  • `getEigenVectors()` gives me the eigenvectors of the covariance matrix of the cloud, right? So the PCA computes the covariance matrix, right? – progNewbie Feb 22 '20 at 14:59
  • One other question: why is .getMean() returning a Vector4f and not a Vector3f? – progNewbie Mar 08 '20 at 21:07
  • When performing vector math with 3D entities and a 4x4 matrix, an extra slot (which is always 1 or 0 depending on the application) is added to allow the matrix math of a 3D entity with a 4D matrix. – Sneaky Polar Bear Mar 08 '20 at 23:31
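
For reference, a minimal standalone illustration of the transform composition described in these comments (Eigen only; the identity eigenvector matrix and the centroid values are placeholders, not taken from a real cloud). It mirrors what rescaleClouds does with goldenTransform and sampleTransform before taking the min/max:

#include <Eigen/Dense>
#include <iostream>

int main()
{
    // Placeholders standing in for pca.getEigenVectors() and pca.getMean().
    Eigen::Matrix3f eigenVectors = Eigen::Matrix3f::Identity(); // columns: primary, secondary, tertiary axes
    Eigen::Vector4f mean(1.0f, 2.0f, 3.0f, 1.0f);               // cloud centroid in homogeneous form

    // Rotation (eigenvectors) in the upper-left 3x3, translation (centroid) in the right column.
    Eigen::Matrix4f cloudToWorld = Eigen::Matrix4f::Identity();
    cloudToWorld.block<3, 3>(0, 0) = eigenVectors;
    cloudToWorld.block<4, 1>(0, 3) = mean;

    // Inverting the transform maps points into eigenspace: centroid at the origin,
    // primary axis of variation along x.
    Eigen::Matrix4f worldToEigen = cloudToWorld.inverse();

    Eigen::Vector4f centroid(1.0f, 2.0f, 3.0f, 1.0f);
    std::cout << (worldToEigen * centroid).transpose() << std::endl; // prints: 0 0 0 1
}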