
I am trying to convert point clouds sampled and stored as XYZij data (which, according to the documentation, is in camera space) into a world coordinate system so that they can be merged. The frame pair I use for the Tango listener has COORDINATE_FRAME_START_OF_SERVICE as the base frame and COORDINATE_FRAME_DEVICE as the target frame.

This is the way I implement the transformation:

  1. Retrieve the rotation quaternion from TangoPoseData.getRotationAsFloats() as q_r, and the point position from XYZij as p.

  2. Apply the following rotation, where q_mult is a helper method computing the Hamilton product of two quaternions (I have verified this method against another math library):

    p_transformed = q_mult(q_mult(q_r, p), q_r_conjugated);

  3. Add the translation retrieved from TangoPoseData.getTranslationAsFloats() to p_transformed. (A minimal sketch of all three steps follows this list.)
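
Here is a minimal, self-contained sketch of steps 1–3 (the helper names are illustrative, not my actual code; quaternions are stored as {x, y, z, w} to match what Tango returns):

    // Hamilton product of two quaternions stored as {x, y, z, w}.
    static double[] quatMult(double[] a, double[] b) {
        return new double[] {
            a[3] * b[0] + a[0] * b[3] + a[1] * b[2] - a[2] * b[1],
            a[3] * b[1] - a[0] * b[2] + a[1] * b[3] + a[2] * b[0],
            a[3] * b[2] + a[0] * b[1] - a[1] * b[0] + a[2] * b[3],
            a[3] * b[3] - a[0] * b[0] - a[1] * b[1] - a[2] * b[2]
        };
    }

    // p_transformed = q_r * p * conj(q_r) + t, with the point embedded
    // as a pure quaternion {p_x, p_y, p_z, 0}.
    static double[] transformPoint(double[] p, double[] q, double[] t) {
        double[] pQuat = {p[0], p[1], p[2], 0.0};
        double[] qConj = {-q[0], -q[1], -q[2], q[3]};  // conjugate of a unit quaternion
        double[] r = quatMult(quatMult(q, pQuat), qConj);
        return new double[] {r[0] + t[0], r[1] + t[1], r[2] + t[2]};
    }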

But the points at p_transformed always end up as a clutter of partially overlapping point clouds instead of an aligned, merged point cloud.

Am I missing anything here? Is there a conceptual mistake in the transformation?

Thanks in advance.

Dale Z

4 Answers


Ken and Vincenzo, thanks for the replies.

I somehow get better results by performing ICP registration with CloudCompare on the individual point clouds after they are transformed into world coordinates using the pose data alone. Below is a sample result from ~30 scans of a computer desk. Points at the far end are still a bit off, but with carefully tuned parameters this might be improved. CloudCompare's command-line interface also makes it suitable for batch processing.

Besides the inevitable integration error that needs to be corrected, a mistake I made earlier was wrongly assuming that the camera-space frame (that of the camera on the device), which is described here in the documentation, is the same as the OpenGL camera frame, which matches the device frame described here. They are not the same.
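
Concretely, most of my misalignment disappeared after flipping axes when moving points from the camera frame into the device frame. This is only a sketch under the assumption, which held in my case (see the comments further down), that the two frames differ by negated Y and Z axes; composing the actual device extrinsics is the robust way:

    // Assumed mapping from the depth camera frame (X right, Y down, Z forward)
    // into the device frame: negate Y and Z.
    static double[] cameraToDevice(double[] p) {
        return new double[] {p[0], -p[1], -p[2]};
    }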

Moving the camera slowly, so that adjacent frames overlap more, also helps registration. Good visible lighting of the scene is important too: besides the motion sensors, Tango also relies on the fisheye camera on its back for motion tracking.

I hope these tips also work for cases more general than mine.

[Image: merged point cloud from ~30 scans of a computer desk]

Dale Z
  • I'll try tomorrow with CloudCompare. Which format do you use to export the point cloud, plain text or binary? I wonder how TangoMapper can do all this work in real time. I'm sure the next problem we will face will be obtaining color information for the points; the ij part of XYZij is empty! PS: Damn, I had to answer a lot of questions to be able to comment on this thread. – Vincenzo La Spesa Apr 05 '15 at 21:57
  • @VincenzoLaSpesa For debugging convenience, I exported the point clouds as plain text files (`.asc`). There are a few lines for custom headers at the start, but you can specify the number of lines to skip in CloudCompare. After the headers, it is just the three coordinates of a point on each line (see the sketch after these comments). According to the [document](https://developers.google.com/project-tango/apis/java/TangoXyzIjData#ijParcelFileDescriptor), it seems that `ijParcelFileDescriptor` is "currently not usable". – Dale Z Apr 07 '15 at 14:08
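
A minimal sketch of the export format described above (the writer is hypothetical, not the code I actually used; one optional header line, then `x y z` per point):

    // Hypothetical .asc writer: header lines first (skippable in CloudCompare),
    // then the three coordinates of one point per line.
    static void writeAsc(java.io.File file, float[] xyz, int numPoints) throws java.io.IOException {
        try (java.io.PrintWriter out = new java.io.PrintWriter(new java.io.FileWriter(file))) {
            out.println("# Tango point cloud, world coordinates");
            for (int i = 0; i < numPoints; i++) {
                out.printf("%f %f %f%n", xyz[3 * i], xyz[3 * i + 1], xyz[3 * i + 2]);
            }
        }
    }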

There are two different "standard" orderings of the quaternion components. One puts the scalar (rotation-angle) term first, i.e. w x y z, and one puts it last, i.e. x y z w. The Tango API docs list TangoPoseData::orientation as x y z w. The Wikipedia page on quaternions writes them scalar-first, as in w + x i + y j + z k. You might want to check which ordering is assumed in your product method.
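
For illustration, a tiny reordering helper (hypothetical, for a math library that expects the scalar first):

    // Tango delivers orientation as {x, y, z, w}; some libraries expect {w, x, y, z}.
    // Passing one ordering where the other is expected silently yields a different rotation.
    static double[] xyzwToWxyz(double[] q) {
        return new double[] {q[3], q[0], q[1], q[2]};
    }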

Ken
  • Thank you for the thought, @Ken. I've been careful with the component ordering in my quaternion implementation, and I've verified the correctness of the calculation using Christoph Gohlke's `transformations.py`. The misalignment came partly from misunderstanding the coordinate system of XYZij, which I initially thought was the same as the device coordinate system until I read [this](https://developers.google.com/project-tango/apis/c/struct_tango_x_y_zij#a073f1f352434c57d15f96f35d4035bf9). – Dale Z Mar 30 '15 at 18:41
  • So after negating the Y and Z axes in XYZij, the data aligns much better, though still not perfectly. The remaining error might result from the drifting issue. I'll report back if it works better with area learning enabled. – Dale Z Mar 30 '15 at 18:42
  • I would be curious what your "mis-alignment" looks like. The returned point cloud is pretty sparse, so it is unlikely that any individual points will match between different returns. – Ken Apr 01 '15 at 14:09
  • You are definitely right. But the problem is that points lying on the same plane in reality are not even close to coplanar between two frames; you can see that the entire plane has drifted over time. Neither does area learning help much in my case. However, post-processing with the Iterative Closest Point algorithm significantly improves the result, provided I move the device slowly enough while scanning. – Dale Z Apr 02 '15 at 15:35
  • Where is your pose data coming from? Are you getting the most recent pose after you are in the callback for the point cloud data or are you asking for the pose that corresponds to the timestamp in the XYZij struct? You should be asking for the pose at time "timestamp" from the XYZij struct. – Ken Apr 02 '15 at 17:53

> Where is your pose data coming from? Are you getting the most recent pose after you are in the callback for the point cloud data or are you asking for the pose that corresponds to the timestamp in the XYZij struct? You should be asking for the pose at time "timestamp" from the XYZij struct.
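
In code, that suggestion amounts to something like this (a minimal sketch against the Tango Java API, inside a Tango.OnTangoUpdateListener; the mTango field is assumed to hold the connected Tango instance):

    @Override
    public void onXyzIjAvailable(TangoXyzIjData xyzIj) {
        TangoCoordinateFramePair framePair = new TangoCoordinateFramePair(
                TangoPoseData.COORDINATE_FRAME_START_OF_SERVICE,
                TangoPoseData.COORDINATE_FRAME_DEVICE);
        // Ask for the pose at the depth frame's timestamp, not the latest pose.
        TangoPoseData pose = mTango.getPoseAtTime(xyzIj.timestamp, framePair);
        // ... transform the points in xyzIj.xyz with this pose ...
    }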

I tried it; it does not work. I also tried queuing the poses and picking the one nearest to the XYZij timestamp.

[Image: the recorded point cloud, look at the blue wall]

[Image: the real wall]


We at roomplan.de created an open-source sample showing how to use PCL in Project Tango apps. It records point clouds and transforms them into a common coordinate frame (the start-of-service frame). You can find the sample code here: https://github.com/roomplan/tango-examples-java/tree/master/PointCloudJava_with_PCL. The specific function is Java_com_tangoproject_experiments_javapointcloud_PointCloudActivity_saveRotatedPointCloud in jni/jni_part.cpp.

If you want the sample to compile, you need to clone the complete folder and integrate PCL into your project. Instructions on how to do this can be found on our website.

Sample pictures can be viewed in the demo app on the Play Store (I can't post them here yet): https://play.google.com/store/apps/details?id=com.tangoproject.experiments.javapointcloud&hl=en

  • Although this might be a good answer, starting with "we from" doesn't fit well on Stack Overflow; it sounds like you're promoting your company, and that is irrelevant to the question. – Cthulhu Jul 02 '15 at 08:17
  • Hi Max, are you saying that your code handles the problem of registering point clouds from a moving camera/device (essentially, resolving registration issues and compiling a single point cloud in a single coordinate frame)? – ActionFactory Jul 17 '15 at 18:47
  • 1
    @ Strive55 I didn't want it to look like a "hidden advertisement". @ActionFactory What the code does is: Record a pointcloud every 0.2seconds. Save the Pointcloud in camera coordinate system and save a transformed pointcloud in StartOfService coordinate system. So in this system they are aligned and you can use it to record bigger objects. Example can be found here: http://roomplan.de/wp-content/uploads/2015/06/PCD1.png – Max Schneider Jul 24 '15 at 15:36