2

My use case is concerned only with locationing, in fact only 2-D locationing, so a lot of the cool capabilities in Tango are probably not useful to me. I'm trying to see whether I could implement the location algorithm myself.

From teardown reports it seems the 9-DOF sensors are pretty commodity hardware, and the basic integration-based location algorithm (even with magnetic-field calibration) has been mature knowledge for a while. What algorithm does Tango use? From the description it seems that Tango tries to aid navigation by using the images it sees as a reference, sort of like the "terrain-following" mode in cruise missiles; is this right? That would be too complex for me to implement.
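For concreteness, the "basic integration-based location algorithm" mentioned above can be sketched as naive 2-D dead reckoning (this is an illustrative sketch, not Tango's actual algorithm; the per-sample format of gyro yaw rate plus body-frame acceleration is a hypothetical choice). It also shows why pure integration drifts without an external reference such as a camera: every bias error is integrated twice into position.

```python
import math

def dead_reckon_2d(samples, dt):
    """Naive 2-D dead reckoning: integrate the gyro yaw rate to get
    heading, rotate body-frame acceleration into the world frame, then
    double-integrate acceleration to get position.

    samples: iterable of (gyro_z [rad/s], ax_body, ay_body [m/s^2])
    dt: sample period in seconds
    Returns (x, y, heading). In practice, sensor bias makes the
    position estimate drift quadratically with time.
    """
    x = y = vx = vy = 0.0
    heading = 0.0  # radians, world frame
    for gyro_z, ax_body, ay_body in samples:
        heading += gyro_z * dt                  # gyro integration
        c, s = math.cos(heading), math.sin(heading)
        ax = c * ax_body - s * ay_body          # rotate accel into world frame
        ay = s * ax_body + c * ay_body
        vx += ax * dt                           # first integration: velocity
        vy += ay * dt
        x += vx * dt                            # second integration: position
        y += vy * dt
    return x, y, heading
```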

teddy teddy

2 Answers

3

You can easily get a 2D position from the TangoPoseData using the correct coordinate frame:

Project Tango uses a right-handed, local-level frame for the START_OF_SERVICE and AREA_DESCRIPTION coordinate frames. This convention sets the Z-axis aligned with gravity, with Z+ pointed upwards, and the X-Y plane is perpendicular to gravity and locally level with the ground plane. This local-level convention is based on the local east-north-up (ENU) earth-based coordinate system. Instead of true north, Project Tango uses the direction the back of the device is pointed when the service started as the Y axis, and the X axis is pointed to the right. The START_OF_SERVICE and AREA_DESCRIPTION base coordinate frames of the API will use this local-level frame convention.

Said more simply, use the pose data y/x coordinates for your space as you would latitude/longitude for the earth.
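As a sketch (assuming the pose translation is given as an `[x, y, z]` array in metres relative to `START_OF_SERVICE`, as in the Tango APIs), getting the 2-D position is just dropping the gravity-aligned Z component:

```python
def planar_position(translation):
    """Project a Tango pose translation [x, y, z] onto the ground plane.

    Because the START_OF_SERVICE frame aligns Z with gravity, the X-Y
    plane is locally level, so dropping Z yields the 2-D position.
    """
    x, y, _z = translation
    return x, y
```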

Heading data is also derived from the TangoPoseData and can be converted from a quaternion to Euler angles. Euler angles may be easier for you to work with in your 2D location app.
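For a 2-D app, only the yaw component (rotation about the gravity-aligned Z axis) matters. A minimal sketch using the standard ZYX Euler conversion follows; it assumes a unit quaternion in `(x, y, z, w)` order, which to my knowledge is the order Tango pose data reports:

```python
import math

def quaternion_to_yaw(x, y, z, w):
    """Extract yaw (heading about the Z axis) from a unit quaternion
    in (x, y, z, w) order, using the standard ZYX Euler formula.
    Result is in radians, in (-pi, pi]."""
    siny_cosp = 2.0 * (w * z + x * y)
    cosy_cosp = 1.0 - 2.0 * (y * y + z * z)
    return math.atan2(siny_cosp, cosy_cosp)
```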

Tango uses 3D to increase the confidence of its position within the space, even if you don't need 3D. I would let Tango do the hard stuff and extract the 2D position so you can focus on your app.

Aaron Roller
  • I'd like to learn more about the internals of the algorithms. Sensor-fusion algorithms basically put all available signals into a Kalman filter. Apart from the camera, I can see using the Earth's magnetic field to correct the gyro integration results; this link http://robottini.altervista.org/kalman-filter-vs-complementary-filter also uses the accelerometer (integration) to derive heading (but not roll), as a potential parallel to the gyro readout. Does Tango use an accelerometer-derived heading? – teddy teddy Oct 02 '15 at 20:38
  • The heading comes from the pose data, and to my knowledge the pose data is a hybrid of the IMU enhanced by localization techniques, but the internals are not really my thing. – Aaron Roller Oct 03 '15 at 06:49
1

Tango uses the camera images to detect any change in position, and it uses the IMU for device rotation and acceleration. Try blocking the camera while using the Motion Tracking app; it will fail.

Ronica Jethwa
  • Thanks! Do you mean the omnicamera or the regular camera? I was wondering whether it would be possible to implement the VIO algorithm myself on a much cheaper device, without the specialized omnicamera hardware. – teddy teddy Oct 05 '15 at 17:14