
I'm trying to convert (longitude/latitude) locations to rectangles on a phone screen. But for now, let's pretend they're normal points on the Oxy plane.

Assume the camera is standing at point (X, Y), corresponding to the center of the phone screen. It's pointing in direction (vx, vy) and has a 60-degree cone of vision (30 degrees clockwise and 30 degrees counter-clockwise). The camera direction can rotate freely, so vx, vy ∈ [-1, 1].

There's a list of N points (x[], y[]), each representing a location.

At each step, all points that are inside the cone of vision are selected; call them (px[], py[]). This step is simple. Next, I need to draw (px[], py[]) on the phone screen. The biggest problem is when the camera and two points are collinear. I have zero idea how to handle this. Edit: both points must be shown on screen, so maybe adding some random offset?
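For reference, a minimal sketch of the selection step (Python; points_in_cone is a hypothetical helper, and the 30-degree half-angle comes from the 60-degree cone described above):

import math

def points_in_cone(cam_x, cam_y, vx, vy, xs, ys, half_angle_deg=30.0):
    """Return the points whose direction from the camera deviates from
    (vx, vy) by at most half_angle_deg degrees."""
    cam_angle = math.atan2(vy, vx)
    selected = []
    for x, y in zip(xs, ys):
        point_angle = math.atan2(y - cam_y, x - cam_x)
        # wrap the difference into (-pi, pi] so points near the +/-180 degree boundary work
        deviation = math.atan2(math.sin(point_angle - cam_angle),
                               math.cos(point_angle - cam_angle))
        if abs(deviation) <= math.radians(half_angle_deg):
            selected.append((x, y))
    return selected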

Given the camera position, the camera's looking direction, and a point (px[i], py[i]), what is its pixel coordinate on the screen? If there's not enough information, then any method that can handle the collinear situation is okay.

Edit:

(Image 1: the points and the camera's cone of vision on the plane)

(Image 2: how the visible points are drawn on the phone screen)

Image 1 shows an example: given the points and the camera position/direction, points 1, 2, 3, 4 are in the cone of vision. They are then shown on the screen as in Image 2.

Possible solution for 2D (no illusion of 3D on the screen):

  1. Horizontal position: use the angle from the camera's direction to determine whether the point is on the left or right side of the screen. Normalize the angles using the biggest angle. The normalized angle (0->1) determines how far to the left/right the point is: normAngle = 0 means it's in the middle of the screen, and normAngle = 1 means it's at the leftmost/rightmost pixel of the screen.

  2. Vertical position: same as above, but use the normalized distance from the camera instead of the angle from the camera's direction (see the sketch after this list).
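A rough sketch of this mapping (Python; screen_w, screen_h and max_dist are placeholder values, and the left/right sign convention may need flipping depending on your axes):

import math

def to_screen(cam_x, cam_y, vx, vy, px, py,
              screen_w=1080, screen_h=1920, max_dist=1000.0):
    """Map one visible point to pixel coordinates: horizontal position from
    the angular deviation, vertical position from the normalized distance."""
    cam_angle = math.atan2(vy, vx)
    point_angle = math.atan2(py - cam_y, px - cam_x)
    deviation = math.atan2(math.sin(point_angle - cam_angle),
                           math.cos(point_angle - cam_angle))
    norm_angle = deviation / math.radians(30.0)     # -1..1 inside the 60-degree cone
    dist = math.hypot(px - cam_x, py - cam_y)
    norm_dist = min(dist / max_dist, 1.0)           # 0 = at the camera, 1 = max_dist or farther
    screen_x = (norm_angle + 1.0) * 0.5 * screen_w  # -1 -> left edge, +1 -> right edge
    screen_y = (1.0 - norm_dist) * screen_h         # farther points are drawn higher up
    return screen_x, screen_y

Note that with this mapping, two points collinear with the camera share the same screen_x but differ in screen_y (different distances), so both stay visible without any random offset.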

Is there a better way?

Duke Le
  • You also have to describe the mapping of points onto the screen. The camera sees far objects, but do you need to show all of them? Do you map a rectangle on the surface onto the screen rectangle? Do you apply some perspective distortion? Do you map a sector onto the screen rectangle? Perhaps a simple sketch drawing might make things more clear – MBo Dec 28 '20 at 04:58
  • And what is the problem with collinear points – that the camera should not see objects behind other objects (shadowing)? – MBo Dec 28 '20 at 05:00
  • 1. All objects need to be shown. 2. One point on the surface -> one point on the screen (I will handle the drawing-rectangles-on-screen part). 3, 4. As long as all points are shown, it's good. But it needs to be somewhat correct: for example, if a point is "righter" than another, then it should be more to the right on the screen. I will add a diagram – Duke Le Dec 28 '20 at 05:10

1 Answer


You can map the value point_angle - cam_angle onto the -Pi/2..Pi/2 (-90..90) range (assuming the zero angle points toward the top of the screen), i.e. use polar coordinates with some distortion.

rho = sqrt((px - camx)^2 + (py - camy)^2)   // distance from camera to point
point_angle = atan2(py - camy, px - camx)   // absolute angle of the point
deviation = point_angle - cam_angle         // angle relative to the camera direction

screen_rho = somefunction(rho)
// in the simple case just a coefficient C*rho to keep the distance within screen limits
screen_angle = 3 * deviation                // -30 degrees => -90 degrees

screen_x = screen_rho * cos(screen_angle)
screen_y = screen_rho * sin(screen_angle)
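For concreteness, a runnable Python transcription of the pseudocode above (assuming cam_angle is in radians, with a simple linear coefficient C as the distortion function):

import math

def polar_screen_coords(cam_x, cam_y, cam_angle, px, py, C=0.1):
    """Polar mapping sketch: distance becomes the screen radius, and the
    +/-30 degree deviation from the camera direction is stretched to +/-90 degrees."""
    rho = math.hypot(px - cam_x, py - cam_y)
    point_angle = math.atan2(py - cam_y, px - cam_x)
    # wrap into (-pi, pi] so the deviation is symmetric around the camera direction
    deviation = math.atan2(math.sin(point_angle - cam_angle),
                           math.cos(point_angle - cam_angle))
    screen_rho = C * rho              # "somefunction(rho)" in its simplest form
    screen_angle = 3.0 * deviation    # -30 degrees => -90, +30 degrees => +90
    screen_x = screen_rho * math.cos(screen_angle)
    screen_y = screen_rho * math.sin(screen_angle)
    return screen_x, screen_y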
MBo