Good day all, I am using Google's new Face API (Link here), which provides improved face detection. One of the things it returns is a List of Landmark objects, each of which has an X and Y coordinate.
Using these coordinates, I am trying to determine the center of the picture, but making sense of the numbers is proving difficult. A stripped-down version of my detection code is below.
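For context, here is roughly how I'm getting the landmarks (detector options trimmed down; `bitmap` stands in for whatever image I'm analyzing):

```java
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.PointF;
import android.util.SparseArray;

import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.face.Face;
import com.google.android.gms.vision.face.FaceDetector;
import com.google.android.gms.vision.face.Landmark;

// Detect faces in a Bitmap and pull out each landmark's position.
private void findLandmarks(Context context, Bitmap bitmap) {
    FaceDetector detector = new FaceDetector.Builder(context)
            .setTrackingEnabled(false)
            .setLandmarkType(FaceDetector.ALL_LANDMARKS)
            .build();

    Frame frame = new Frame.Builder().setBitmap(bitmap).build();
    SparseArray<Face> faces = detector.detect(frame);

    for (int i = 0; i < faces.size(); i++) {
        Face face = faces.valueAt(i);
        for (Landmark landmark : face.getLandmarks()) {
            // These are the (x, y) values I'm trying to interpret.
            PointF position = landmark.getPosition();
        }
    }
    detector.release();
}
```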
Here is what I know so far:
1) Unlike the old camera face detection API, the coordinate space is no longer (-1000, -1000) to (1000, 1000)
2) The coordinates that are returned are in float format and are, "...the (x, y) position of the landmark where (0, 0) is the upper-left corner of the image. The point is guaranteed to be within the bounds of the image." (Source)
3) When I print these coordinates to the log (logging snippet after this list), I get numbers that don't seem to match my screen size in pixels (1440w x 2368h). Some examples of the positions I am getting:
- 464.90558,1112.7573
- -19.159714,218.88104
- 28.383072,196.1712
- -130.06908,1071.8779
This makes no sense to me: if the top left is (0, 0), I don't understand how the coordinates can be negative.
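For reference, this is roughly the logging that produced those numbers (`face` is one of the `Face` objects from the snippet above; `Log` is android.util.Log):

```java
// Roughly the logging that produced the numbers above.
for (Landmark landmark : face.getLandmarks()) {
    PointF p = landmark.getPosition();
    Log.d("Landmarks", p.x + "," + p.y);
}
```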
So the question is: how are these coordinates being determined? Are they relative to the screen size somehow? Are they being converted using dp in some way? Do they have a static cap?
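In case it clarifies what I've tried: my working guess is that the coordinates are relative to the dimensions of the image/frame that was analyzed rather than the screen, in which case I'd expect a mapping like the sketch below to line them up with my view (`frameWidth`/`frameHeight` and `viewWidth`/`viewHeight` are hypothetical placeholders I made up, not values from the API docs):

```java
// Guess only: scale a landmark position from frame coordinates into view
// coordinates. frameWidth/frameHeight and viewWidth/viewHeight are
// hypothetical placeholders I'd need to fill in from the camera source
// and the view; nothing here comes from the API docs.
float scaleX = (float) viewWidth / (float) frameWidth;
float scaleY = (float) viewHeight / (float) frameHeight;
PointF mapped = new PointF(position.x * scaleX, position.y * scaleY);
```

Even with that guess, the negative values don't fit, which is why I'm asking.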
Thanks all,
PGMac