I'm doing some drawing relative to a scaled image, so I end up with fractional CGPoints (I'm scaling the results from the Core Image face-detection routine).
Should I round these values myself, or leave it to iOS to handle them when I pass the points to CGPathAddLineToPoint
calls? If rounding is better, should I round up or down?
I've read about pixel boundaries, etc., but I'm not sure how to apply that here. I'm drawing to a CALayer.
CGPoint leftEye = CGPointMake((leftEyePosition.x * xScale),
(leftEyePosition.y * yScale));
// result
features {
    faceRect = "{{92, 144.469}, {166.667, 179.688}}";
    hasLeftEyePosition = 1;
    hasMouthPosition = 1;
    hasRightEyePosition = 1;
    leftEyePosition = "{142.667, 268.812}";
    mouthPosition = "{176, 189.75}";
    rightEyePosition = "{207.333, 269.531}";
}