Double and CGFloat both print the same value to the console (#2), but only the Double is issued the warning (#1):
(#1) - The warning

(#2) - Printing each to the console:

```
DOUBLE = 9.223372036854776e+18
CGFLOAT = 9.223372036854776e+18
```
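Since the original snippet isn't included above, here is a minimal sketch of the kind of code that reproduces this. The literal 9223372036854775807 (Int64.max) is an assumption, chosen because it rounds to the printed value 9.223372036854776e+18:

```swift
import CoreGraphics

// Assumed literal: 9223372036854775807 (Int64.max), which is not exactly
// representable as a Double and rounds to 9223372036854775808.
let double = Double(9223372036854775807)   // the compiler warns here (#1)
let cgFloat = CGFloat(9223372036854775807) // no warning here

print("DOUBLE = \(double)")   // DOUBLE = 9.223372036854776e+18
print("CGFLOAT = \(cgFloat)") // CGFLOAT = 9.223372036854776e+18
```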
Many articles mention that CGFloat is a Float on 32-bit platforms and a Double on 64-bit platforms, but doesn't that only refer to the backing storage? Is there more going on behind the scenes that causes the CGFloat to be more accurate than the Double?
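For what it's worth, the backing-storage claim is easy to check on a 64-bit platform, since CGFloat publicly exposes its native type:

```swift
import CoreGraphics

// On 64-bit Apple platforms, CGFloat.NativeType is Double and the two
// types have identical size, so the stored value should be bit-identical.
print(CGFloat.NativeType.self)      // Double
print(MemoryLayout<CGFloat>.size)   // 8
print(MemoryLayout<Double>.size)    // 8
```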