
When using an iPhone Objective-C method that accepts CGFloats, e.g. +[UIColor colorWithRed:green:blue:alpha:], is it important to append an f to constant arguments to specify them explicitly as floats? In other words, should I always type 0.1f rather than 0.1 in such cases, or does the compiler automatically convert 0.1 (which is a double by default) to 0.1f (a float) at compile time? I don't want these conversions to happen at run time, because they would unnecessarily hurt performance.
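
For example (a minimal sketch; the helper function, the view parameter, and the specific component values are just placeholders), I'm asking whether the first call below costs anything at run time compared with the second:

    #import <UIKit/UIKit.h>

    static void applyExampleColors(UIView *view) {
        // Plain literals such as 0.1 are doubles by default:
        view.backgroundColor = [UIColor colorWithRed:0.1 green:0.2 blue:0.3 alpha:1.0];

        // The same call with explicit single-precision (float) literals:
        view.backgroundColor = [UIColor colorWithRed:0.1f green:0.2f blue:0.3f alpha:1.0f];
    }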

Thanks in advance

MrMage


1 Answer


It's not important; it won't break anything to use a double-precision constant where a single-precision constant is expected.

However, if you have turned on the warning about implicit 64-bit-to-32-bit conversions and are building for 32-bit architectures (which I believe includes the iPhone), then you'll want to use single-precision constants simply to avoid getting that warning.
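
For example (a sketch; the function name and the component values are arbitrary), the commented-out call is the kind of expression that can trigger that warning on a 32-bit build, while the f-suffixed version compiles cleanly:

    #import <UIKit/UIKit.h>

    static UIColor *exampleTint(void) {
        // On a 32-bit build with the 64-to-32-bit conversion warning enabled, this
        // can be flagged, because each double literal is narrowed to CGFloat (a float):
        // return [UIColor colorWithRed:0.4 green:0.5 blue:0.6 alpha:1.0];

        // These literals are already single-precision, so there is nothing to narrow:
        return [UIColor colorWithRed:0.4f green:0.5f blue:0.6f alpha:1.0f];
    }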

(Alternatively, you could explicitly turn that setting off, with an architecture condition that turns it back on for 64-bit architectures. But that currently only matters if you're also using some of your code in a Mac application.)

Peter Hosey
  • I know that nothing breaks, but when does the conversion from double to float occur? At compile time (would be fine) or at run time (then I'd add the f's)? – MrMage Oct 03 '09 at 11:06
  • Thank you. Your comment is the real answer to my question. +1 – MrMage Oct 04 '09 at 11:21