
So I have the following code. Right now the line in question reads `const double colorMasking[6]`, and if I clean and build, the compiler says "Incompatible pointer types: passing double, should be float". If I then change it to `float`, the error goes away, but once I clean and build again it says "Incompatible pointer types: passing float, should be double", the exact opposite of what I just did. Any idea what is going on here?

-(UIImage *)changeWhiteColorTransparent: (UIImage *)image
{
    CGImageRef rawImageRef=image.CGImage;

    const double colorMasking[6] = {222, 255, 222, 255, 222, 255};

    UIGraphicsBeginImageContext(image.size);
    CGImageRef maskedImageRef=CGImageCreateWithMaskingColors(rawImageRef, colorMasking);
    {
        //if in iphone
        CGContextTranslateCTM(UIGraphicsGetCurrentContext(), 0.0, image.size.height);
        CGContextScaleCTM(UIGraphicsGetCurrentContext(), 1.0, -1.0);
    }

    CGContextDrawImage(UIGraphicsGetCurrentContext(), CGRectMake(0, 0, image.size.width, image.size.height), maskedImageRef);
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    CGImageRelease(maskedImageRef);
    UIGraphicsEndImageContext();
    return result;
}
CMOS
  • Stupid question: What line is flagged with that error message? – Hot Licks Feb 11 '15 at 02:02
  • Does it work if `colorMasking` is a `CGFloat` instead of a `double`? – NobodyNada Feb 11 '15 at 02:02
  • @HotLicks I think it is `CGImageRef maskedImageRef=CGImageCreateWithMaskingColors(rawImageRef, colorMasking);`, since that is the only time he passes `colorMasking`. – NobodyNada Feb 11 '15 at 02:02
  • I also wonder if it's valid to initialize a float array with ints. – Hot Licks Feb 11 '15 at 02:03
  • Yes, sorry, this is the line that throws the error: `maskedImageRef = CGImageCreateWithMaskingColors(rawImageRef, colorMasking);`, and `colorMasking` is the value that gets changed from `float` to `double`. – CMOS Feb 11 '15 at 02:03
  • I get confused: On which platforms is CGFloat a `double` and on which is it `float`? – Hot Licks Feb 11 '15 at 02:07
  • The docs provide: `CGImageRef CGImageCreateWithMaskingColors ( CGImageRef image, const CGFloat components[] );` so `colorMasking` should be of type `CGFloat`. – zaph Feb 11 '15 at 02:07
  • CGFloat worked! Thanks – CMOS Feb 11 '15 at 02:10
  • @HotLicks `CGFloat` is a `float` on 32-bit systems, and a `double` on 64-bit systems. – NobodyNada Feb 11 '15 at 19:45

2 Answers


Change

const double colorMasking[6] = {222, 255, 222, 255, 222, 255};

to

const CGFloat colorMasking[6] = {222, 255, 222, 255, 222, 255};

CGImageCreateWithMaskingColors expects a CGFloat array; CGFloat is typedef'd to float on 32-bit systems and to double on 64-bit systems. When you compile using float:

  1. The compiler builds the 32-bit slice and sees your float array, which is what the function expects there.
  2. The compiler builds the 64-bit slice and sees your float array, but there the function expects a double array, so it warns about incompatible pointer types.

The opposite happens when you use double instead of float.

Here is the definition of CGFloat (in CoreGraphics/CGBase.h):

#if defined(__LP64__) && __LP64__
# define CGFLOAT_TYPE double
# define CGFLOAT_IS_DOUBLE 1
# define CGFLOAT_MIN DBL_MIN
# define CGFLOAT_MAX DBL_MAX
#else
# define CGFLOAT_TYPE float
# define CGFLOAT_IS_DOUBLE 0
# define CGFLOAT_MIN FLT_MIN
# define CGFLOAT_MAX FLT_MAX
#endif

typedef CGFLOAT_TYPE CGFloat;
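
If you want to confirm which branch a given build picked up, here is a minimal sketch (the helper name is just for illustration) that logs the size of CGFloat using the CGFLOAT_IS_DOUBLE macro from the header above:

#import <Foundation/Foundation.h>
#import <CoreGraphics/CGBase.h>

// Logs which CGFloat variant this slice was compiled with.
// Expect sizeof = 4 (float) on 32-bit and sizeof = 8 (double) on 64-bit.
static void LogCGFloatInfo(void) {
    NSLog(@"sizeof(CGFloat) = %zu, CGFLOAT_IS_DOUBLE = %d",
          sizeof(CGFloat), CGFLOAT_IS_DOUBLE);
}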
NobodyNada

The docs provide: `CGImageRef CGImageCreateWithMaskingColors ( CGImageRef image, const CGFloat components[] );` so `colorMasking` should be of type `CGFloat`.
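
For example, here is a minimal sketch matching that signature (the helper name and the near-white mask values are just for illustration):

#import <CoreGraphics/CoreGraphics.h>

// Returns a new image with near-white pixels masked out.
// The components array is read as {min, max} pairs for red, green, and blue.
// The caller owns the returned image and must CGImageRelease it.
static CGImageRef CreateWhiteMaskedImage(CGImageRef image) {
    const CGFloat components[6] = {222, 255, 222, 255, 222, 255};
    return CGImageCreateWithMaskingColors(image, components);
}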

zaph