
I thought this would be rather straightforward, but it seems it's not.

Things I have noticed when trying to crop an image like this:

#import "C4Workspace.h"

@implementation C4WorkSpace{
    C4Image *image;
    C4Image *copiedImage;
}

-(void)setup {
    image = [C4Image imageNamed:@"C4Sky.png"];
    //image.width = 200;
    image.origin = CGPointMake(0, 20);
    C4Log(@"       image width %f", image.width);
    //[self.canvas addImage:image];

    copiedImage = [C4Image imageWithImage:image];
    [copiedImage crop:CGRectMake(50, 0, 200, 200)];
    copiedImage.origin = CGPointMake(0, 220);
    [self.canvas addObjects:@[image, copiedImage]];
    C4Log(@"copied image width %f", copiedImage.width);
}

@end
  1. The origin of CGRectMake (the x and y coordinates) does not start from the upper-left corner but from the lower-left, and the height then extends upward instead of downward (see the flip sketched further below).

  2. The size of the cropped image is actually the same as the original image's. I suppose the image doesn't really get cropped, but only masked?

  3. Different scales: in the example above I'm not specifying any scale, yet the original and the cropped image do NOT have the same scale. Why?

I'm wondering how this function can be useful at all then... It seems it would make more sense to go into the raw image data and crop part of the image there, rather than having to guess which area has been cropped/masked, so that I'd know exactly where the image actually remains...

Or maybe I'm doing something wrong? (I couldn't find any example of cropping an image, so this is what I came up with...)
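
For observation 1, the only workaround I can see on my side is to flip the rect myself before handing it to crop:. A minimal sketch of that conversion (my own helper, not anything from C4, and assuming the flip should happen against the image's height in the same units as the rect):

static CGRect flippedCropRect(CGRect topLeftRect, CGFloat imageHeight) {
    // Convert a rect measured from the top-left corner into the
    // bottom-left-origin space that crop: appears to use: x and the size
    // stay the same, only y is re-measured from the bottom edge.
    return CGRectMake(topLeftRect.origin.x,
                      imageHeight - topLeftRect.origin.y - topLeftRect.size.height,
                      topLeftRect.size.width,
                      topLeftRect.size.height);
}

That only addresses the origin, though; it doesn't help with observations 2 and 3.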


1 Answer


What you have found is a bug: the crop: filter being run on your image doesn't behave the way you'd expect.

1) The crop: method is actually implemented with Core Graphics and, specifically, runs a Core Image filter (CIFilter) on your original image. In Core Graphics the origin (0,0) sits in the bottom-left corner of the image, which is why the origin is off.

2) Yes. I'm not sure whether this should be considered a bug or a feature, something for me to think about... It has to do with the way that "filters" are designed.

3) Because of the bug in the way crop: is built, the filter doesn't account for the fact that the image scale should be 2.0, and it re-renders at 1.0 (which it shouldn't do).

Finally, you've found a bug. I've listed it to be fixed here:

https://github.com/C4Framework/C4iOS/issues/110

The reason for much of the confusion, I believe, is that I built the filter methods for C4Image when I was originally working on a device / simulator that wasn't retina. I haven't had the opportunity to revisit how those are built, and there also haven't been any questions about this issue before!
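
In the meantime you can crop without going through the C4 filter at all. The sketch below is not part of the C4 API (the helper name is made up, and it assumes you can get at a plain UIImage); it works on the bitmap directly, so the result really is the smaller size, the rect is taken from the top-left in points, and the original scale is handed back when the UIImage is rebuilt:

#import <UIKit/UIKit.h>

// Hypothetical helper, not part of C4: crop a UIImage with a rect given in
// points, measured from the top-left corner, keeping the original scale.
static UIImage *croppedUIImage(UIImage *source, CGRect rectInPoints) {
    // CGImageCreateWithImageInRect works on the raw bitmap, so convert the
    // rect from points to pixels first.
    CGFloat scale = source.scale;
    CGRect rectInPixels = CGRectMake(rectInPoints.origin.x * scale,
                                     rectInPoints.origin.y * scale,
                                     rectInPoints.size.width * scale,
                                     rectInPoints.size.height * scale);

    // For an image with the default "up" orientation the bitmap's first row
    // is the top of the picture, so no y-flip is needed here.
    CGImageRef croppedRef = CGImageCreateWithImageInRect(source.CGImage, rectInPixels);
    if (croppedRef == NULL) return nil;

    // Rebuilding with the original scale is what keeps a retina image at 2.0
    // instead of dropping to 1.0.
    UIImage *result = [UIImage imageWithCGImage:croppedRef
                                          scale:scale
                                    orientation:source.imageOrientation];
    CGImageRelease(croppedRef);
    return result;
}

I'm deliberately not showing the C4Image side of it here; the point is just that a bitmap-level crop avoids the origin, size, and scale issues above.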

  • thanks Travis! Regarding 2: I wouldn't call it a feature. If I want to display only part of an image I'd just mask that image, not crop it. So I think it's a bug, not a feature. In my case I actually need to work with about 30 of those cropped images within my app, so I really need to crop the images. At the moment I can see 2 workarounds: 1. making a new image and going through the old one pixel by pixel, copying only the ones I need; 2. saving the part of the image to the app's Documents directory and reloading it from there. I'll have a look at both methods. Will post new questions later – suMi Oct 23 '13 at 07:11
  • oh. I just saw you actually already gave me the solution right here: http://stackoverflow.com/questions/19490227/c4-saving-part-of-an-image/19528248?noredirect=1#19528248 – suMi Oct 23 '13 at 07:35
  • by feature, I meant in C4's API itself... it's a design pattern, because all of the filters work directly on the images themselves... so `crop:` is a feature of the Core Image API and should probably remain a feature, working the way it does now, in the C4 API... – C4 - Travis Oct 23 '13 at 20:11
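
A minimal sketch of the second workaround mentioned in the comments above (writing the cropped image out to the app's Documents directory and reading it back). It uses plain Foundation/UIKit only, assumes you already have a UIImage, and the helper and file names are made up for illustration:

#import <UIKit/UIKit.h>

// Sketch of the "save to Documents and reload" workaround.
static UIImage *roundTripThroughDocuments(UIImage *image, NSString *fileName) {
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                         NSUserDomainMask, YES);
    NSString *path = [[paths objectAtIndex:0] stringByAppendingPathComponent:fileName];

    // PNG keeps the pixel data lossless; writeToFile: returns NO on failure.
    NSData *pngData = UIImagePNGRepresentation(image);
    if (![pngData writeToFile:path atomically:YES]) return nil;

    // Note: imageWithContentsOfFile: only picks up a 2.0 scale if the file
    // name follows the @2x convention, so the reloaded image may come back
    // at scale 1.0 otherwise.
    return [UIImage imageWithContentsOfFile:path];
}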