I thought this would be rather straightforward, but it seems it's not.
Things I have noticed when trying to crop an image like this:
    #import "C4Workspace.h"

    @implementation C4WorkSpace {
        C4Image *image;
        C4Image *copiedImage;
    }

    -(void)setup {
        image = [C4Image imageNamed:@"C4Sky.png"];
        //image.width = 200;
        image.origin = CGPointMake(0, 20);
        C4Log(@"image width %f", image.width);
        //[self.canvas addImage:image];

        copiedImage = [C4Image imageWithImage:image];
        [copiedImage crop:CGRectMake(50, 0, 200, 200)];
        copiedImage.origin = CGPointMake(0, 220);
        [self.canvas addObjects:@[image, copiedImage]];
        C4Log(@"copied image width %f", copiedImage.width);
    }

    @end
The origin of the CGRect passed to crop: (its x and y coordinates) does not start from the upper-left corner but from the lower-left, and the height then extends upward instead of downward.
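If the coordinate system really is flipped as described, a small helper can translate a rect specified in the familiar top-left system into the lower-left one before passing it to crop:. This is just a sketch based on the observation above (the helper name and the assumption about crop:'s behavior are mine, not documented C4 behavior):

```objc
// Convert a rect given in top-left coordinates into a flipped
// (lower-left origin) rect, assuming crop: measures y from the
// bottom of the image. imageHeight is the full height of the
// source image in points.
CGRect flippedCropRect(CGRect topLeftRect, CGFloat imageHeight) {
    return CGRectMake(topLeftRect.origin.x,
                      imageHeight - topLeftRect.origin.y - topLeftRect.size.height,
                      topLeftRect.size.width,
                      topLeftRect.size.height);
}

// Usage: crop the 200x200 region whose top-left corner is at (50, 0):
// [copiedImage crop:flippedCropRect(CGRectMake(50, 0, 200, 200), image.height)];
```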
The size of the cropped image is actually the same as that of the original image. I suppose the image doesn't really get cropped, but only masked?
Different scales: in the example above I'm not actually specifying any scale, yet the original and the cropped image do NOT end up with the same scale. Why?
I'm wondering how this method can be useful at all, then... It seems it would make more sense to go into the raw image data to crop part of an image, rather than having to guess which area has been cropped/masked, so that I'd know where exactly the image actually remains.
Or maybe I'm doing something wrong? (I couldn't find any example of cropping an image, so this is what I came up with.)
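For reference, the "go into the raw image data" route I have in mind could look like this with plain Core Graphics/UIKit, which does a true pixel-level crop with a top-left origin. How you would feed the result back into a C4Image depends on what initializers your C4 version offers, so that last step is left out here:

```objc
// Pixel-level crop via Core Graphics. CGImageCreateWithImageInRect
// works in pixels with a top-left origin, so the rect must be
// scaled by the image's scale factor first.
UIImage *source = [UIImage imageNamed:@"C4Sky.png"];
CGRect cropRect = CGRectMake(50, 0, 200, 200); // points, top-left origin
CGFloat s = source.scale;
CGRect pixelRect = CGRectMake(cropRect.origin.x * s,
                              cropRect.origin.y * s,
                              cropRect.size.width * s,
                              cropRect.size.height * s);
CGImageRef croppedRef = CGImageCreateWithImageInRect(source.CGImage, pixelRect);
UIImage *cropped = [UIImage imageWithCGImage:croppedRef
                                       scale:s
                                 orientation:source.imageOrientation];
CGImageRelease(croppedRef);
// 'cropped' is now a genuinely smaller image (200x200 points),
// not a masked copy of the original.
```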