Here's the gist: I have a program that takes a large image composed of many smaller images stacked vertically (like frames of a film strip) and separates it into those individual frames, which the user can then scrub through.
I currently use this method:
- (NSMutableArray *)createArrayFromImage:(NSData *)largerImageData
{
    UIImage *largerImage = [UIImage imageWithData:largerImageData];
    int arraySize = (int)largerImage.size.height / largerImage.size.width; // Find out how many images there are
    NSMutableArray *imageArray = [[NSMutableArray alloc] init];
    for (int i = 0; i < arraySize; i++) {
        CGRect cropRect = CGRectMake(0, largerImage.size.width * i,
                                     largerImage.size.width, largerImage.size.width);
        CGImageRef imageRef = CGImageCreateWithImageInRect([largerImage CGImage], cropRect);
        UIImage *image = [UIImage imageWithCGImage:imageRef];
        CGImageRelease(imageRef);
        [imageArray addObject:UIImageJPEGRepresentation(image, 1.0)];
        NSLog(@"Added image %d", i);
    }
    NSLog(@"Final size %d", (int)[imageArray count]);
    return imageArray;
}
However, this is extremely slow because UIImageJPEGRepresentation is called for every frame. It is much faster if I just add each UIImage directly to the array, but then, as the user scrubs through the frames, the app allocates huge amounts of memory and eventually crashes. The scrubbing code calls [UIImageView setImage:], if that helps. Any assistance with this would be greatly appreciated.
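For context, a minimal sketch of what the scrubbing path could look like if the array keeps the compressed NSData and a UIImage is decoded only for the frame currently on screen (the `imageArray` property and `showFrameAtIndex:` / `frameImageView` names are hypothetical, not the actual code):

```objectivec
// Decode only the frame being displayed; decoded bitmaps for frames
// that are no longer shown can then be released instead of accumulating.
- (void)showFrameAtIndex:(NSUInteger)index
{
    NSData *jpegData = self.imageArray[index];          // stored compressed
    UIImage *frame = [UIImage imageWithData:jpegData];  // decoded on demand
    [self.frameImageView setImage:frame];
}
```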
EDIT: CGImageCreateWithImageInRect may keep a reference to the full "largerImage" bitmap, which would explain why each cropped image takes up so much memory.
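If that is the cause (the documentation for CGImageCreateWithImageInRect does say the result retains the source image), one workaround is to redraw each crop into its own bitmap context so the slice owns its own backing store and the large image can be released. A sketch under that assumption (the helper name is mine):

```objectivec
// Redraw a cropped CGImage into a fresh bitmap so it no longer
// references the large source image's pixel data.
- (UIImage *)imageByCopyingCrop:(CGImageRef)croppedRef
{
    CGSize size = CGSizeMake(CGImageGetWidth(croppedRef), CGImageGetHeight(croppedRef));
    UIGraphicsBeginImageContextWithOptions(size, NO, 1.0);
    [[UIImage imageWithCGImage:croppedRef] drawInRect:CGRectMake(0, 0, size.width, size.height)];
    UIImage *copy = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return copy;
}
```

Calling this on each slice inside the loop (instead of storing the UIImage made directly from the cropped CGImageRef) should trade a one-time draw per frame for independent, smaller bitmaps.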