I'm going crazy trying to merge two UIImageViews.
The situation:
- a background UIImageView (userPhotoImageView)
- an overlaid UIImageView (productPhotoImageView) that can be stretched, pinched and rotated
I call my function on the UIImages, but I can read the coordinates and the new stretched size from the UIImageViews containing them (they are synthesized properties in my class):
- (UIImage *)mergeImage:(UIImage *)bottomImg withImage:(UIImage *)topImg;
Maybe the simplest way would be to render the layer of the top UIImageView into the new CGContextRef, like this:
[bottomImg drawInRect:CGRectMake(0, 0, bottomImg.size.width, bottomImg.size.height)];
[productPhotoImageView.layer renderInContext:ctx];
But this way I lose the rotation effect previously applied by the gestures.
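For reference, this is roughly how I'm wrapping those two calls in an image context (just a sketch; the context size and scale are my assumptions, and -renderInContext: needs QuartzCore imported):

#import <QuartzCore/QuartzCore.h>

- (UIImage *)mergeImage:(UIImage *)bottomImg withImage:(UIImage *)topImg {
    // Context sized to the background image; scale 0.0 uses the device scale
    UIGraphicsBeginImageContextWithOptions(bottomImg.size, NO, 0.0);
    CGContextRef ctx = UIGraphicsGetCurrentContext();

    // Background first, then the overlay view's layer on top
    // (topImg is implicitly drawn here, being the overlay view's image)
    [bottomImg drawInRect:CGRectMake(0, 0, bottomImg.size.width, bottomImg.size.height)];
    [productPhotoImageView.layer renderInContext:ctx];

    UIImage *merged = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return merged;
}

As far as I understand, -renderInContext: ignores the layer's own transform (it lives in the superlayer's coordinate space), which I suspect is why the rotation is lost.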
A second way would be to apply an affine transform to the UIImage to reproduce the gesture effects and then draw it into the context, like this:
UIImage *scaledTopImg = [topImg imageByScalingProportionallyToSize:productPhotoImageView.frame.size];
UIImage *rotatedScaledTopImg = [scaledTopImg imageRotatedByDegrees:ANGLE];
[rotatedScaledTopImg drawAtPoint:productPhotoImageView.frame.origin];
The problem with this second approach is that I can't get the exact final rotation in degrees (the ANGLE parameter in the code above) accumulated since the user started interacting: the rotation gesture recognizer is reset to 0 after each callback is applied, so every callback only reports a delta from the current rotation.
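One workaround I'm considering is accumulating the deltas myself in the gesture handler (a sketch, assuming a standard UIRotationGestureRecognizer handler; totalRotation is a hypothetical CGFloat ivar I'd add):

- (void)handleRotation:(UIRotationGestureRecognizer *)gesture {
    // Keep a running total of the deltas before resetting the gesture
    totalRotation += gesture.rotation;
    productPhotoImageView.transform =
        CGAffineTransformRotate(productPhotoImageView.transform, gesture.rotation);
    gesture.rotation = 0; // so the next callback reports a delta again
}

Alternatively, I suppose the current angle could be read back from the view's transform itself, e.g. atan2(productPhotoImageView.transform.b, productPhotoImageView.transform.a), converted to degrees before passing it to imageRotatedByDegrees:, but I haven't verified that.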
Surely the easiest way would be the first one, freezing the two UIImageViews exactly as they are displayed at that moment, but so far I haven't found any way to do it.