I want to do selective masking between two images in iOS, similar to the mask function in Blender. There are two images, 1 and 2, resized to the same dimensions. Initially only image 1 is visible, but wherever the user touches image 1, that area becomes transparent and image 2 shows through in those regions.
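For context, the view setup I have in mind is roughly the following: image 2 sits behind image 1, so clearing pixels in the top view reveals the bottom one (the image variables here are placeholders):

// Image 2 goes behind image 1; erasing the top view reveals the bottom one.
UIImageView *bottomView = [[UIImageView alloc] initWithImage:image2];
UIImageView *topView    = [[UIImageView alloc] initWithImage:image1];
bottomView.frame = self.view.bounds;
topView.frame    = self.view.bounds;
[self.view addSubview:bottomView];
[self.view addSubview:topView]; // fully covers bottomView until erased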
I have already created a mask-like image using Core Graphics on touch move: it is a fully black image with white portions wherever I touched, and its alpha is 1.0 throughout. I could use this image as a mask and do the compositing myself with an image-processing method that iterates over every pixel, checks it, and sets the output values accordingly. But that method would be called inside touchesMoved, so it would likely slow the whole interaction down, especially for 8MP camera images.
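To be concrete, the per-pixel approach I want to avoid would look roughly like the sketch below, assuming RGBA buffers already extracted with CGBitmapContextCreate (applyMaskNaively: is a hypothetical helper, not code I am actually running):

// Brute force: zero the base image's alpha wherever the mask pixel is white.
// On an 8MP image this is roughly 8 million iterations per touchesMoved call.
- (void)applyMaskNaively:(unsigned char *)basePixels
              maskPixels:(unsigned char *)maskPixels
              pixelCount:(size_t)pixelCount
{
    for (size_t i = 0; i < pixelCount; i++) {
        if (maskPixels[i * 4] > 127) {   // mask is white here (checking red channel)
            basePixels[i * 4 + 3] = 0;   // so make the base pixel transparent
        }
    }
}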
I want to know how this can be achieved with Quartz Core or Core Graphics efficiently enough to work on large images.
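One direction I am considering, though I do not know if it is the right tool: skipping the intermediate mask entirely and erasing the stroke straight out of the foreground image with kCGBlendModeClear, so Quartz does the work instead of a pixel loop. A minimal, untested sketch of that idea, reusing my lastPoint/currentPoint/staticBG members:

UIGraphicsBeginImageContextWithOptions(staticBG.frame.size, NO, 0.0); // NO = keep transparency
CGContextRef ctx = UIGraphicsGetCurrentContext();
[staticBG.image drawInRect:staticBG.bounds];
CGContextSetLineCap(ctx, kCGLineCapRound);
CGContextSetLineWidth(ctx, 20.0);
CGContextSetBlendMode(ctx, kCGBlendModeClear); // stroked pixels become transparent
CGContextMoveToPoint(ctx, lastPoint.x, lastPoint.y);
CGContextAddLineToPoint(ctx, currentPoint.x, currentPoint.y);
CGContextStrokePath(ctx);
staticBG.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();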
The code I have so far:
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    mouseSwiped = YES;
    UITouch *touch = [touches anyObject];
    CGPoint currentPoint = [touch locationInView:staticBG];

    // Redraw the current mask, then stroke the latest touch segment onto it.
    UIGraphicsBeginImageContext(staticBG.frame.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [maskView.image drawInRect:CGRectMake(0, 0, maskView.frame.size.width, maskView.frame.size.height)];
    CGContextSetLineCap(context, kCGLineCapRound);
    CGContextSetLineWidth(context, 20.0);
    CGContextSetRGBStrokeColor(context, 1.0, 1.0, 1.0, 1.0); // opaque white stroke
    CGContextBeginPath(context);
    CGContextMoveToPoint(context, lastPoint.x, lastPoint.y);
    CGContextAddLineToPoint(context, currentPoint.x, currentPoint.y);
    CGContextStrokePath(context);
    maskView.image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    lastPoint = currentPoint;
    mouseMoved++;
    if (mouseMoved == 10)
        mouseMoved = 0;

    // Re-mask the full-size image on every move; this is the expensive step.
    staticBG.image = [self maskImage:staticBG.image withMask:maskView.image];
    //maskView.hidden = NO;
}
- (UIImage *)maskImage:(UIImage *)baseImage withMask:(UIImage *)maskImage
{
    CGImageRef imgRef = [baseImage CGImage];
    CGImageRef maskRef = [maskImage CGImage];

    // Build a CGImage mask directly from the mask image's raw pixel data.
    CGImageRef actualMask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                              CGImageGetHeight(maskRef),
                                              CGImageGetBitsPerComponent(maskRef),
                                              CGImageGetBitsPerPixel(maskRef),
                                              CGImageGetBytesPerRow(maskRef),
                                              CGImageGetDataProvider(maskRef),
                                              NULL, false);
    CGImageRef masked = CGImageCreateWithMask(imgRef, actualMask);
    UIImage *result = [UIImage imageWithCGImage:masked];

    // Release the CGImageRefs created above; imageWithCGImage: retains its own reference.
    CGImageRelease(actualMask);
    CGImageRelease(masked);
    return result;
}
The maskImage: method does not work as intended because the mask it builds ends up depending on the image's alpha values rather than on its black-and-white content. I went through this link: Creating Mask from Path, but I cannot understand the answer.
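From what I understand so far, CGImageMaskCreate expects a true grayscale bitmap with no alpha channel, while my mask image comes out of the graphics context as RGBA. If that is the real problem, a conversion along these lines might be what is needed (newGrayscaleMaskFromImage: is my own untested sketch):

// Redraw an RGBA mask image into an 8-bit, alpha-free grayscale bitmap so
// Core Graphics reads its luminance instead of its alpha channel.
- (CGImageRef)newGrayscaleMaskFromImage:(UIImage *)image
{
    CGImageRef source = image.CGImage;
    size_t width  = CGImageGetWidth(source);
    size_t height = CGImageGetHeight(source);

    CGColorSpaceRef graySpace = CGColorSpaceCreateDeviceGray();
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, width,
                                             graySpace, (CGBitmapInfo)kCGImageAlphaNone);
    CGColorSpaceRelease(graySpace);

    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), source);
    CGImageRef grayImage = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    return grayImage; // caller is responsible for CGImageRelease()
}

My understanding is that the result of this could then be fed to CGImageCreateWithMask in place of the RGBA mask, but I am not sure whether that alone would solve the performance problem for 8MP images.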