
I want to do selective masking between two images in iOS, similar to the mask function in Blender. There are two images, 1 and 2 (resized to the same dimensions). Initially only image 1 is visible, but wherever the user touches image 1, it becomes transparent and image 2 becomes visible in those regions.

I created a mask-like image using Core Graphics on touch move. It is basically a full black image with white portions wherever I touched; the alpha is 1.0 throughout. I could use this image as a mask and do the necessary work by implementing my own image-processing methods that iterate over each pixel, check it, and set values accordingly. But this method would be called inside touch move, so it might slow the entire process down (especially for 8 MP camera images).

I want to know how this can be achieved with Quartz Core or Core Graphics efficiently enough to run on big images.

The code I have so far:

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    mouseSwiped = YES;

    UITouch *touch = [touches anyObject];
    CGPoint currentPoint = [touch locationInView:staticBG];

    // Redraw the existing mask, then stroke the new segment onto it.
    UIGraphicsBeginImageContext(staticBG.frame.size);
    [maskView.image drawInRect:CGRectMake(0, 0, maskView.frame.size.width, maskView.frame.size.height)];

    CGContextSetLineCap(UIGraphicsGetCurrentContext(), kCGLineCapRound);
    CGContextSetLineWidth(UIGraphicsGetCurrentContext(), 20.0);
    // Alpha must be 1.0 here; with 0.0 the white stroke is never painted.
    CGContextSetRGBStrokeColor(UIGraphicsGetCurrentContext(), 1.0, 1.0, 1.0, 1.0);
    CGContextBeginPath(UIGraphicsGetCurrentContext());
    CGContextMoveToPoint(UIGraphicsGetCurrentContext(), lastPoint.x, lastPoint.y);
    CGContextAddLineToPoint(UIGraphicsGetCurrentContext(), currentPoint.x, currentPoint.y);
    CGContextStrokePath(UIGraphicsGetCurrentContext());

    maskView.image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    lastPoint = currentPoint;
    mouseMoved++;        // unused counter
    if (mouseMoved == 10)
        mouseMoved = 0;

    // Re-masking the full image on every move is the expensive part.
    staticBG.image = [self maskImage:staticBG.image withMask:maskView.image];
    //maskView.hidden = NO;
}

- (UIImage *)maskImage:(UIImage *)baseImage withMask:(UIImage *)maskImage
{
    CGImageRef imgRef = [baseImage CGImage];
    CGImageRef maskRef = [maskImage CGImage];
    CGImageRef actualMask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                              CGImageGetHeight(maskRef),
                                              CGImageGetBitsPerComponent(maskRef),
                                              CGImageGetBitsPerPixel(maskRef),
                                              CGImageGetBytesPerRow(maskRef),
                                              CGImageGetDataProvider(maskRef), NULL, false);
    CGImageRef masked = CGImageCreateWithMask(imgRef, actualMask);
    UIImage *result = [UIImage imageWithCGImage:masked];
    // CGImageMaskCreate and CGImageCreateWithMask follow the Create rule,
    // so release the results or this leaks on every touch move.
    CGImageRelease(masked);
    CGImageRelease(actualMask);
    return result;
}

The maskImage method is not working, as it creates the mask depending on alpha values. I went through this link: Creating Mask from Path, but I cannot understand the answer.
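
For reference, the CGImageMaskCreate documentation says the data provider must supply grayscale samples with no alpha channel, while the image painted in touchesMoved above is RGBA, which would explain the alpha-dependent behaviour. Below is a minimal sketch of redrawing the mask into a DeviceGray bitmap first; the method name and variable names are mine, not from the code above, and it is untested:

// Rough sketch: CGImageMaskCreate expects 8-bit grayscale samples with no
// alpha, so redraw the RGBA mask into a DeviceGray bitmap before using it.
- (UIImage *)maskImage:(UIImage *)baseImage withGrayscaleMask:(UIImage *)maskImage
{
    CGImageRef maskRef = maskImage.CGImage;
    size_t width  = CGImageGetWidth(maskRef);
    size_t height = CGImageGetHeight(maskRef);

    CGColorSpaceRef graySpace = CGColorSpaceCreateDeviceGray();
    CGContextRef grayCtx = CGBitmapContextCreate(NULL, width, height, 8, width,
                                                 graySpace, (CGBitmapInfo)kCGImageAlphaNone);
    CGContextDrawImage(grayCtx, CGRectMake(0, 0, width, height), maskRef);
    CGImageRef grayMask = CGBitmapContextCreateImage(grayCtx);

    // In an image mask a sample S paints with alpha (1 - S), so the white
    // (touched) areas come out transparent and the black areas stay opaque.
    CGImageRef actualMask = CGImageMaskCreate(width, height, 8, 8,
                                              CGImageGetBytesPerRow(grayMask),
                                              CGImageGetDataProvider(grayMask),
                                              NULL, false);
    CGImageRef masked = CGImageCreateWithMask(baseImage.CGImage, actualMask);
    UIImage *result = [UIImage imageWithCGImage:masked];

    CGImageRelease(masked);
    CGImageRelease(actualMask);
    CGImageRelease(grayMask);
    CGContextRelease(grayCtx);
    CGColorSpaceRelease(graySpace);
    return result;
}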

Soumyajit

3 Answers


First of all, I will mention a few things that I hope you know already:

  • Masking works by taking the alpha values only.
  • Creating an image with Core Graphics on every touch move is a pretty huge overhead, so you should try to avoid it or use some other way of doing things.
  • Try to use a static mask image.

I would like to propose that you look at this from an inverted point of view: instead of trying to make a hole in the top image to see the bottom one, why not place the bottom image on top and mask it, so that the user's touches reveal it, covering up the top view in just those spots?

I've put together an example here to give you the idea: http://goo.gl/Zlu31T
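
The gist, as a rough sketch rather than the exact code from that download (maskImage here is an assumed UIImage ivar that starts out empty; bottomimage and masklayer match the names in the comments below). With layer masks it is the mask's alpha that matters: opaque pixels reveal the layer, transparent pixels hide it.

// One-time setup (requires <QuartzCore/QuartzCore.h>): bottomimage is a
// UIImageView showing image 2, placed *above* the view showing image 1.
CALayer *masklayer = [CALayer layer];
masklayer.frame = bottomimage.bounds;
bottomimage.layer.mask = masklayer;

// In touchesMoved: stroke the new segment on top of the previous strokes,
// keeping everything else transparent, then update the mask contents.
UIGraphicsBeginImageContext(bottomimage.bounds.size);
[maskImage drawInRect:bottomimage.bounds];            // previous strokes
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSetLineCap(ctx, kCGLineCapRound);
CGContextSetLineWidth(ctx, 20.0);
CGContextSetRGBStrokeColor(ctx, 1.0, 1.0, 1.0, 1.0);  // opaque = revealed
CGContextMoveToPoint(ctx, lastPoint.x, lastPoint.y);
CGContextAddLineToPoint(ctx, currentPoint.x, currentPoint.y);
CGContextStrokePath(ctx);
maskImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

masklayer.contents = (__bridge id)maskImage.CGImage;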

Good luck, and do post back if anything is not clear. I do believe there are much better and more optimised ways of doing this, though. "_"

nsuinteger
  • Ya, your implementation is fast enough in real time, but can you give me any idea about saving the path? I mean, suppose I start from the middle and go diagonally up; then I want to see the Apple-logo image through the whole path, not just a particular circle around the touch point. – Soumyajit Oct 07 '13 at 05:18
  • I made my query much clearer here: http://stackoverflow.com/questions/19218422/rgbstroke-image-pixels-instead-of-color-ios – please have a look. – Soumyajit Oct 07 '13 at 06:43
  • When you have the image from your CGPath, set it to the mask layer's contents property as I've done in the example: `masklayer.contents = (__bridge id)(imagefromCGPath.CGImage); bottomimage.layer.mask = masklayer;` To check performance you have to run this on a real device. Keep in mind that CPU operations run faster in the Simulator while GPU operations run very slowly there, because the Simulator has to emulate the GPU in software; so GPU operations run much faster on a real device. – nsuinteger Oct 07 '13 at 06:56
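
On the path-saving question in the comments above, here is a sketch of a path-based variant (my names, untested, not from the linked example): keep one CGMutablePathRef alive across touches and use a CAShapeLayer as the mask, so the whole swiped trail stays visible without re-rendering a bitmap on every move.

// ivars assumed: CGMutablePathRef trailPath; CAShapeLayer *maskShape;

// one-time setup, e.g. in viewDidLoad
trailPath = CGPathCreateMutable();
maskShape = [CAShapeLayer layer];
maskShape.frame = bottomimage.bounds;
maskShape.strokeColor = [UIColor whiteColor].CGColor;  // any opaque color
maskShape.fillColor = [UIColor clearColor].CGColor;
maskShape.lineWidth = 20.0;
maskShape.lineCap = kCALineCapRound;
bottomimage.layer.mask = maskShape;

// in touchesMoved: extend the single path, then hand a copy to the layer
// (a fresh CGPathRef makes sure Core Animation notices the change)
CGPathMoveToPoint(trailPath, NULL, lastPoint.x, lastPoint.y);
CGPathAddLineToPoint(trailPath, NULL, currentPoint.x, currentPoint.y);
CGPathRef snapshot = CGPathCreateCopy(trailPath);
maskShape.path = snapshot;
CGPathRelease(snapshot);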

Since you're doing this in real time, I suggest you fake it while editing; if you need to output the image later on, you can mask it for real, since that might take some time (not much, just not fast enough to do in real time).

By faking I mean: put image 1 as the background and put the hidden image 2 on top of it. Once the user touches a point, set the frame of image 2's UIImageView to CGRect rect = CGRectMake(touch.x - desiredRect.size.width/2, touch.y - desiredRect.size.height/2, desiredRect.size.width, desiredRect.size.height); and make it visible. desiredRect would be the portion of image 2 that you want to show. Upon lifting the finger, just hide image 2's UIImageView again so image 1 is fully visible. It is the fastest way I can think of right now, if your goal isn't to output the image at that very moment.
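
A minimal sketch of that idea, assuming image2View is a full-size view of image 2 nested inside a small clipping container holeView (with clipsToBounds = YES); both names are mine:

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint p = [[touches anyObject] locationInView:self.view];
    CGSize d = CGSizeMake(100, 100);   // the desiredRect-sized window into image 2

    // Centre the clipping window on the finger...
    self.holeView.frame = CGRectMake(p.x - d.width / 2, p.y - d.height / 2,
                                     d.width, d.height);
    // ...and offset the full-size image view inside it so its content
    // stays registered with image 1 underneath.
    self.image2View.frame = CGRectMake(-self.holeView.frame.origin.x,
                                       -self.holeView.frame.origin.y,
                                       self.view.bounds.size.width,
                                       self.view.bounds.size.height);
    self.holeView.hidden = NO;
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    self.holeView.hidden = YES;   // image 1 fully visible again
}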

Mercurial
  • This method seems interesting. I will try it out and get back to you. Interestingly, many apps have this user-driven masking feature; I thought there might be some direct way of doing it (or maybe people are doing it with OpenGL shaders in real time). – Soumyajit Oct 05 '13 at 16:01
  • Same thing here; maybe my question is not fully clear, but I want the change to be made permanently to the image itself, and if I draw a path the back image should be visible through the entire path. If I rub the whole image view with my finger, the front image should be totally gone and the back image totally visible. – Soumyajit Oct 07 '13 at 05:26
  • http://stackoverflow.com/questions/19218422/rgbstroke-image-pixels-instead-of-color-ios – Soumyajit Oct 07 '13 at 06:44

Use this code; it will help with masking the two UIImages:

CGSize newSize = CGSizeMake(320, 377);
UIGraphicsBeginImageContext(newSize);

// Use existing opacity as is
[backGroundImageView.image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
// Apply supplied opacity
[self.drawImage.image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height) blendMode:kCGBlendModeNormal alpha:0.8];

UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

NSData *imageData = UIImagePNGRepresentation(newImage);
Swati
  • I tried this method with a little change, blendMode:kCGBlendModeColorMultiply, but it's still taking a lot of time. – Soumyajit Oct 05 '13 at 16:03