7

I added a new iOS 8 Photo Editing extension to my existing photo editing app. My app has quite a complex filter pipeline and needs to keep multiple textures in memory at a time. Even so, on devices with 1 GB of RAM I can easily process 8 MP images.

In the extension, however, the memory constraints are much tighter. I had to scale the image down to under 2 MP to get it processed without crashing the extension. I also noticed that the memory problems only occur when no debugger is attached to the extension; with one attached, everything works fine.

I did some experiments. I modified a memory budget test app to work within an extension and came up with the following results (showing the amount of RAM in MB that can be allocated before crashing):

╔═══════════════════════╦═════╦═══════════╦══════════════════╗
║        Device         ║ App ║ Extension ║ Ext. (+Debugger) ║
╠═══════════════════════╬═════╬═══════════╬══════════════════╣
║ iPhone 6 Plus (8.0.2) ║ 646 ║       115 ║              645 ║
║ iPhone 5 (8.1 beta 2) ║ 647 ║        97 ║              646 ║
║ iPhone 4s (8.0.2)     ║ 305 ║        97 ║              246 ║
╚═══════════════════════╩═════╩═══════════╩══════════════════╝
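
The probe itself boils down to an allocation loop along these lines (a rough sketch, not the exact test app; the 1 MB chunk size and the logging are arbitrary choices):

#import <Foundation/Foundation.h>
#include <string.h>

// Allocate 1 MB chunks, touch every byte so the pages are actually committed,
// and log the running total. The last value logged before the system kills the
// process is the effective memory budget.
void RunMemoryProbe(void) {
    static const NSUInteger kChunkSize = 1024 * 1024; // 1 MB, arbitrary granularity
    NSMutableArray *chunks = [NSMutableArray array];
    NSUInteger totalMB = 0;
    for (;;) {
        NSMutableData *chunk = [NSMutableData dataWithLength:kChunkSize];
        memset(chunk.mutableBytes, 0xFF, kChunkSize); // dirty the pages
        [chunks addObject:chunk];
        NSLog(@"allocated %lu MB", (unsigned long)++totalMB);
    }
}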

A few observations:

  • With the debugger attached, the extension behaves like the "normal" app.
  • Even though the 4s has only half the total memory (512 MB) of the other devices, it gets the same ~100 MB from the system for the extension.

Now my question: How am I supposed to work with this small amount of memory in a Photo Editing extension? One texture containing an 8 MP (camera resolution) RGBA image alone eats ~31 MB (8,000,000 pixels × 4 bytes per pixel). What is the point of this extension mechanism if I have to tell the user that full-size editing is only possible in the main app?

Did one of you also reach that barrier? Did you find a solution to circumvent this constraint?

Frank Rupprecht
  • I should point out that UIImage is to blame here; Apple documentation states that any 1,920 × 1,080 photo or video referenced by a UIImage object will inevitably create a memory problem for a Photo Editing Extension. You're not working with phone limitations inside the Photos app, but app-specific limitations (I think it has something to do with protection from app-crashing malware); pointing to the amount of camera memory or whatever is unavailing. Sadly, you must reduce the size of the media to 1,280 × 720 or less, they say. Tiling is not an option. – James Bush Oct 21 '15 at 03:25
  • Curious how this changes with iOS 10. Per @rickster's mention of [CIImageProcessorKernel](https://developer.apple.com/reference/coreimage/ciimageprocessorkernel) below. Specifically when working with other image technology like GPUImage. – brandonscript Jan 10 '17 at 21:16

3 Answers

3

I am developing a Photo Editing extension for my company, and we are facing the same issue. Our internal image processing engine needs more than 150 MB to apply certain effects to an image, and that is not even counting panorama images, which take around 100 MB of memory per copy.

We found only two workarounds, but not an actual solution.

  1. Scale the image down before applying the filter (see the downscaling sketch after this list). This requires far less memory, but the resulting image quality is poor. At least the extension will not crash.

  2. Use Core Image or Metal for the image processing. When we analyzed Apple's Sample Photo Editing Extension, which uses Core Image, we found it handles very large images and even panoramas without loss of quality or resolution. In fact, we were not able to crash the extension by loading very large images. The sample code handles panoramas with a memory peak of about 40 MB, which is pretty impressive.
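
For option 1, a memory-friendly way to do the downscale is to read a reduced-size version straight from the file with ImageIO, so the full-resolution bitmap is never decoded. A minimal sketch (the helper name is illustrative):

#import <UIKit/UIKit.h>
#import <ImageIO/ImageIO.h>

// Decode a downscaled version of the image directly from its file URL.
// maxPixelSize limits the longer edge in pixels.
static UIImage *DownscaledImageFromURL(NSURL *url, CGFloat maxPixelSize) {
    CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)url, NULL);
    if (!source) return nil;

    NSDictionary *options = @{
        (id)kCGImageSourceCreateThumbnailFromImageAlways: @YES,
        (id)kCGImageSourceCreateThumbnailWithTransform: @YES, // honor EXIF orientation
        (id)kCGImageSourceThumbnailMaxPixelSize: @(maxPixelSize)
    };
    CGImageRef cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0,
                                                             (__bridge CFDictionaryRef)options);
    CFRelease(source);
    if (!cgImage) return nil;

    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return image;
}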

According to Apple's App Extension Programming Guide (page 55, "Handling Memory Constraints"), the answer to memory pressure in extensions is to review your image-processing code. We are currently porting our image processing engine to Core Image, and the results are far better than with our previous engine.
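
In a Photo Editing extension, that boils down to something like the following sketch (the sepia filter stands in for a real filter chain; the self.input property name and the format identifier are illustrative, and error handling is omitted):

#import <UIKit/UIKit.h>
#import <Photos/Photos.h>
#import <PhotosUI/PhotosUI.h>
#import <CoreImage/CoreImage.h>

// Sketch of -finishContentEditingWithCompletionHandler: in the extension's
// PHContentEditingController. self.input is assumed to hold the
// PHContentEditingInput received in -startContentEditingWithInput:placeholderImage:.
- (void)finishContentEditingWithCompletionHandler:(void (^)(PHContentEditingOutput *))completionHandler {
    dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
        PHContentEditingOutput *output =
            [[PHContentEditingOutput alloc] initWithContentEditingInput:self.input];

        // Build the (lazy) Core Image recipe from the full-size image.
        CIImage *image = [CIImage imageWithContentsOfURL:self.input.fullSizeImageURL];
        CIImage *filtered = [image imageByApplyingFilter:@"CISepiaTone"
                                     withInputParameters:@{kCIInputIntensityKey: @0.8}];

        // Render exactly once; Core Image tiles large images internally.
        CIContext *context = [CIContext contextWithOptions:nil];
        CGImageRef cgImage = [context createCGImage:filtered fromRect:filtered.extent];
        UIImage *rendered = [UIImage imageWithCGImage:cgImage];
        CGImageRelease(cgImage);
        [UIImageJPEGRepresentation(rendered, 0.9) writeToURL:output.renderedContentURL
                                                  atomically:YES];

        output.adjustmentData =
            [[PHAdjustmentData alloc] initWithFormatIdentifier:@"com.example.filter" // placeholder
                                                 formatVersion:@"1.0"
                                                          data:[NSData data]];
        completionHandler(output);
    });
}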

I hope this helps a bit. Marco Paiva

marcopaivaf
  • Thanks, Marco. I guess CoreImage does tiling under the hood. I will try to use Instruments to figure out what it's doing exactly. My problem is that I need textures that support storing signed values, and as far as I know I can't do that with CoreImage right now. Another point is that custom CoreImage filters are only supported in iOS 8 and I still want to support iOS 7 in my main app. I guess I'll end up implementing a tiling mechanism on my own. But it's good to know that CoreImage can handle large images. Thanks again. – Frank Rupprecht Oct 27 '14 at 20:40
  • Note that Core Image provides several opportunities to insert your own image processing (while benefiting from CI memory/GPU/color/etc. management). Aside from the ability to write custom kernels that's been there since forever, iOS 10 introduces [`CIImageProcessorKernel`](https://developer.apple.com/reference/coreimage/ciimageprocessorkernel), which lets you insert technologies other than CI into the CI image processing chain. – rickster Jan 10 '17 at 20:57
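
For reference, the CIImageProcessorKernel hook looks roughly like this on iOS 10 and later (a minimal sketch; the subclass name and its copy-only body are illustrative):

#import <CoreImage/CoreImage.h>
#include <string.h>

// Illustrative CIImageProcessorKernel subclass. Core Image hands it buffers for
// a requested region, so non-CI processing code (OpenGL, Metal, vImage, ...)
// can run inside a Core Image graph while still benefiting from CI's tiling.
@interface ExamplePassThroughProcessor : CIImageProcessorKernel
@end

@implementation ExamplePassThroughProcessor

+ (BOOL)processWithInputs:(NSArray<id<CIImageProcessorInput>> *)inputs
                arguments:(NSDictionary<NSString *, id> *)arguments
                   output:(id<CIImageProcessorOutput>)output
                    error:(NSError **)error {
    id<CIImageProcessorInput> input = inputs.firstObject;
    // Real code would run its own pipeline here; this sketch just copies the
    // input region to the output, assuming both cover the same rect (the default).
    size_t rowBytes = MIN(input.bytesPerRow, output.bytesPerRow);
    size_t rows = (size_t)CGRectGetHeight(output.region);
    for (size_t y = 0; y < rows; y++) {
        memcpy((uint8_t *)output.baseAddress + y * output.bytesPerRow,
               (const uint8_t *)input.baseAddress + y * input.bytesPerRow,
               rowBytes);
    }
    return YES;
}

@end

// Usage (sketch):
// CIImage *result = [ExamplePassThroughProcessor applyWithExtent:source.extent
//                                                         inputs:@[source]
//                                                      arguments:nil
//                                                          error:&error];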
0

If you're using a Core Image "recipe," you needn't worry about memory at all, just as Marco said. No image on which Core Image filters are applied is rendered until the image object is returned to the view.

That means you could apply a million filters to a highway-billboard-sized photo and memory would not be the issue. The filter specifications are simply compiled into a convolution or kernel, which comes to the same size no matter what.
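
In code terms, the claim is that a whole chain of filters allocates no pixel buffers until the single render call at the end; a schematic sketch (the filter names and the function are illustrative):

#import <CoreImage/CoreImage.h>

// Build the "recipe": none of these lines processes any pixels; they only
// describe work to be done later.
CGImageRef CreateRenderedRecipe(NSURL *imageURL) {
    CIImage *recipe = [CIImage imageWithContentsOfURL:imageURL];
    CGRect extent = recipe.extent;
    recipe = [recipe imageByApplyingFilter:@"CIPhotoEffectChrome" withInputParameters:nil];
    recipe = [recipe imageByApplyingFilter:@"CIGaussianBlur"
                       withInputParameters:@{kCIInputRadiusKey: @4.0}];

    // Pixels are read, filtered and allocated only at this render call,
    // which Core Image may execute tile by tile.
    CIContext *context = [CIContext contextWithOptions:nil];
    return [context createCGImage:recipe fromRect:extent]; // caller owns the CGImage
}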

Misunderstandings about memory management and overflow and the like can be easily remedied by orienting yourself with the core concepts of your chosen programming language, development environment and hardware platform.

Apple's documentation introducing Core Image filter programming is sufficient for this; if you'd like specific references to portions of the documentation that I believe pertain specifically to your concerns, just ask.

James Bush
  • Sorry, this was simply not helpful. I know how Core Image works and what it's capable of. It can optimize some subsequent filter steps by compiling them into one, but it can't always do that. For instance, two subsequent convolution kernels can't be compiled to one—you need an intermediate result. And as I said to Marco, Core Image doesn't support all the features I need for my pipeline. I already optimized my pipeline to "only" use 4 textures at a time, but that's still too much memory for the extension... – Frank Rupprecht May 20 '15 at 07:43
  • "Four textures at a time?" How do you use even one "texture" in Core Image? That doesn't make sense... – James Bush May 21 '15 at 16:27
  • As I said, I can't use Core Image because it doesn't offer everything I need. I wrote my own image processing pipeline using OpenGL. – Frank Rupprecht May 21 '15 at 17:49
  • I should have said the OpenGL subset, glslang. You must be talking about OpenGL in its entirety, which is not what you would likely use if you're writing a Photo Editing Extension. It's easier to run OpenGL kernel code written in glslang than not; you simply load it as a bundle resource into a custom Core Image filter. Core Image is not limited to the built-in filters, which—you're right—won't work for everything; I wasn't recommending it over OpenGL. OpenGL (glslang) kernels can be loaded into a custom Core Image filter; then you can combine Core Image filters with your kernel, if need be. – James Bush Oct 21 '15 at 03:09
  • Sorry, Frank; I wasn't translating texture to sampler. At the time you made the comment, you were right; iOS did not allow you to pass more than one sampler object to a CIKernel class (and you couldn't create CISampler objects, either). With iOS 9, that's changed; just like Mac OS X, CISampler objects can be created and passed in multiples to a CIKernel class. Again, sorry; I should have taken more time to understand what you're saying (and I'd be very interested to see you successfully pass multiple "textures" using a custom Core Image filter in an iOS app). – James Bush Oct 21 '15 at 03:15
0

Here is how you apply two consecutive convolution kernels in Core Image, with the "intermediate result" between them:

- (CIImage *)outputImage
{
    const double g = self.inputIntensity.doubleValue;

    // First pass: vertical 3×3 edge kernel, scaled by the intensity parameter.
    const CGFloat weights_v[] = { -1*g, 0*g, 1*g,
                                  -1*g, 0*g, 1*g,
                                  -1*g, 0*g, 1*g };

    CIImage *result = [CIFilter filterWithName:@"CIConvolution3X3" keysAndValues:
                       @"inputImage", self.inputImage,
                       @"inputWeights", [CIVector vectorWithValues:weights_v count:9],
                       @"inputBias", [NSNumber numberWithFloat:1.0],
                       nil].outputImage;

    // The convolution grows the image extent, so crop back to the input's size.
    CGRect rect = [self.inputImage extent];
    rect.origin = CGPointZero;

    CGRect cropRectLeft = CGRectMake(0, 0, rect.size.width, rect.size.height);
    CIVector *cropRect = [CIVector vectorWithX:rect.origin.x Y:rect.origin.y
                                             Z:rect.size.width W:rect.size.height];
    result = [result imageByCroppingToRect:cropRectLeft];
    result = [CIFilter filterWithName:@"CICrop" keysAndValues:
              @"inputImage", result, @"inputRectangle", cropRect, nil].outputImage;

    // Second pass: horizontal 3×3 edge kernel applied to the intermediate result.
    const CGFloat weights_h[] = { -1*g, -1*g, -1*g,
                                   0*g,  0*g,  0*g,
                                   1*g,  1*g,  1*g };

    result = [CIFilter filterWithName:@"CIConvolution3X3" keysAndValues:
              @"inputImage", result,
              @"inputWeights", [CIVector vectorWithValues:weights_h count:9],
              @"inputBias", [NSNumber numberWithFloat:1.0],
              nil].outputImage;

    result = [result imageByCroppingToRect:cropRectLeft];
    result = [CIFilter filterWithName:@"CICrop" keysAndValues:
              @"inputImage", result, @"inputRectangle", cropRect, nil].outputImage;

    // Final step: invert the combined edge response.
    result = [CIFilter filterWithName:@"CIColorInvert" keysAndValues:
              kCIInputImageKey, result, nil].outputImage;

    return result;
}
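
For completeness, using this from the owning CIFilter subclass looks roughly like the following (the class name EdgeConvolutionFilter and the photoURL parameter are illustrative; the subclass is assumed to expose inputImage and inputIntensity properties, as the method above implies):

#import <CoreImage/CoreImage.h>

// Usage sketch: build the filter, then render once.
CGImageRef CreateFilteredImage(NSURL *photoURL) {
    EdgeConvolutionFilter *filter = [[EdgeConvolutionFilter alloc] init]; // illustrative class name
    filter.inputImage = [CIImage imageWithContentsOfURL:photoURL];
    filter.inputIntensity = @2.0;

    // Nothing above touches pixel data; this single render call does all the work.
    CIContext *context = [CIContext contextWithOptions:nil];
    return [context createCGImage:filter.outputImage
                         fromRect:filter.inputImage.extent]; // caller owns the CGImage
}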

James Bush
  • As I said, I know how Core Image works and how to apply (very simple) convolution filters with it. What I was trying to tell you in your other answer is that a) even Core Image needs to allocate multiple buffers (textures) for some scenarios and b) I can't do everything I need with Core Image. It seems Core Image shines when it comes to memory footprint—my guess is that's because it supports tiling under the hood. However, that also makes it comparatively slow, as Brad Larson points out in his GPUImage framework. – Frank Rupprecht May 21 '15 at 18:03