
I was checking the allocations of my OpenCV app on iPhone and noticed that photos from the camera take about 300 KB each, while photos transformed by the app take about 6 MB (20 times bigger).

Isn't that strange? Here is the function I am using to create a UIImage from a cv::Mat:

+ (UIImage *)UIImageFromCVMat:(const cv::Mat&)cvMat withOrientation:(UIImageOrientation) orientation{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];

    CGColorSpaceRef colorSpace;

    if (cvMat.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }

    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);

    CGImageRef imageRef = CGImageCreate(cvMat.cols,                                     // Width
                                        cvMat.rows,                                     // Height
                                        8,                                              // Bits per component - ok
                                        8 * cvMat.elemSize(),                           // Bits per pixel - ok
                                        cvMat.step[0],                                  // Bytes per row - ok
                                        colorSpace,                                     // Colorspace - ok
                                        kCGImageAlphaNone | kCGBitmapByteOrderDefault,  // Bitmap info flags
                                        provider,                                       // CGDataProviderRef
                                        NULL,                                           // Decode
                                        false,                                          // Should interpolate
                                        kCGRenderingIntentDefault);                     // Intent

    UIImage *image = [[UIImage alloc] initWithCGImage:imageRef scale:1.0 orientation:orientation];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);

    return image;
}

I cannot see why the generated image is so large =/

Any ideas?

Cheers,

marcelosalloum

1 Answer


The larger size is expected. The image on your disk is most likely compressed as JPEG; its file size depends on the level of compression (quality is lost when the image is compressed too heavily).

When the image is stored in a matrix, it is not compressed. Here's a formula to estimate the in-memory size of an image:

image_size_in_bytes = image_width x image_height x number_of_channels x (bit_depth / 8)

Suppose we have an RGB image that is 1280 by 960 pixels. Stored on disk as JPEG it can take around 220 KB (more or less depending on the level of compression used). When you load it into a UIImage, it will use about 17 times more memory:

1280 x 960 x 3 x (8 / 8) = 3,686,400 bytes ≈ 3.69 MB
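The formula above can be sketched as a quick check (a minimal standalone example, not tied to the OpenCV/iOS code in the question):

```python
def uncompressed_size_bytes(width, height, channels, bit_depth=8):
    """Estimate the in-memory size of an uncompressed image in bytes."""
    return width * height * channels * bit_depth // 8

# The 1280 x 960 RGB example from the answer, 8 bits per channel:
size = uncompressed_size_bytes(1280, 960, 3)
print(size)                  # 3686400 bytes
print(round(size / 1e6, 2))  # ~3.69 MB
```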
Alexey
  • Thanks Alexey, it explains why it is taking so much space but the thing is: Before converting to a cv::Mat and processing the images I already had 2 UIImages about 300kb each. BUT when they are converted from cv::Mat to UIImages, they take too much space. How to compress the new images so they don't occupy (about) 6MB but just (around) 300kb? – marcelosalloum Jun 09 '13 at 17:00
  • Actually I just found a solution here: http://stackoverflow.com/questions/8112166/compress-uiimage-but-keep-size, let's see if it works =) – marcelosalloum Jun 09 '13 at 17:04
  • Yes, it depends how you store data in UIImage. Glad you found the solution, you can also look at [this answer](http://stackoverflow.com/a/1296933/1121420) – Alexey Jun 09 '13 at 17:27