
I create a UIImage using the imageWithData: method:

- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage *chosenImage = [info objectForKey:UIImagePickerControllerOriginalImage];

    // JPEG-encode the original bitmap at full quality and at half quality.
    NSData *data1 = UIImageJPEGRepresentation(chosenImage, 1);
    NSData *data2 = UIImageJPEGRepresentation(chosenImage, 0.5);
    NSLog(@"data1 = %lu;;;;;;;data2 = %lu", (unsigned long)[data1 length], (unsigned long)[data2 length]);

    // Decode the half-quality JPEG back into a bitmap, then re-encode it.
    UIImage *nimg = [UIImage imageWithData:data2];
    NSData *data30 = UIImageJPEGRepresentation(nimg, 1);
    NSData *data31 = UIImageJPEGRepresentation(nimg, 0.8);
    NSLog(@"data30 = %lu;;;;;;data31 = %lu;;;;;;", (unsigned long)[data30 length], (unsigned long)[data31 length]);
}

I get this output:

data1 = 1751828;;;;;;;data2 = 254737

data30 = 1368455;;;;;;data31 = 387174;;;;;;

Why is data30 so much bigger than data2?

  • You can improve your question by also stating why you are surprised; it's not completely clear whether (and why) you're expecting those numbers to be the same, or something else, or... (although I'm assuming you're expecting them to be similar since the net "image quality" is about the same). – Ben Zotto Mar 04 '15 at 03:48

1 Answer


Because data30 still represents an image at the original resolution, stored with the least amount of data loss that JPEG allows.

Here's an (imperfect) analogy. Imagine taking a CD (full-quality audio) and ripping it to a very low-quality MP3 file. That file would be very small and sound terrible. Now burn that MP3 file onto a CD-R using iTunes. If you play that CD, it will still sound terrible, but the disc now stores that terrible sound data at full size. Now rip that CD-R at the highest MP3 quality setting. Do you expect it to yield the same size as the low-quality MP3 you burned the CD from? No, because you're asking iTunes to encode a full-size sound signal at very high quality. You're doing a lot of work to "preserve" in high quality what happens to be a crummy sound data stream.

Same with your images. You are taking an original bitmap at some resolution X*Y. You are encoding it very lossily, which is designed to take up a small amount of disk space by throwing out a bunch of information. Then you are decoding that back into a full X*Y bitmap, which now has its own set of (different) complexities that emerged from the way it happened to be compressed. Then you are encoding that bitmap at very high quality, which preserves nearly all of its visible complexity but is still crummy to look at.
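To make that cycle concrete, here is a minimal sketch (reusing a full-resolution UIImage like the chosenImage from the question) of where the size comes back:

// Encode the original bitmap at low quality: small output, visible artifacts.
NSData *lowQuality = UIImageJPEGRepresentation(chosenImage, 0.5);

// Decoding produces a brand-new full-resolution bitmap; the compression
// artifacts are now just ordinary pixel data.
UIImage *decoded = [UIImage imageWithData:lowQuality];

// Re-encoding that bitmap at quality 1.0 asks JPEG to preserve every pixel
// (artifacts included) as faithfully as possible, so the output is large
// again -- even though it looks no better than lowQuality.
NSData *reEncoded = UIImageJPEGRepresentation(decoded, 1.0);

NSLog(@"lowQuality = %lu bytes, reEncoded = %lu bytes",
      (unsigned long)[lowQuality length],
      (unsigned long)[reEncoded length]);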

(You do see a material difference between your data1 and data30, which is the closest apples-to-apples comparison here. data1 is what happens when you keep as much information as JPEG allows. The drop in size to data30 shows what you lost when you went through the step of encoding the image into data2 first.)
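Not part of the original answer, but a natural follow-on: if the goal is simply the smaller file, one option is to keep the already-compressed data2 bytes rather than decoding them into a UIImage and re-encoding. A sketch (the file path is just an illustrative placeholder):

// Persist the compressed JPEG bytes directly; no decode/re-encode round trip.
NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"photo.jpg"];
[data2 writeToFile:path atomically:YES];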

Ben Zotto