
I was looking at a WWDC 2011 video about UIKit rendering, and there is a section that explains how UIImageView is more efficient than drawing in drawRect. Here is an example from the session:

[Slide from the session: stretching an image to 320x200 via drawRect costs roughly 250 KB of additional memory, while UIImageView does not]

I am not clear on why stretching an image to 320x200 would take an additional 250 KB with drawRect but not with UIImageView. Wouldn't UIImageView need the same number of additional pixels to resize and render the image?

Can someone please explain?

Hetal Vora
  • Sounds as if drawRect uses an in-memory bitmap buffer (which is then used as a texture by the GL unit), whereas UIImageView's image is handed directly to the GL hardware as a texture. – Till Jul 28 '12 at 15:51

1 Answer


I was in the audience, and I believe the point was this: if you use Quartz to create the stretched image and then draw it, iOS must first render the image into a full-size bitmap context and then draw that context to the screen. If you instead hand UIImageView a sliced (resizable) image, iOS knows to stretch or tile the small source image at composite time, so the one huge bitmap (CGImage) is never created.
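To illustrate the difference, here is a minimal sketch of the two approaches. The image name "button" and the cap insets are made up for illustration; the session predates resizableImage(withCapInsets:), which replaced the older stretchableImage API, but the principle is the same.

```swift
import UIKit

// Approach 1: drawRect-style. The stretched result is rasterized into this
// view's backing store, so a full 320x200 bitmap is allocated in memory.
class StretchedView: UIView {
    override func draw(_ rect: CGRect) {
        // Drawing here forces CPU rendering into the view's bitmap context.
        UIImage(named: "button")?.draw(in: bounds)
    }
}

// Approach 2: UIImageView with a resizable image. Only the small source
// image is kept in memory; the GPU stretches the cap-inset regions when
// the layer is composited, so no large intermediate bitmap is created.
let imageView = UIImageView(frame: CGRect(x: 0, y: 0, width: 320, height: 200))
imageView.image = UIImage(named: "button")?
    .resizableImage(withCapInsets: UIEdgeInsets(top: 8, left: 8, bottom: 8, right: 8))
```

The memory difference comes entirely from where the stretching happens: in approach 1 it happens on the CPU into an owned bitmap, in approach 2 it happens on the GPU at display time.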

David H