I'm considering building an app that would make heavy use of a flood fill / paint bucket feature. The images I'd be coloring are simple, like coloring book pages: white background, black borders. I'm debating which is better to use: UIImage (manipulating the pixel data directly) or drawing the images with Core Graphics and changing the fill color on touch.
With UIImage, I'm unable to account for retina images properly; the image gets destroyed when I write the context back into a new UIImage, but I can probably figure that out. I'm open to tips, though...
With Core Graphics, I have no idea how to determine which shape to fill when a user touches an area, or how to actually fill it. I've searched but haven't turned up anything useful.
Overall, I believe the optimal solution is Core Graphics, since it'll be lighter weight and I won't have to keep several copies of the same image for different sizes.
Thoughts? Go easy on me! It's my first app and first SO question ;)