I understand that when an Atlas is created, the images are:
a) rotated
b) trimmed of excess transparent pixels
c) possibly converted to a different color format, etc.
So if I had an image that was 50 pixels tall, where only the top half was a solid color and the bottom half was fully transparent, that image would be stored in the atlas with a height of 25 pixels and the transparent pixels would be discarded.
What I'm having a hard time understanding is...
What SHOULD happen when you display this image to the screen from the atlas?
Should I get back an exact copy of the original image (before it was reduced to 25 pixels in height)?
In Xcode 6.x and below, that is exactly what seemed to happen:
- If I filled the entire screen with this image (50% solid color at the top, 50% transparent at the bottom) in Xcode 6, I would see the top half of the screen as a solid color and the bottom half as transparent.
- If I do the exact same thing using the latest SDK, the top 25% of the screen is transparent, the middle 50% is my solid color, and the bottom 25% is transparent.
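For reference, this is roughly how the sprite is being set up (a minimal sketch; "MyAtlas" and "halfSolid" are placeholder names, not my actual assets):

```swift
import SpriteKit

// Minimal sketch of the setup described above. The real texture is 50 px tall,
// with the top half a solid color and the bottom half fully transparent.
func fillScene(with scene: SKScene) {
    let atlas = SKTextureAtlas(named: "MyAtlas")
    let texture = atlas.textureNamed("halfSolid")

    let sprite = SKSpriteNode(texture: texture)
    sprite.size = scene.size                              // stretch the sprite to cover the whole scene
    sprite.position = CGPoint(x: scene.size.width / 2,
                              y: scene.size.height / 2)
    scene.addChild(sprite)
}
```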
I don't know whether the transparent pixels are being added back to the image, whether I'm actually getting back the exact same image that now sits in the atlas (cropped/trimmed of its transparent pixels), or whether the image I get back from the atlas is my original image with the transparent space redistributed equally to each side instead of staying on one side.
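One thing that might help pin this down (I haven't confirmed it explains the difference) is printing what the texture reports at runtime, size() versus textureRect():

```swift
import SpriteKit

// Quick diagnostic sketch (names are placeholders): size() is the size the
// texture reports for drawing, while textureRect() is the sub-rectangle of
// the atlas sheet that is actually sampled when the texture is rendered.
func inspectAtlasTexture() {
    let atlas = SKTextureAtlas(named: "MyAtlas")
    let texture = atlas.textureNamed("halfSolid")

    // If the trimmed transparent padding is restored, size() should match the
    // original image dimensions; if not, it reflects only the cropped region.
    print("reported size: \(texture.size())")
    print("texture rect:  \(texture.textureRect())")
}
```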
I would really like to understand what is happening (right now it seems like a bug).
I know a lot of people recommend putting a 1%-alpha pixel in the corners of the transparent space to make sure the image comes back exactly as they want, but I don't think this is a good solution (it makes the atlas images bigger, and the pixel is visible at 1% alpha if you look hard enough).
Can anyone tell me if this is a bug, or if there is a reason why all my images with transparent pixels under them now come back with the visible content sitting at the bottom of the original width x height frame?