
I am rotating an image on the iPhone. It is a full-size grayscale image (3264 by 2448 on the 4S). By my measurements, OpenGL appears to be roughly twice as fast as Core Graphics, running about 1.22 seconds as opposed to 2.6 seconds. But this is not fast enough for my needs; I need sub-second rotation, if that is possible. (If not, we go to Plan B, which involves rotating a subsection of the image; that is perhaps more elegant but has its own issues.)
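
For reference, those timings come from simply wrapping the rotation call; a minimal sketch of that kind of measurement (the harness and buffer names below are illustrative only, not project code):

    // Illustrative harness: srcBytes/dstBytes stand in for malloc'd
    // 3264 x 2448 grayscale buffers; LTT_RotateGL is the function shown below.
    CFAbsoluteTime t0 = CFAbsoluteTimeGetCurrent();
    LTT_RotateGL( srcBytes, dstBytes, 2448, 3264, 3264, M_PI / 6.0, 0 );
    NSLog( @"rotation took %.3f seconds", CFAbsoluteTimeGetCurrent() - t0 );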

I should make clear that this is only for internal processing of the image, not for display purposes.

I would like to make sure that I am doing this correctly and not making any obvious beginner's mistakes. Here is the code; I would appreciate any hints for improvement.

Thank you, Ken

    void LTT_RotateGL(
                  unsigned char *src,
                  unsigned char *dst,
                  int nHeight,
                  int nWidth,
                  int nRowBytes,
                  double radians,
                  unsigned char borderColor)
    {
        // Copy the source so Core Graphics never touches the caller's buffer.
        unsigned char *tmpSrc = (unsigned char *)malloc( (size_t)nRowBytes * nHeight );
        memcpy( tmpSrc, src, (size_t)nRowBytes * nHeight );

        CGColorSpaceRef graySpace = CGColorSpaceCreateDeviceGray();

        // Wrap the copy in a bitmap context and pull a CGImage out of it.
        CGContextRef context2 = CGBitmapContextCreate(
                                    tmpSrc, nWidth, nHeight, 8, nRowBytes,
                                    graySpace, kCGImageAlphaNone );
        CGImageRef imageRef2 = CGBitmapContextCreateImage( context2 );

        GLuint width  = (GLuint)CGImageGetWidth( imageRef2 );
        GLuint height = (GLuint)CGImageGetHeight( imageRef2 );

        // Destination context: one byte per pixel, tightly packed.
        void *imageData = malloc( (size_t)width * height );
        CGContextRef context = CGBitmapContextCreate( imageData, width, height, 8,
                                                      width, graySpace, kCGImageAlphaNone );

        // Fill with the border color so the corners left uncovered by the
        // rotation are defined, then rotate about the center of the canvas.
        CGContextSetGrayFillColor( context, borderColor / 255.0, 1.0 );
        CGContextFillRect( context, CGRectMake( 0, 0, width, height ) );
        CGContextTranslateCTM( context, width / 2.0, height / 2.0 );
        CGContextRotateCTM( context, -radians );
        CGContextTranslateCTM( context, -(width / 2.0), -(height / 2.0) );
        CGContextDrawImage( context, CGRectMake( 0, 0, width, height ), imageRef2 );

        // Upload the rotated bytes as a luminance texture
        // (the type argument must be GL_UNSIGNED_BYTE).
        glTexImage2D( GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0,
                      GL_LUMINANCE, GL_UNSIGNED_BYTE, imageData );

        // Copy the result out (assumes dst is tightly packed, nRowBytes == nWidth).
        memcpy( dst, imageData, (size_t)nWidth * nHeight );

        CGContextRelease( context );
        CGContextRelease( context2 );
        CGImageRelease( imageRef2 );
        CGColorSpaceRelease( graySpace );
        free( imageData );
        free( tmpSrc );
    }
    The above code doesn't actually use OpenGL for anything. You're rotating your image using Core Graphics, rendering that into your destination byte array, uploading that as a texture (which I hope you've bound before this), and then never rendering a quad with it or reading back the rendered pixels. All you're doing is copying the rotated bytes that Core Graphics gave you. Relying on Core Graphics for rotation will be a slower process. – Brad Larson Jul 24 '13 at 19:49
  • Brad, that is very interesting, and now that you point it out I see what you are saying, thank you. Can you please advise on where I might find the equivalent done in OpenGL? Or better yet, post code! Thank you, Ken – user938797 Jul 29 '13 at 02:16
  • 1
    Well, there is my little hobby project here: https://github.com/BradLarson/GPUImage . If you can go from raw RGBA bytes to raw RGBA bytes, that would be faster within this framework than going to and from a UIImage (the former avoids having to pass through Core Graphics for anything). The fastest approach is to upload once and chain operations on the GPU, if you want to do any kind of multistage processing with the same image. – Brad Larson Jul 29 '13 at 02:30
  • Thanks Brad, I'll check this out in the morning. We are working strictly in grayscale and likely are doing nothing but the rotation in GL, simply because we need to rotate a large image real fast. I'm still hoping that the hardware can even do it; that remains unclear. – user938797 Jul 29 '13 at 07:07
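
Following up on Brad Larson's comments above, here is a rough, untested sketch of the render-and-readback path he describes, i.e. doing the rotation itself on the GPU instead of in Core Graphics: upload the grayscale bytes as a luminance texture, draw a rotated textured quad into a texture-backed FBO, and read the pixels back. Everything below (the function name LTT_RotateGLSketch, the shader uniforms, the assumption that an OpenGL ES 2.0 EAGLContext is current on the calling thread) is illustrative rather than project code, and the sign of the angle may need flipping to match the Core Graphics convention.

    #import <OpenGLES/ES2/gl.h>
    #include <stdlib.h>

    // Rough sketch, not project code: assumes an OpenGL ES 2.0 EAGLContext is
    // already current on this thread; shader compile/link error checks omitted.
    static const char *kVertSrc =
        "attribute vec2 position; attribute vec2 texCoord;\n"
        "uniform float angle; uniform float aspect;\n"
        "varying vec2 vTexCoord;\n"
        "void main() {\n"
        "  mat2 rot = mat2(cos(angle), sin(angle), -sin(angle), cos(angle));\n"
        "  vec2 p = rot * (position * vec2(aspect, 1.0));\n"
        "  gl_Position = vec4(p.x / aspect, p.y, 0.0, 1.0);\n"
        "  vTexCoord = texCoord;\n"
        "}";

    static const char *kFragSrc =
        "precision mediump float;\n"
        "uniform sampler2D tex;\n"
        "varying vec2 vTexCoord;\n"
        "void main() { gl_FragColor = texture2D(tex, vTexCoord); }";

    static GLuint LTT_CompileShader( GLenum type, const char *src )
    {
        GLuint s = glCreateShader( type );
        glShaderSource( s, 1, &src, NULL );
        glCompileShader( s );
        return s;
    }

    void LTT_RotateGLSketch( const unsigned char *src, unsigned char *dst,
                             int width, int height, float radians,
                             unsigned char borderColor )
    {
        // 1. Upload the grayscale source as a luminance texture. NPOT sizes are
        //    fine on iOS with CLAMP_TO_EDGE wrapping and no mipmaps.
        GLuint srcTex;
        glGenTextures( 1, &srcTex );
        glBindTexture( GL_TEXTURE_2D, srcTex );
        glPixelStorei( GL_UNPACK_ALIGNMENT, 1 );
        glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
        glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
        glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
        glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );
        glTexImage2D( GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0,
                      GL_LUMINANCE, GL_UNSIGNED_BYTE, src );

        // 2. GL_LUMINANCE is not color-renderable in ES 2.0, so render into an
        //    RGBA texture attached to an FBO and keep one channel on readback.
        GLuint dstTex, fbo;
        glGenTextures( 1, &dstTex );
        glBindTexture( GL_TEXTURE_2D, dstTex );
        glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                      GL_RGBA, GL_UNSIGNED_BYTE, NULL );
        glGenFramebuffers( 1, &fbo );
        glBindFramebuffer( GL_FRAMEBUFFER, fbo );
        glFramebufferTexture2D( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                GL_TEXTURE_2D, dstTex, 0 );
        glViewport( 0, 0, width, height );

        // 3. Clear to the border color, then draw one textured quad that the
        //    vertex shader rotates about the center of the image.
        glClearColor( borderColor / 255.0f, borderColor / 255.0f,
                      borderColor / 255.0f, 1.0f );
        glClear( GL_COLOR_BUFFER_BIT );
        GLuint prog = glCreateProgram();
        glAttachShader( prog, LTT_CompileShader( GL_VERTEX_SHADER, kVertSrc ) );
        glAttachShader( prog, LTT_CompileShader( GL_FRAGMENT_SHADER, kFragSrc ) );
        glBindAttribLocation( prog, 0, "position" );
        glBindAttribLocation( prog, 1, "texCoord" );
        glLinkProgram( prog );
        glUseProgram( prog );
        glUniform1f( glGetUniformLocation( prog, "angle" ), radians );
        glUniform1f( glGetUniformLocation( prog, "aspect" ), (float)width / (float)height );
        glUniform1i( glGetUniformLocation( prog, "tex" ), 0 );
        glActiveTexture( GL_TEXTURE0 );
        glBindTexture( GL_TEXTURE_2D, srcTex );
        static const GLfloat quad[] = {   // x, y, u, v
            -1, -1, 0, 0,   1, -1, 1, 0,   -1, 1, 0, 1,   1, 1, 1, 1 };
        glEnableVertexAttribArray( 0 );
        glEnableVertexAttribArray( 1 );
        glVertexAttribPointer( 0, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), quad );
        glVertexAttribPointer( 1, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), quad + 2 );
        glDrawArrays( GL_TRIANGLE_STRIP, 0, 4 );

        // 4. Read back RGBA and keep the red channel as the grayscale result
        //    (dst is assumed to be tightly packed, width bytes per row).
        unsigned char *rgba = (unsigned char *)malloc( (size_t)width * height * 4 );
        glReadPixels( 0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, rgba );
        for ( long i = 0; i < (long)width * height; i++ )
            dst[i] = rgba[i * 4];
        free( rgba );

        // For repeated calls, these objects should be created once and reused.
        glDeleteProgram( prog );
        glDeleteFramebuffers( 1, &fbo );
        glDeleteTextures( 1, &srcTex );
        glDeleteTextures( 1, &dstTex );
    }

As Brad notes, if more processing follows the rotation, it is faster still to upload once and chain operations on the GPU (which is what GPUImage packages up) rather than paying for the texture upload and glReadPixels readback at every step.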

0 Answers