
I initialize my QCRenderer like this:

let glPFAttributes: [NSOpenGLPixelFormatAttribute] = [
    UInt32(NSOpenGLPFABackingStore),
    UInt32(0)
]
let glPixelFormat = NSOpenGLPixelFormat(attributes: glPFAttributes)
if glPixelFormat == nil {
    println("Pixel Format is nil")
    return
}
let openGLView = NSOpenGLView(frame: glSize, pixelFormat: glPixelFormat)
let openGLContext = NSOpenGLContext(format: glPixelFormat, shareContext: nil)
let qcRenderer = QCRenderer(openGLContext: openGLContext, pixelFormat: glPixelFormat, file: compositionPath)
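One detail that may matter here: the NSOpenGLContext created above is never attached to a drawable, so its default framebuffer may be incomplete when the composition's Clear patch executes. As a sketch (untested assumption on my part that QCRenderer wants a drawable-backed context), the context can be attached to the view and made current before rendering:

    // Sketch: give the context a drawable by attaching it to the view
    // (assumption: the "invalid framebuffer operation" comes from a
    // context that has no drawable, hence no complete default framebuffer).
    openGLContext.view = openGLView
    openGLContext.makeCurrentContext()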

Further down in the code, I call renderAtTime like this:

if !qcRenderer.renderAtTime(frameTime, arguments: nil) {
    println("Rendering failed at \(frameTime)s.")
    return
}

which always produces this error message:

2014-10-30 15:30:50.976 HQuartzRenderer[3996:692255] *** Message from <QCClear = 0x100530590 "Clear_1">:
OpenGL error 0x0506 (invalid framebuffer operation)
2014-10-30 15:30:50.976 HQuartzRenderer[3996:692255] *** Message from <QCClear = 0x100530590 "Clear_1">:
Execution failed at time 0.000
2014-10-30 15:30:50.976 HQuartzRenderer[3996:692255] *** Message from <QCPatch = 0x100547860 "(null)">:
Execution failed at time 0.000
Rendering failed at 0.0s.

The Quartz Composition is just a simple GLSL shader which runs just fine in Quartz Composer. Here's a screenshot of the Quartz Composer window: [screenshot: Quartz Composition]

I couldn't find much about this error on the internet. I hope someone here knows something that might help.

By the way, I know that I could just initialize the QCRenderer like this:

let qcRenderer = QCRenderer(offScreenWithSize: size, colorSpace: CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB), composition: qcComposition)

but I want to take advantage of my GPU's multisampling capabilities to get an antialiased image. That should be considerably more efficient than rendering at 4x size and then downsizing the image manually.
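For reference, this is the kind of multisample-capable pixel format I'd eventually like to use (a sketch only; the sample count is an assumption and I haven't verified that QCRenderer accepts it):

    // Sketch of a pixel format requesting 4x MSAA; the maximum
    // supported sample count depends on the GPU.
    let msaaAttributes: [NSOpenGLPixelFormatAttribute] = [
        UInt32(NSOpenGLPFAAccelerated),
        UInt32(NSOpenGLPFAColorSize), UInt32(32),
        UInt32(NSOpenGLPFADepthSize), UInt32(24),
        UInt32(NSOpenGLPFAMultisample),
        UInt32(NSOpenGLPFASampleBuffers), UInt32(1),
        UInt32(NSOpenGLPFASamples), UInt32(4),
        UInt32(0)
    ]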

Edit: Changed pixel format to

let glPFAttributes: [NSOpenGLPixelFormatAttribute] = [
    UInt32(NSOpenGLPFAAccelerated),
    UInt32(NSOpenGLPFADoubleBuffer),
    UInt32(NSOpenGLPFANoRecovery),
    UInt32(NSOpenGLPFABackingStore),
    UInt32(NSOpenGLPFAColorSize), UInt32(128),
    UInt32(NSOpenGLPFADepthSize), UInt32(24),
    UInt32(NSOpenGLPFAOpenGLProfile),
    UInt32(NSOpenGLProfileVersion3_2Core),
    UInt32(0)
]
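To see which context version a given pixel format actually produces, the GL strings can be printed after making the context current (a sketch in Swift 1.x syntax; assumes the OpenGL framework is imported):

    import OpenGL.GL

    // Diagnostic sketch: print the GL strings for the current context.
    openGLContext.makeCurrentContext()
    if let version = String.fromCString(UnsafePointer<CChar>(glGetString(GLenum(GL_VERSION)))) {
        println("GL_VERSION: \(version)")
    }
    if let renderer = String.fromCString(UnsafePointer<CChar>(glGetString(GLenum(GL_RENDERER)))) {
        println("GL_RENDERER: \(renderer)")
    }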
  • Invalid Framebuffer Operation occurs in OS X on occasion if you wind up trying to use certain GL 3 functions in a GL 2.1 pixel format. It'd be helpful if you listed a few of your GL strings (e.g. `GL_VERSION` and `GL_RENDERER`) at run-time along with the full pixel format you're trying to use. – Andon M. Coleman Oct 30 '14 at 21:43
  • The pixel format above is the full one. I have tried them all, though, no matter which properties I set, the errors always occur. I've added renderer information to the OP. – Peter W. Oct 30 '14 at 21:49
  • That render information is not relevant to this question, because that's a separate program. I was referring to the version that your very limited pixel format attribute list is getting you. You'd need to call `glGetString (GL_VERSION)`, etc. to get that. – Andon M. Coleman Oct 30 '14 at 21:52
  • I've added that to the attribute list. Trying to render images with the pixel format now yields these errors: "*** CGLCreateContext() called from "_CreateGLContext" returned error 10009 ("invalid share context")" and "-[QCCGLContext minimalSharedContextForCurrentThread]: Inconsistent state" – Peter W. Oct 30 '14 at 21:59
  • Is there any reason you're trying to use 128-bit color? That's not hardware accelerated unless you're using a floating-point color buffer (in which case each color channel is 32-bit -- but you'd need to have `NSOpenGLPFAColorFloat` in your attribute list, which is not there). I'd go with 32-bit ***total*** for sanity ;) Go ahead and remove the OpenGL profile stuff from the pixel format attribute list and see if 32-bit color changes anything. – Andon M. Coleman Oct 30 '14 at 22:05
  • Basically, I want to know if I can make QRRenderer render deep color images for post processing purposes. Also, as far as I'm aware, passing X for NSOpenGLPFAColorSize means X-bit color depth for one pixel (= X/3 for one channel). Changed the values from 128 and 24 to 96 and 32 because apparently I forgot how to math until now, still getting the same error. – Peter W. Oct 30 '14 at 22:11

0 Answers