
I am trying to monitor a stream of video output from a FireWire camera. I have created an Interface Builder interface with buttons and an NSImageView. While image monitoring is occurring within an endless loop, I want to:

  • change some camera parameters on the fly (gain, gamma, etc.)
  • tell the monitoring to stop so I can save an image to a file (set a flag that stops the while loop)

Using the button features, I have been unable to loop the video frame monitor while still watching for a button press (much like using the keypressed feature from C). Two options present themselves:

  1. Initiate a new run loop (for which I cannot get an autoreleasepool to function ...)
  2. Initiate an NSOperation - how do I do this in a way that allows me to connect it to a button push from my Interface Builder interface?

The documentation is very obtuse about the creation of such objects. If I create an NSOperation as per the examples I've found, there seems to be no way to communicate with it from an object created in Interface Builder. When I create an NSRunLoop, I get an object leak error, and I can find no example of how to create an autorelease pool that actually responds to the run loop I've created. Never mind that I haven't even attempted to choose which objects get sampled by the secondary run loop ...

Because Objective-C is (obviously!) not my native tongue, I am looking for solutions with baby steps, sorry to say ... Thanks in advance

monty wood

2 Answers

I've needed to do almost exactly the same as you, only with a continuous video display from the FireWire camera. In my case, I used the libdc1394 library to perform the frame capture and camera property adjustment for our FireWire cameras. I know you can also do this using some of the Carbon QuickTime functions, but I found libdc1394 to be a little easier to understand.

For the video capture loop, I tried a number of different approaches, from a separate thread that polls the camera and has locks around shared resources, to using one NSOperationQueue for interaction with the camera, and finally settled on using a CVDisplayLink to poll the camera in a way that matches the refresh rate of the screen.

The CVDisplayLink is configured using the following code:

CGDirectDisplayID   displayID = CGMainDisplayID();  
CVReturn            error = kCVReturnSuccess;
// Create a display link tied to the refresh rate of the main display
error = CVDisplayLinkCreateWithCGDisplay(displayID, &displayLink);
if (error)
{
    NSLog(@"DisplayLink created with error:%d", error);
    displayLink = NULL;
}
// Register the C callback below; self is passed through as the context pointer
CVDisplayLinkSetOutputCallback(displayLink, renderCallback, self);  

and it calls the following function to trigger the retrieval of a new camera frame:

static CVReturn renderCallback(CVDisplayLinkRef displayLink, 
                               const CVTimeStamp *inNow, 
                               const CVTimeStamp *inOutputTime, 
                               CVOptionFlags flagsIn, 
                               CVOptionFlags *flagsOut, 
                               void *displayLinkContext)
{
    return [(SPVideoView *)displayLinkContext renderTime:inOutputTime];
}

The CVDisplayLink is started and stopped using the following:

- (void)startRequestingFrames;
{
    CVDisplayLinkStart(displayLink);    
}

- (void)stopRequestingFrames;
{
    CVDisplayLinkStop(displayLink);
}

Rather than using a lock on the FireWire camera communications, whenever I need to adjust the exposure, gain, etc. I change corresponding instance variables and set the appropriate bits within a flag variable to indicate which settings to change. On the next retrieval of a frame, the callback method from the CVDisplayLink changes the appropriate settings on the camera to match the locally stored instance variables and clears that flag.
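For illustration only, that mechanism might look something like the following sketch, with made-up names for the flag bits and instance variables (pendingGain, settingsToChange, camera) and the libdc1394 v2 dc1394_feature_set_value() call standing in for whatever your camera library provides:

// Illustrative bit flags for settings waiting to be pushed to the camera;
// requires #include <dc1394/dc1394.h> and a dc1394camera_t *camera instance variable
enum {
    kSettingGain  = 1 << 0,
    kSettingGamma = 1 << 1
};

// Called from the UI on the main thread; only touches local state
- (void)setGain:(uint32_t)newGain
{
    pendingGain = newGain;
    settingsToChange |= kSettingGain;
}

// Called from the CVDisplayLink callback, just before grabbing the next frame
- (void)applyPendingCameraSettings
{
    if (settingsToChange & kSettingGain)
    {
        // libdc1394 v2-style call; substitute the equivalent for your capture library
        dc1394_feature_set_value(camera, DC1394_FEATURE_GAIN, pendingGain);
        settingsToChange &= ~kSettingGain;
    }
    // ...repeat for gamma, exposure, etc.
}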

Display to the screen is handled through an NSOpenGLView (CAOpenGLLayer introduced too many visual artifacts when updating at this rate, and its update callbacks ran on the main thread). Apple has some extensions you can use to provide these frames as textures using DMA for better performance.
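As a rough sketch of that upload path (assuming an 8-bit BGRA frame buffer, a rectangle texture created elsewhere, and the GL_APPLE_client_storage / GL_APPLE_texture_range extensions), the render method can do something like:

// Rough sketch: upload the latest frame inside the view's render path.
// cameraTexture, frameWidth, frameHeight, and frameBuffer are assumed ivars;
// needs <OpenGL/gl.h> and <OpenGL/glext.h>
- (void)uploadCurrentFrameTexture
{
    glEnable(GL_TEXTURE_RECTANGLE_ARB);
    glBindTexture(GL_TEXTURE_RECTANGLE_ARB, cameraTexture);

    // Use our own buffer as the texture backing store (GL_APPLE_client_storage) ...
    glPixelStorei(GL_UNPACK_CLIENT_STORAGE_APPLE, GL_TRUE);
    // ... and hint that it can be DMA'd straight to the GPU (GL_APPLE_texture_range)
    glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_STORAGE_HINT_APPLE, GL_STORAGE_SHARED_APPLE);

    // BGRA with 8_8_8_8_REV is the format Apple recommends for fast texture uploads
    glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGBA, frameWidth, frameHeight, 0,
                 GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, frameBuffer);
}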

Unfortunately, nothing that I've described here is introductory-level stuff. I have about 2,000 lines of code for these camera-handling functions in our software and this took a long time to puzzle out. If Apple could add the manual camera settings adjustments to the QTKit Capture APIs, I could remove almost all of this.

Brad Larson

If all you're trying to do is see/grab the output of a connected camera, the answer is probably neither.

Use QTKit's QTCaptureView. Problem solved. Want to grab a frame? Also no problem. Don't try to roll your own - QTKit's stuff is optimized and part of the OS. I'm pretty sure you can affect camera properties as you wanted but if not, plan B should work.
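Wiring the view up takes only a few lines. A minimal sketch, assuming the default video input device and a captureView outlet connected to a QTCaptureView in your nib:

// Minimal sketch (needs #import <QTKit/QTKit.h>); session would normally be an
// instance variable and captureView an outlet to a QTCaptureView in the nib
- (void)startPreview
{
    NSError *error = nil;
    session = [[QTCaptureSession alloc] init];

    QTCaptureDevice *device = [QTCaptureDevice defaultInputDeviceWithMediaType:QTMediaTypeVideo];
    if ([device open:&error])
    {
        QTCaptureDeviceInput *input = [[[QTCaptureDeviceInput alloc] initWithDevice:device] autorelease];
        [session addInput:input error:&error];
        [captureView setCaptureSession:session];
        [session startRunning];
    }
}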

Plan B: Use a scheduled, recurring NSTimer to ask QTKit to grab a frame every so often ("how" linked above) and apply your image manipulations to the frame (maybe with Core Image) before displaying it in your NSImageView.
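A rough sketch of that timer approach, assuming a QTCaptureDecompressedVideoOutput has already been added to the session, latestFrame and imageView are instance variables, and manual reference counting:

// Delegate callback from QTCaptureDecompressedVideoOutput; just hold on to the latest frame
- (void)captureOutput:(QTCaptureOutput *)captureOutput
  didOutputVideoFrame:(CVImageBufferRef)videoFrame
     withSampleBuffer:(QTSampleBuffer *)sampleBuffer
       fromConnection:(QTCaptureConnection *)connection
{
    CVImageBufferRef oldFrame;
    @synchronized (self)
    {
        oldFrame = latestFrame;
        latestFrame = CVBufferRetain(videoFrame);
    }
    if (oldFrame) CVBufferRelease(oldFrame);
}

// Started once, e.g. in awakeFromNib
- (void)startMonitoring
{
    [NSTimer scheduledTimerWithTimeInterval:0.1
                                     target:self
                                   selector:@selector(updateImageView:)
                                   userInfo:nil
                                    repeats:YES];
}

// Runs on the main thread; converts the most recent frame for the NSImageView
- (void)updateImageView:(NSTimer *)timer
{
    @synchronized (self)
    {
        if (latestFrame == NULL) return;
        CIImage *ciImage = [CIImage imageWithCVImageBuffer:latestFrame];
        NSCIImageRep *rep = [NSCIImageRep imageRepWithCIImage:ciImage];
        NSImage *image = [[[NSImage alloc] initWithSize:[rep size]] autorelease];
        [image addRepresentation:rep];
        [imageView setImage:image];
    }
}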

Joshua Nozzi
  • There currently is no way to change the exposure, gain, etc. parameters of an attached FireWire camera via the QTKit Capture APIs (duplicate rdar://5760371 "Ability to set brightness, gain, etc. for cameras in QTKit Capture API" if you'd like this functionality). – Brad Larson Mar 15 '11 at 20:57
  • I have come across the following example at "Cocoa is my Girlfriend" using QTKit: http://www.cimgf.com/2008/02/23/nsoperation-example/ This looks promising. I am also using the libdc1394 stuff, and I've gotten to where it looks like I've captured an image. I am having trouble accessing the data in a form I can use (translating a pointer to unsigned char into an array of 16-bit integers from a 14-bit camera, ultimately for saving as a tiff file ...) – monty wood Mar 17 '11 at 15:59
  • If you want absolute control, Brad's method is a sound approach, but I really do think QTKit is the way to go. Perfectly fine to use an NSOperation to do this but it's so quick, it might be unnecessary to use a separate thread. – Joshua Nozzi Mar 17 '11 at 16:28
  • Not sure I understand, though, why you're having trouble getting a TIFF representation of what QTKit gives you for a captured frame. Steps 12 and 13 of the frame grabber example I linked you to get you an NSImage; getting a TIFF representation (an NSData instance) is as easy as calling -[myImage TIFFRepresentation], and NSData can easily be saved to a file using its own methods... – Joshua Nozzi Mar 17 '11 at 16:31
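For the 16-bit data mentioned in the comments above, a rough sketch of building TIFF data by hand might look like the following (TIFFDataFromRawFrame is a hypothetical helper; it assumes little-endian 16-bit samples with 14 significant bits and a single grayscale channel):

// Hypothetical helper: wrap raw camera bytes in an NSBitmapImageRep and get TIFF data.
// Save the returned NSData with -writeToFile:atomically:.
NSData *TIFFDataFromRawFrame(const unsigned char *rawBytes, NSInteger width, NSInteger height)
{
    const uint16_t *samples = (const uint16_t *)rawBytes;
    NSMutableData *pixelData = [NSMutableData dataWithLength:width * height * sizeof(uint16_t)];
    uint16_t *pixels = (uint16_t *)[pixelData mutableBytes];
    for (NSInteger i = 0; i < width * height; i++)
        pixels[i] = (uint16_t)(samples[i] << 2);   // scale 14-bit values to the full 16-bit range

    unsigned char *planes[1] = { (unsigned char *)pixels };
    NSBitmapImageRep *rep = [[[NSBitmapImageRep alloc]
            initWithBitmapDataPlanes:planes
                          pixelsWide:width
                          pixelsHigh:height
                       bitsPerSample:16
                     samplesPerPixel:1
                            hasAlpha:NO
                            isPlanar:NO
                      colorSpaceName:NSCalibratedWhiteColorSpace
                         bytesPerRow:width * sizeof(uint16_t)
                        bitsPerPixel:16] autorelease];
    return [rep TIFFRepresentation];
}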