
I want to develop an application for Mac OS X to record audio from one application.

I played around with Soundflower, but it only grabs the full system audio.
I know that I have to use a HAL plug-in. This plug-in is loaded from an application that uses Core Audio and then I can communicate with the plug-in to grab the audio.

My question is: What does such a plug-in look like? Are there examples on the Internet? I have not found anything on this topic.

Paul Warkentin
  • To clarify here: You're looking to write something like `http://rogueamoeba.com/audiohijackpro/` and you want to know how they did it, right? – abarnert Nov 05 '12 at 20:48
  • 1
    PS, you may want to play more with Soundflower—between SoundflowerBed and Soundflower16, you can often get around the fact that some apps don't let you select an audio output. (Also, if you're doing this as a one-off for yourself, rather than an app to sell, it may be easier to inject code into the target app and just force it to select your Soundflower16 channel 3/4 output than to write your own plugin from scratch.) – abarnert Nov 05 '12 at 20:53
  • Yes, that's right. But how can I inject code into an application that forces it to select a Soundflower channel? Is there any source code on the Internet? – Paul Warkentin Nov 05 '12 at 21:17
  • 1
    Well, if it's a Cocoa app, it's easy; look at the F-Script injection service for how to inject, and then you just override stuff in the ObjC runtime. The alternative, which doesn't require Cocoa, is to use SIMBL to get your code injected and hook functions in the C level, which is complicated. It's definitely easier if you already know how to do it… but probably not so much if you don't. (There's also a really simple option: Set Soundflower as the default audio out, and set all your _other_ apps to use a different audio out. But this only works if you're only running configurable apps.) – abarnert Nov 05 '12 at 21:19
  • Two more options: First, what app are you trying to grab audio from? Many Core Audio apps already have a way to configure an output, or even do custom routing, in their preferences, which is obviously easiest if it's possible. Second, if you're just looking to save money, Piezo is half the price of Audio Hijack, and I think it does what you need. While you're there, look through rogueamoeba's blogs and release notes to see all the problems they've had dealing with each new OS version (and, often, Safari and QT version)… – abarnert Nov 05 '12 at 21:22
  • It's a Cocoa application. I checked out F-Script and it works! But now I'm searching for code to change the output device of a single application. My goal is to change it in my own application first; after that I can inject the code into another one. I looked at Apple's CAPlayThrough example, but it's not well documented, which makes it difficult to understand how the output device is changed. I haven't found anything else. Is there another example on the Internet? – Paul Warkentin Nov 11 '12 at 19:58
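[Editor's note: the last comment asks how to switch a single application's output device. A minimal sketch, not from the thread, of the usual approach: translate the target device's UID (e.g. Soundflower's) to an `AudioDeviceID` and set it as the output unit's current device. OS X only; error handling trimmed.]

```c
// Sketch: point an output AudioUnit at a specific device (e.g. Soundflower)
// by translating the device's UID to an AudioDeviceID and then setting
// kAudioOutputUnitProperty_CurrentDevice on the unit.
#include <CoreAudio/CoreAudio.h>
#include <AudioToolbox/AudioToolbox.h>

static OSStatus SetOutputDeviceByUID(AudioUnit outputUnit, CFStringRef uid) {
    AudioDeviceID device = kAudioObjectUnknown;
    AudioValueTranslation xlat = { &uid, sizeof(uid), &device, sizeof(device) };
    UInt32 xlatSize = sizeof(xlat);
    AudioObjectPropertyAddress addr = {
        kAudioHardwarePropertyDeviceForUID,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };
    // Ask the HAL which AudioDeviceID corresponds to this UID.
    AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr,
                               0, NULL, &xlatSize, &xlat);
    // Make the translated device the unit's output device.
    return AudioUnitSetProperty(outputUnit,
                                kAudioOutputUnitProperty_CurrentDevice,
                                kAudioUnitScope_Global, 0,
                                &device, sizeof(device));
}
```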

1 Answer


Now that you've decided that using Cocoa injection is a feasible solution to your problem, let's start there.

What you need to do is find out how the ObjC classes in the app are setting up to play audio, and hook in to set a different AU in place of the default system out.
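As a sketch of the hooking side (all class and method names here are hypothetical — you'd find the real ones by inspecting the target app, e.g. with F-Script or class-dump), the standard Cocoa-injection trick is method swizzling via the Objective-C runtime:

```c
// Objective-C: swap a target class's audio-setup method for our own,
// so we get a chance to substitute our AU. `TargetAudioController` and
// -setupOutput are made-up names standing in for whatever the app uses.
#import <objc/runtime.h>
#import <Foundation/Foundation.h>

@interface AudioHook : NSObject
- (void)hooked_setupOutput;
@end

@implementation AudioHook
- (void)hooked_setupOutput {
    // After the exchange below, this call actually invokes the app's
    // original -setupOutput implementation.
    [self hooked_setupOutput];
    // ...then install our own AU / output device here.
}
@end

static void InstallHook(void) {
    Class target = objc_getClass("TargetAudioController");
    Method repl = class_getInstanceMethod([AudioHook class],
                                          @selector(hooked_setupOutput));
    // Copy our method onto the target class, then swap implementations.
    class_addMethod(target, @selector(hooked_setupOutput),
                    method_getImplementation(repl),
                    method_getTypeEncoding(repl));
    Method orig = class_getInstanceMethod(target, @selector(setupOutput));
    Method added = class_getInstanceMethod(target,
                                           @selector(hooked_setupOutput));
    method_exchangeImplementations(orig, added);
}
```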

There are two options (besides writing your own custom AU from scratch, which you don't need to do). You can use AUHAL as the AU and capture the data from it; this is a bit easier from the point of view of hooking things up, but it means you have to write the code that renders and saves the audio. Or you can hook in a save-to-file AU, which is a bit harder to hook up, but once you do, it takes care of rendering automatically.
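The first option — capturing from an AUHAL unit — can be sketched like this (OS X with the AudioToolbox framework; error handling trimmed, and what you do with the captured buffers is up to you):

```c
// Sketch: create an AUHAL output unit and attach a render-notify
// callback that sees every buffer on its way to the hardware.
#include <AudioToolbox/AudioToolbox.h>

static OSStatus TapProc(void *refCon,
                        AudioUnitRenderActionFlags *flags,
                        const AudioTimeStamp *ts,
                        UInt32 bus,
                        UInt32 frames,
                        AudioBufferList *buffers) {
    if (*flags & kAudioUnitRenderAction_PostRender) {
        // `buffers` now holds the rendered audio; copy it to a ring
        // buffer, or hand it to an ExtAudioFile for saving.
    }
    return noErr;
}

static AudioUnit MakeTappedOutputUnit(void) {
    AudioComponentDescription desc = {
        .componentType         = kAudioUnitType_Output,
        .componentSubType      = kAudioUnitSubType_HALOutput, // AUHAL
        .componentManufacturer = kAudioUnitManufacturer_Apple,
    };
    AudioComponent comp = AudioComponentFindNext(NULL, &desc);
    AudioUnit unit;
    AudioComponentInstanceNew(comp, &unit);
    // Get called before and after every render cycle.
    AudioUnitAddRenderNotify(unit, TapProc, NULL);
    AudioUnitInitialize(unit);
    return unit;
}
```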

So, how do you hook things in? Well, most of the higher-level CA calls are written to just write to the current output. If the app is doing things that way, you just need to hook in at startup to find your replacement AU and set it as the current output, in place of the default. On the other hand, if the app is writing directly to an AU that it stores in a variable, you have to hook it to store your AU as a variable. And if it's building a graph of AUs, you either replace the default output, or stick yours in front of it, in the graph.
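For the graph case, a minimal sketch of wiring your output unit in where the default output would normally sit (era-appropriate AUGraph API, since deprecated; the generator node here is just a stand-in for whatever source the app actually has):

```c
// Sketch: build an AUGraph whose output node is our AUHAL unit instead
// of the default system output. OS X only; error handling trimmed.
#include <AudioToolbox/AudioToolbox.h>

static void BuildGraph(void) {
    AUGraph graph;
    NewAUGraph(&graph);

    // Stand-in for the app's existing source node.
    AudioComponentDescription srcDesc = {
        .componentType         = kAudioUnitType_Generator,
        .componentSubType      = kAudioUnitSubType_AudioFilePlayer,
        .componentManufacturer = kAudioUnitManufacturer_Apple,
    };
    // Our output: AUHAL rather than kAudioUnitSubType_DefaultOutput.
    AudioComponentDescription outDesc = {
        .componentType         = kAudioUnitType_Output,
        .componentSubType      = kAudioUnitSubType_HALOutput,
        .componentManufacturer = kAudioUnitManufacturer_Apple,
    };

    AUNode srcNode, outNode;
    AUGraphAddNode(graph, &srcDesc, &srcNode);
    AUGraphAddNode(graph, &outDesc, &outNode);
    AUGraphOpen(graph);

    // source bus 0 -> our output's bus 0
    AUGraphConnectNodeInput(graph, srcNode, 0, outNode, 0);

    AUGraphInitialize(graph);
    AUGraphStart(graph);
}
```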

See TN2091 for sample code fragments covering the hard parts of most of these possibilities. It doesn't show you how to put them together, it says a lot more about setting inputs than outputs (because that's harder), and the terminology can get confusing, but if you read it carefully, you should be able to find the parts you need.

If you haven't yet built a simple AU host and AU plugin before, you really should take the time to work through the whole Audio Unit Development Fundamentals guide. (And if you don't think you really need to know all that to do something simple, you're wrong. Why CoreAudio is Hard explains half of the reason; the changes between OS X versions are the other half.)

You probably also want to look at CocoaDev's CoreAudioAndAudioUnitsTutorial page: it's a placeholder for a complete tutorial that nobody's ever written, but it links to a lot of useful stuff.

Meanwhile, if injecting the whole MTCoreAudio framework into the app is feasible, it comes with a ton of nice, complete samples. In fact, even if you aren't going to use the framework, it's worth reading the Overview documentation, and possibly the source code.

abarnert