
My goal is to create a sampler instrument for iPhone/iOS.

The instrument should play back sound files at different pitches/notes, and it should have a volume envelope.

A volume envelope means that the sound's volume fades in when it starts to play. I have tried countless ways of creating that. The desired approach is to use an AVAudioEngine's AVAudioPlayerNode and then process the individual samples of that node in real time.

Unfortunately, I have had no success with that approach so far. Could you give me some pointers on how this works on iOS?

Thanks, Tobias

PS: I have not learned the Core Audio framework. Maybe it is possible to access an AVAudioNode's Audio Unit to do this job, but I have not had the time to read into the framework yet.

Tobias Schmidt

2 Answers


I looked at another post of yours; I think AUSampler - Controlling the Settings of the AUSampler in Real Time is what you're looking for.

I haven't used AVAudioUnitSampler yet, but I believe it is just a wrapper for the AUSampler. To configure an AUSampler you must first make and export a preset file on your Mac using AU Lab. This file is a plist which contains file references plus the sampler's decay, volume, pitch, cutoff, and all of the good stuff that the AUSampler is built for. Then this file is put into your app bundle. You then create a directory named "Sounds", copy all of the referenced audio samples into that folder, and put it in your app bundle as well (as a folder reference). Then you create your audio graph (or in your case AVAudioEngine) and sampler, and load the preset from the preset file in your app bundle (see the sketch below). It's kind of a pain. The links I'm providing are what I used to get up and running, but they are a little dated; if I were to start now I would definitely look into the AVAudioUnitSampler first to see if there are easier ways.
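If you do go the AVAudioUnitSampler route, loading an exported preset into an AVAudioEngine looks roughly like this. This is a minimal Swift sketch; the preset name "MySampler" and the MIDI note number are placeholders for your own:

```swift
import AVFoundation

// Minimal sketch: load an .aupreset (exported from AU Lab) into an
// AVAudioUnitSampler attached to an AVAudioEngine.
let engine = AVAudioEngine()
let sampler = AVAudioUnitSampler()

engine.attach(sampler)
engine.connect(sampler, to: engine.mainMixerNode, format: nil)

do {
    guard let presetURL = Bundle.main.url(forResource: "MySampler",
                                          withExtension: "aupreset") else {
        fatalError("Preset not found in app bundle")
    }
    // loadInstrument(at:) accepts .aupreset files (as well as .sf2/.dls/audio files).
    try sampler.loadInstrument(at: presetURL)
    try engine.start()

    // Play middle C; the sampler applies the envelope defined in the preset.
    sampler.startNote(60, withVelocity: 100, onChannel: 0)
} catch {
    print("Failed to set up sampler: \(error)")
}
```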

To get AU Lab, go to Apple's developer downloads and select "Audio Tools for Xcode". Once downloaded, just open the DMG and drag the folder anywhere (I drag it to my Applications folder). Inside is AU Lab. Here is a technical note describing how to load presets, another technical note on how to change parameters (such as attack/decay) in real time, and here is a WWDC video that walks you through the whole thing, including the creation of your preset using AU Lab.

dave234
  • Unfortunately AU Lab seems to be broken at the moment. http://stackoverflow.com/questions/30539748/unable-to-save-performance-parameters-in-ausampler – Youngin Jun 11 '15 at 02:54
  • It is broken, but one workaround is to edit an existing "instrument connection" in the AULab to accomplish the same thing. – dave234 Jun 11 '15 at 06:14
  • I answered his [other question](http://stackoverflow.com/questions/29880528/swift-avfoundation-proper-volume-envelopes-for-avaudiounitsampler/30762268#30762268) with a workaround that allows you to add an attack connection. – dave234 Jun 11 '15 at 13:46

A lower-level way is to read the audio from the file and process the audio buffers yourself.

You store the ADSR envelope in an array or, better, as a mathematical function that outputs the envelope value for the sample index you pass it (using interpolation), so that the envelope maps onto any sound's duration.
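As an illustration, a breakpoint envelope with linear interpolation could be sketched in Swift like this (the breakpoint positions and gains are just example values):

```swift
// Breakpoints are normalized positions (0...1) within the sound's duration,
// so the same envelope maps onto any length of audio. Values between
// breakpoints are linearly interpolated.
struct Envelope {
    // (normalized position, gain) pairs forming a simple attack/decay/sustain/release shape.
    let points: [(position: Float, gain: Float)] = [
        (0.0, 0.0),   // start silent
        (0.1, 1.0),   // attack
        (0.3, 0.7),   // decay to sustain level
        (0.8, 0.7),   // sustain
        (1.0, 0.0)    // release
    ]

    /// Gain for a given sample index out of `totalSamples`.
    func gain(at index: Int, totalSamples: Int) -> Float {
        let pos = Float(index) / Float(max(totalSamples - 1, 1))
        // Find the segment containing `pos` and interpolate linearly.
        for i in 1..<points.count where pos <= points[i].position {
            let (p0, g0) = points[i - 1]
            let (p1, g1) = points[i]
            let t = (pos - p0) / max(p1 - p0, .leastNonzeroMagnitude)
            return g0 + t * (g1 - g0)
        }
        return points.last?.gain ?? 0
    }
}
```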

Then you multiply each audio sample by the returned envelope value to get the processed sample.
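Putting it together, one sketch (reusing the Envelope type above; "sound.wav" is a placeholder file name) is to read the file into an AVAudioPCMBuffer, apply the envelope to the buffer, and then schedule the processed buffer on an AVAudioPlayerNode:

```swift
import AVFoundation

do {
    // Read the whole file into a PCM buffer.
    let url = Bundle.main.url(forResource: "sound", withExtension: "wav")!
    let file = try AVAudioFile(forReading: url)
    let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                                  frameCapacity: AVAudioFrameCount(file.length))!
    try file.read(into: buffer)

    // Multiply every sample by the envelope gain, per channel.
    let envelope = Envelope()
    let total = Int(buffer.frameLength)
    if let channels = buffer.floatChannelData {
        for ch in 0..<Int(buffer.format.channelCount) {
            for i in 0..<total {
                channels[ch][i] *= envelope.gain(at: i, totalSamples: total)
            }
        }
    }

    // Play the processed buffer through an AVAudioEngine.
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    engine.attach(player)
    engine.connect(player, to: engine.mainMixerNode, format: buffer.format)
    try engine.start()

    player.scheduleBuffer(buffer, at: nil, options: [], completionHandler: nil)
    player.play()
} catch {
    print("Playback failed: \(error)")
}
```

This applies the envelope before playback rather than in real time, but the multiplication step is the same if you later move it into a render callback or a processing node.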

One way would be to use an AVAudioNode and link a processing node to it.

some_id
  • How exactly can I access and process the sound in the processing node? I could not find a way in the documentation. Thanks, Tobi! – Tobias Schmidt Jun 12 '15 at 19:06