I want to develop an iOS application that generates trigger and control signals for external hardware. For this I'm considering the AudioKit framework to generate these signals and route them to specific output channels (more than 2). I have also tried JUCE, which gives me more flexibility in handling buffers and attaching them to output channels. The ideal setup for me would be to write the DSP code in C++ and the UI code in Swift.
Would it be possible to do this with AudioKit, or should I go for a JUCE / Core Audio application? Any reference on how to do this in AudioKit would be highly appreciated.
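For context, here is roughly what I have in mind, sketched with plain AVFoundation (no AudioKit) since that's the layer I understand so far. This is only a sketch under assumptions: it assumes a connected interface that actually exposes 4 output channels, and the per-channel waveform written in the render block is a placeholder for where the real C++ DSP would be called (e.g. through a bridging header).

```swift
import AVFoundation

// Sketch: render a per-channel signal into an N-channel output.
// Assumes a multi-channel audio interface is attached; the requested
// channel count and the waveform below are placeholders.
func startEngine() throws -> AVAudioEngine {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playback)
    try session.setPreferredOutputNumberOfChannels(4) // request 4 channels
    try session.setActive(true)

    let engine = AVAudioEngine()
    let hwFormat = engine.outputNode.outputFormat(forBus: 0)
    // Non-interleaved Float32 format at the hardware sample rate,
    // with however many channels the device actually granted.
    guard let format = AVAudioFormat(
        standardFormatWithSampleRate: hwFormat.sampleRate,
        channels: hwFormat.channelCount
    ) else { throw NSError(domain: "format", code: -1) }

    let source = AVAudioSourceNode(format: format) { _, _, frameCount, audioBufferList in
        let buffers = UnsafeMutableAudioBufferListPointer(audioBufferList)
        for (channel, buffer) in buffers.enumerated() {
            guard let data = buffer.mData?.assumingMemoryBound(to: Float.self) else { continue }
            for frame in 0..<Int(frameCount) {
                // Placeholder DSP: a constant "trigger high" on channel 0,
                // silence elsewhere. Real code would call into C++ here.
                data[frame] = (channel == 0) ? 1.0 : 0.0
            }
        }
        return noErr
    }
    engine.attach(source)
    // Connect straight to the output node to keep all channels,
    // bypassing the (stereo) main mixer.
    engine.connect(source, to: engine.outputNode, format: format)
    try engine.start()
    return engine
}
```

The idea is that each buffer in the non-interleaved buffer list corresponds to one output channel, so the C++ DSP could fill each channel independently. I'm unsure whether AudioKit exposes an equivalent of this channel-level control, which is essentially my question.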