I'm writing an iOS application with a separate audio engine built in C++. The idea is to write as much of the app as possible in Swift, run the audio engine on its own thread (or as a separate process), and let the user interface trigger communication with the engine.
How is this best achieved?
My first try was to add an intermediate Objective-C++ class (TestEngine) that triggers the appropriate C++ code. I instantiate this Objective-C++ class from Swift like this:
// Initialize audio engine
let engine = TestEngine()
// Start engine in a new thread
NSThread.detachNewThreadSelector(NSSelectorFromString("startEngine"), toTarget: engine, withObject: nil)
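For completeness, this is how I intend the UI to trigger the engine, as a minimal sketch (buttonTapped is just an illustration; the name "testSelector" matches the observer registered in the class below):

@IBAction func buttonTapped(sender: AnyObject) {
    // Post the notification the engine observes. As far as I understand,
    // NSNotificationCenter delivers this synchronously on the posting
    // (here: main) thread, not on the engine's thread.
    NSNotificationCenter.defaultCenter().postNotificationName("testSelector", object: nil)
}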
The TestEngine class looks like this:
@implementation TestEngine

- (void)startEngine {
    // Observe "testSelector" notifications so the UI can trigger the engine
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(selectorMethod)
                                                 name:@"testSelector"
                                               object:nil];
    NSLog(@"StartEngine done on thread: %@", [NSThread currentThread]);
}

- (void)selectorMethod {
    NSLog(@"This is selector, on thread %@", [NSThread currentThread]);
}

- (void)dealloc {
    // Stop observing so the notification center isn't left with a dangling reference
    [[NSNotificationCenter defaultCenter] removeObserver:self];
    NSLog(@"Now deallocing");
}

@end
The problem now is that the object is deallocated right away, before it can receive any notifications. As far as I can tell, nothing keeps it alive: engine is only a local constant, and the detached thread retains its target just for as long as the thread is running, so once startEngine returns the thread exits and the last reference goes away. I have also tried looking into run loops as a way to keep the thread alive, but haven't found an easy (not too low-level) way to add observers to the run loop. Any ideas of how such a solution is best designed on iOS? For reference, the sketch below shows the run loop direction I experimented with.
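This is a minimal sketch of that run loop attempt, not something I have working; EngineThread is just a name for illustration, and since (as I understand it) NSNotificationCenter delivers each notification on whichever thread posts it, I'm not sure a run loop alone even solves the delivery problem:

import Foundation

class EngineThread: NSThread {
    let engine = TestEngine()

    override func main() {
        engine.startEngine()
        // A run loop with no input sources exits immediately, so attach a
        // dummy port to keep it alive
        NSRunLoop.currentRunLoop().addPort(NSMachPort(), forMode: NSDefaultRunLoopMode)
        while !cancelled {
            NSRunLoop.currentRunLoop().runMode(NSDefaultRunLoopMode, beforeDate: NSDate.distantFuture())
        }
    }
}

// Held as a stored property somewhere long-lived so it isn't deallocated:
// let engineThread = EngineThread()
// engineThread.start()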