My app taps the iPhone's built-in microphone and uses the detected frequency as a tuner.
The tuner is an observable object which needs to be mocked for testing.
I have set up a protocol so I can switch between the real tuner (the mic tap) and the mock, and introduced a view model so I can observe the tuner and swap in the mock via the protocol.
To make the view update I had to add a dummy object (a timer) to the view model; otherwise the tuner runs on the dispatch queue but the view never refreshes.
This dummy timer runs continually from init().
What I need to work out is whether this is a reasonable way to do this, or whether there is a better way to force the view to update.
This is the code:

The protocol allowing a switch between the TunerConductor and the mock:
protocol TunerConductorProtocol: ObservableObject, HasAudioEngine {
    var published_pitch: Float { get }
}
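For what it's worth, the injection idea behind this protocol can be sketched in plain Swift without AudioKit or SwiftUI. PitchProviding, RealishConductor and FixedConductor below are hypothetical stand-ins for TunerConductorProtocol and its conformers, just to show that call sites depending only on the protocol can take either implementation:

```swift
import Foundation

// Hypothetical stand-in for TunerConductorProtocol (no AudioKit/SwiftUI needed).
protocol PitchProviding {
    var publishedPitch: Float { get }
}

struct RealishConductor: PitchProviding {
    var publishedPitch: Float = 440.0   // would come from the mic tap in the real app
}

struct FixedConductor: PitchProviding {
    var publishedPitch: Float = 50.0    // deterministic value for tests
}

// Call sites depend only on the protocol, so either conductor can be injected.
func describe(_ source: any PitchProviding) -> String {
    String(format: "%.1f Hz", source.publishedPitch)
}
```

The real app's protocol additionally refines ObservableObject and HasAudioEngine, which is why the mock has to carry a dummy AudioEngine; the sketch drops that to stay self-contained.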
TunerViewModel uses the protocol so that either the TunerConductor or the mock can be published as an observable object. But it is not updated unless forced to (I have used a timer with repeats: true to do this):
class TunerViewModel: ObservableObject {
    var conductor: any TunerConductorProtocol
    @Published private var pitch: Float
    var timer = Timer()

    init(tunerConductor: any TunerConductorProtocol) {
        conductor = tunerConductor
        pitch = 0.0
        conductor.start()
        /// This timer is only here to force the TunerViewModel to refresh its observable object.
        /// ([weak self] avoids the retain cycle a repeating timer would otherwise create.)
        self.timer = Timer.scheduledTimer(withTimeInterval: 0.01, repeats: true) { [weak self] _ in
            self?.pitch = 0
        }
    }
}
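To illustrate the alternative I'm weighing against the timer hack: instead of polling, the conductor could push each new pitch to the view model as it arrives. Here is a minimal push-based sketch in plain Swift (no Combine or SwiftUI, so it stays self-contained; PushConductor and PushViewModel are hypothetical names, not the real app's types):

```swift
// Push-based sketch: the conductor notifies its observer when the pitch
// changes, so no polling timer is needed. All names are hypothetical.
final class PushConductor {
    private(set) var pitch: Float = 0
    var onPitchChange: ((Float) -> Void)?

    // In the real app this would be called from the PitchTap callback.
    func report(pitch newPitch: Float) {
        pitch = newPitch
        onPitchChange?(newPitch)
    }
}

final class PushViewModel {
    private(set) var pitch: Float = 0

    init(conductor: PushConductor) {
        // Subscribe once; every reported pitch lands here immediately.
        conductor.onPitchChange = { [weak self] p in
            self?.pitch = p
        }
    }
}
```

In SwiftUI terms the equivalent would presumably be forwarding the conductor's objectWillChange into the view model (or marking published_pitch with @Published on the conductor itself), rather than a callback closure.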
The view observes a TunerViewModel that contains a TunerConductor:
struct TunerView: View {
    @StateObject var conductorVm: TunerViewModel

    var body: some View {
        VStack {
            HStack {
                Text("Frequency")
                Spacer()
                Text("\(conductorVm.conductor.published_pitch, specifier: "%0.1f")")
            }.padding()
        }
    }
}
This is the observable object. It observes the mic using a pitch tap supplied by AudioKit:
class TunerConductor: TunerConductorProtocol {
    var published_pitch: Float = 0.0
    var amplitude: Float = 0.0

    let engine = AudioEngine()
    let initialDevice: Device
    let mic: AudioEngine.InputNode
    let tappableNodeA: Fader
    let silence: Fader
    var tracker: PitchTap!

    init() {
        guard let input = engine.input else { fatalError() }
        guard let device = engine.inputDevice else { fatalError() }
        initialDevice = device
        mic = input
        tappableNodeA = Fader(mic)
        silence = Fader(tappableNodeA, gain: 0)
        engine.output = silence
        tracker = PitchTap(mic) { pitch, amp in
            DispatchQueue.main.async {
                self.update(pitch[0], amp[0])
                print("Running the real conductor")
            }
        }
        tracker.start()
    }

    func update(_ pitch: AUValue, _ amp: AUValue) {
        // Reduces sensitivity to background noise to prevent random / fluctuating data.
        guard amp > 0.1 else { return }
        published_pitch = pitch
        amplitude = amp
    }
}
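The amplitude guard in update(_:_:) is effectively a noise gate: readings below the threshold keep the previous pitch. Pulled out as a pure function it can be unit-tested without a microphone (gatedPitch is a hypothetical helper for illustration, not part of the real app):

```swift
// Pure-function version of the amplitude guard in update(_:_:).
// gatedPitch is a hypothetical helper used only for this sketch.
func gatedPitch(current: Float, candidate: Float, amplitude: Float,
                threshold: Float = 0.1) -> Float {
    // Below the threshold the reading is treated as background noise
    // and the previous pitch is kept.
    amplitude > threshold ? candidate : current
}
```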
The mock just increments the pitch forever, without using the mic:
class MockTunerConductor: TunerConductorProtocol {
    var published_pitch: Float = 0.0
    var amplitude: Float = 0.0
    let engine = AudioEngine() // Dummy engine, not used; required to conform to the protocol
    var timer = Timer()

    init() {
        published_pitch = 50
        amplitude = 1.3
        self.timer = Timer.scheduledTimer(withTimeInterval: 1.0, repeats: true) { [weak self] _ in
            self?.published_pitch += 1.0
        }
    }
}
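One thing I've noticed is that a timer-driven mock is awkward to assert against, because tests have to wait on wall-clock time. A deterministic variant that is stepped manually makes tests reproducible; this is a hypothetical alternative (SteppedMockConductor is not in the project), sketched without the AudioEngine/protocol conformance so it runs standalone:

```swift
// Deterministic mock: instead of a repeating Timer, tests advance it manually.
// SteppedMockConductor is a hypothetical alternative to MockTunerConductor.
final class SteppedMockConductor {
    private(set) var publishedPitch: Float
    private let increment: Float

    init(startingAt pitch: Float = 50, increment: Float = 1) {
        self.publishedPitch = pitch
        self.increment = increment
    }

    // Each tick mimics one firing of the mock's repeating timer.
    func tick(_ times: Int = 1) {
        for _ in 0..<times {
            publishedPitch += increment
        }
    }
}
```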
ContentView can show either the mock or the real TunerConductor (or both):
struct ContentView: View {
    var body: some View {
        VStack {
            TunerView(conductorVm: TunerViewModel(tunerConductor: TunerConductor()))
            // TunerView(conductorVm: TunerViewModel(tunerConductor: MockTunerConductor()))
        }
    }
}
Is there a better way to do this?
Here is the project on github: https://github.com/rickhardy/TunerEnvironmentObject
The project needs some setup for the target (already done in the version on GitHub):
// To make this run, the following settings are required for the target.
// Info:
//   Add:
//     Privacy - Microphone Usage Description: « Add a description »
// Packages required:
//   AudioKit
//   AudioKitEX
//   SoundpipeAudioKit
// Under Info for the target, add:
//   Application Scene Manifest
//     - Enable Multiple Windows: No
//     - Scene Configuration
//       - Application Session Role
//         - Item 0
//           - Configuration Name: Default Configuration
//           - Delegate Class Name: $(PRODUCT_MODULE_NAME).SceneDelegate
//   Remove the other categories.
// Build settings:
//   Other Linker Flags: add -lstdc++
// There must be a microphone for this to work.
An app delegate is required to make the tuner run:
import SwiftUI
import AVFoundation
import AudioKit

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {
    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        do {
            Settings.bufferLength = .short
            try AVAudioSession.sharedInstance().setPreferredIOBufferDuration(Settings.bufferLength.duration)
            try AVAudioSession.sharedInstance().setCategory(.playAndRecord,
                                                            options: [.defaultToSpeaker, .mixWithOthers, .allowBluetoothA2DP])
            try AVAudioSession.sharedInstance().setActive(true)
        } catch let err {
            print(err)
        }
        return true
    }
}
class SceneDelegate: UIResponder, UIWindowSceneDelegate {
    var window: UIWindow?

    func scene(_ scene: UIScene,
               willConnectTo session: UISceneSession,
               options connectionOptions: UIScene.ConnectionOptions) {
        if let windowScene = scene as? UIWindowScene {
            let window = UIWindow(windowScene: windowScene)
            window.rootViewController = UIHostingController(rootView: ContentView())
            self.window = window
            window.makeKeyAndVisible()
        }
    }
}