
I am trying to use SFSpeechRecognizer to transcribe spoken commands in my app. The commands are only one word each. The SFSpeechAudioBufferRecognitionRequest (fed from a microphone tap) can take a few seconds to come back with an accurate transcription of that word, but that is fine for my use case.
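
For reference, my current setup looks roughly like the sketch below (one request per utterance; authorization, audio-session configuration, and error handling are omitted, and names like `audioEngine` are just placeholders):

```swift
import Speech
import AVFoundation

let audioEngine = AVAudioEngine()
let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))!
var recognitionTask: SFSpeechRecognitionTask?

func startListening() throws {
    let request = SFSpeechAudioBufferRecognitionRequest()
    request.shouldReportPartialResults = true

    // Feed microphone buffers into the recognition request.
    let inputNode = audioEngine.inputNode
    let format = inputNode.outputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
        request.append(buffer)
    }
    audioEngine.prepare()
    try audioEngine.start()

    // Partial results keep refining for a few seconds until the task finalizes.
    recognitionTask = recognizer.recognitionTask(with: request) { result, error in
        if let result = result {
            print(result.bestTranscription.formattedString)
        }
    }
}
```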

However, in my use case, a new command may come in before those few seconds have elapsed. How can I start processing a new request while still allowing the previous request to continue refining its result (i.e., without prematurely stopping the previous request)?

I have googled endlessly, and there seems to be no documentation on simultaneous/concurrent requests from microphone input.

Any thoughts would be greatly appreciated!

BSohl

1 Answer


Something I've been able to accomplish on iOS 13 is running two SFSpeechRecognitionTasks for two different SFSpeechAudioBufferRecognitionRequests simultaneously, by setting requiresOnDeviceRecognition to true on one request and to false on the other.

I found that you can't run two on-device requests simultaneously, and you also can't run two server-based requests simultaneously; but with one request forced on-device, you can be transcribing/processing one command while listening to and transcribing the next with the other.
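
A rough sketch of what I mean, assuming the usual AVAudioEngine microphone tap (authorization, audio-session setup, and error handling omitted; the variable names are just placeholders):

```swift
import Speech
import AVFoundation

let audioEngine = AVAudioEngine()
let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))!
var onDeviceTask: SFSpeechRecognitionTask?
var serverTask: SFSpeechRecognitionTask?

func startDualRecognition() throws {
    // One request pinned to on-device recognition (iOS 13+)...
    let onDeviceRequest = SFSpeechAudioBufferRecognitionRequest()
    onDeviceRequest.requiresOnDeviceRecognition = true
    onDeviceRequest.shouldReportPartialResults = true

    // ...and one that is allowed to go to Apple's servers.
    let serverRequest = SFSpeechAudioBufferRecognitionRequest()
    serverRequest.requiresOnDeviceRecognition = false
    serverRequest.shouldReportPartialResults = true

    // A single microphone tap feeds the same buffers to both requests.
    let inputNode = audioEngine.inputNode
    let format = inputNode.outputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
        onDeviceRequest.append(buffer)
        serverRequest.append(buffer)
    }
    audioEngine.prepare()
    try audioEngine.start()

    // Two tasks run side by side: one on-device, one server-based.
    onDeviceTask = recognizer.recognitionTask(with: onDeviceRequest) { result, _ in
        if let result = result {
            print("on-device: \(result.bestTranscription.formattedString)")
        }
    }
    serverTask = recognizer.recognitionTask(with: serverRequest) { result, _ in
        if let result = result {
            print("server: \(result.bestTranscription.formattedString)")
        }
    }
}
```

Keeping references to both tasks lets you cancel or finish them independently later.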

  • Interesting, I'll give that a try. Thanks! Though requiring off-device somewhat defeats the purpose. I can accomplish what I need by doing everything off-device with libraries from Azure, Google, etc. But I was hoping to take advantage of everything being on-device from Apple. – BSohl Dec 13 '19 at 06:04