
I am getting a crash in my app when the microphone is already in use in the background (in my case by Microsoft Teams) and I try to record audio inside my app.

Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason: 'required condition is false: IsFormatSampleRateAndChannelCountValid(format)'

Please refer to the code below:

func startRecording() {
        
        // Clear all previous session data and cancel task
        if recognitionTask != nil {
            recognitionTask?.cancel()
            recognitionTask = nil
        }

        // Create instance of audio session to record voice
        let audioSession = AVAudioSession.sharedInstance()
        do {
            try audioSession.setCategory(AVAudioSession.Category.record, mode: AVAudioSession.Mode.measurement, options: AVAudioSession.CategoryOptions.defaultToSpeaker)
            try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
        } catch {
            print("audioSession properties weren't set because of an error.")
        }
    
        self.recognitionRequest = SFSpeechAudioBufferRecognitionRequest()

        let inputNode = audioEngine.inputNode

        guard let recognitionRequest = recognitionRequest else {
            fatalError("Unable to create an SFSpeechAudioBufferRecognitionRequest object")
        }

        recognitionRequest.shouldReportPartialResults = true

        self.recognitionTask = speechRecognizer?.recognitionTask(with: recognitionRequest, resultHandler: { (result, error) in

            var isFinal = false

            if result != nil {

                self.textField.text = result?.bestTranscription.formattedString
                isFinal = (result?.isFinal)!
            }

            if error != nil || isFinal {

                self.audioEngine.stop()
                inputNode.removeTap(onBus: 0)

                self.recognitionRequest = nil
                self.recognitionTask = nil

                self.micButton.isEnabled = true
            }
        })
    
        let recordingFormat = inputNode.outputFormat(forBus: 0)

    
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, when) in
            self.recognitionRequest?.append(buffer)
        }

        self.audioEngine.prepare()

        do {
            try self.audioEngine.start()
        } catch {
            print("audioEngine couldn't start because of an error.")
        }

        self.textField.text = ""
    }

I am pretty sure that the problem is somewhere here, but I am not sure how to fix it:

let recordingFormat = inputNode.outputFormat(forBus: 0)
inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, when) in
    self.recognitionRequest?.append(buffer)
}
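For context, this crash happens because `inputNode.outputFormat(forBus: 0)` can return a format whose sample rate and channel count are both 0 when another app holds the microphone, and passing such a format to `installTap` trips the `IsFormatSampleRateAndChannelCountValid` assertion. A minimal sketch of a defensive check; the `isUsableRecordingFormat` helper is my own naming, not an AVFoundation API:

```swift
// Hypothetical helper: a format is only usable for a tap if both its
// sample rate and channel count are non-zero. When another app holds
// the mic, inputNode.outputFormat(forBus: 0) can report 0 for both.
func isUsableRecordingFormat(sampleRate: Double, channelCount: UInt32) -> Bool {
    return sampleRate > 0 && channelCount > 0
}

// Usage sketch inside startRecording(), assuming `inputNode`,
// `audioEngine`, and `recognitionRequest` exist as in the code above:
//
// let recordingFormat = inputNode.outputFormat(forBus: 0)
// guard isUsableRecordingFormat(sampleRate: recordingFormat.sampleRate,
//                               channelCount: recordingFormat.channelCount) else {
//     return // mic is busy; bail out instead of crashing
// }
// inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { buffer, _ in
//     self.recognitionRequest?.append(buffer)
// }
```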
  • OK, so I managed to fix the crash by adjusting the code here: `let recordingFormat = inputNode.outputFormat(forBus: 0); if recordingFormat.sampleRate == 0.0 { return }; inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, when) in self.recognitionRequest?.append(buffer) }`. But I am still confused about how to throw an error from a UIControl; is that possible at all? – Nursultan Yelemessov Nov 21 '22 at 11:37

2 Answers


So the app was crashing because I didn't apply the correct microphone channel.

  1. Create a protocol at the top of the file (after the imports) to report the error, in the same file where you have:

    let audioEngine = AVAudioEngine()
    
    protocol FeedbackViewDelegate : AnyObject {
        func showFeedbackError(title: String, message: String)
        func audioDidStart(forType type : FeedbackViewType)
    }
    
  2. Change your function to return a Bool:

    func startRecording() -> Bool {
    }
    
  3. Add these lines to the catch block of the shared audio session setup (this prevents the crash):

    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setCategory(AVAudioSession.Category.playAndRecord, mode: AVAudioSession.Mode.measurement, options: AVAudioSession.CategoryOptions.defaultToSpeaker)
        try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
    } catch {
        print("audioSession properties weren't set because of an error.")
        delegate?.showFeedbackError(title: "Sorry", message: "Mic is busy")
        return false
    }
    

    The early return prevents the rest of the method from executing when the session setup fails.

  4. Conform your view controller to the protocol with an extension:

    extension codeFileName : FeedbackViewDelegate {
        func showFeedbackError(title: String, message: String) {
    
        }
    }
    

    (there are plenty of examples on the web). Inside the function you can create and present an alert; use `self` inside the closure where needed.
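Putting the steps above together, here is a minimal sketch of the view-controller side. `FeedbackViewController` and `feedbackView` are placeholder names for your own class and outlet; `FeedbackViewDelegate` and `FeedbackViewType` are the types defined in step 1:

```swift
import UIKit

// Placeholder: your view controller that owns the FeedbackView.
extension FeedbackViewController: FeedbackViewDelegate {

    func showFeedbackError(title: String, message: String) {
        // Present a simple alert so the user knows the mic is busy.
        let alert = UIAlertController(title: title, message: message, preferredStyle: .alert)
        alert.addAction(UIAlertAction(title: "OK", style: .default))
        present(alert, animated: true)
    }

    func audioDidStart(forType type: FeedbackViewType) {
        print("Audio started for \(type)")
    }
}

// Remember to set the delegate, e.g. in viewDidLoad():
// feedbackView.delegate = self
```

Without the `delegate = self` assignment the `delegate?.showFeedbackError(...)` call in the catch block is a no-op, which is an easy thing to miss.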

  • Can you share your code? I'm facing the same issue and couldn't solve it with your given solution. – tamaramaria Nov 23 '22 at 17:05
  • Try going through the code I just posted, and if you still have the same issue, get back to me here. @tamaramaria – Nursultan Yelemessov Nov 23 '22 at 17:22
  • I'm still getting the same error message `required condition is false: IsFormatSampleRateAndChannelCountValid(format)` – tamaramaria Nov 24 '22 at 08:56
  • Try applying `if recordingFormat.sampleRate == 0.0 { return }` between `let recordingFormat = inputNode.outputFormat(forBus: 0)` and `inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, when) in self.recognitionRequest?.append(buffer) }`. @tamaramaria – Nursultan Yelemessov Nov 24 '22 at 09:27
  • Now I don't get an error back, which is perfect, but it also doesn't record my voice – tamaramaria Nov 24 '22 at 10:10
  • Yeah, I don't have a solution for forcing an audio recording while the microphone is busy, but the solution above will throw an error and shows how to display an error message on the front end, so at least the user is informed. @tamaramaria – Nursultan Yelemessov Nov 24 '22 at 11:54
  • thank you for the help! Will also post hopefully my solution here – tamaramaria Nov 24 '22 at 14:36
fileprivate let NibName = "FeedbackView"
protocol FeedbackViewDelegate : AnyObject {
    func showFeedbackError(title: String, message: String)
    func audioDidStart(forType type : FeedbackViewType)
}

enum FeedbackViewType {
    
    case feedbackView, rootcauseView, suggestionView, actionView
    
}

class FeedbackView: UIControl, ViewLoadable, SFSpeechRecognizerDelegate {
    
    @IBOutlet weak var textField: UITextField!
    
    static var nibName: String = NibName
    
    var feedbackViewType : FeedbackViewType = .feedbackView
    
    @IBOutlet var contentView: UIView!
    
    @IBOutlet weak var micButton: UIButton!
    
    @IBOutlet weak var micView: DefaultCardView!
    
    @IBOutlet weak var micImageView: UIImageView!
    
    weak var delegate : FeedbackViewDelegate?
    var allowTextEntry = true
    
    let speechRecognizer        = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))

    var recognitionRequest      : SFSpeechAudioBufferRecognitionRequest?
    var recognitionTask         : SFSpeechRecognitionTask?
    let audioEngine             = AVAudioEngine()
    
    override init(frame: CGRect) {
        super.init(frame: frame)
        commonInit()
    }
    
    required public init?(coder aDecoder: NSCoder) {
        super.init(coder: aDecoder)
        commonInit()
    }
    
    init() {
        super.init(frame: CGRect.zero)
        commonInit()
    }
    
    private func commonInit() {
        Bundle(for: type(of: self)).loadNibNamed(NibName, owner: self, options: nil)
        backgroundColor = .clear
        addSubview(contentView)
        contentView.frame = self.bounds
        contentView.autoresizingMask = [.flexibleHeight, .flexibleWidth]
      
    }
    
    func configure(text: String, placeholder:String, contentType: UITextContentType,keyboardType:UIKeyboardType) {
        
        print("Did configure keyboard")
        self.textField.textContentType = contentType
        self.textField.isSecureTextEntry = (contentType == .password)
        self.textField.keyboardType = keyboardType
        self.textField.delegate = self
        self.textField.placeholder = placeholder
        if(!text.isEmpty) {
            self.textField.text = text
        }
    }
    
    
    @IBAction func btnStartSpeechToText(_ sender: UIButton) {
//        allowTextEntry = false
        if audioEngine.isRunning {
            let audioText = textField.text
            self.audioEngine.stop()
            DispatchQueue.main.asyncAfter(deadline: .now() + 0.2) {
                self.textField.text = audioText
//                self.allowTextEntry = true
            }
            textField.text = audioText
            self.micButton.isEnabled = true
            self.micImageView.image = UIImage(named: "mic")
        } else {
            print("Audio did start")
            self.delegate?.audioDidStart(forType: self.feedbackViewType)
            self.setupSpeech()
            if self.startRecording() {
                self.micImageView.image = UIImage(named: "micRed")
            }
        }
    }
    
    func stopRecording() {
//        allowTextEntry = false
        let audioText = textField.text
        self.audioEngine.stop()
        self.recognitionRequest?.endAudio()
        DispatchQueue.main.asyncAfter(deadline: .now() + 0.2) {
            self.textField.text = audioText
//            self.allowTextEntry = true
        }
        self.micButton.isEnabled = true
        self.micImageView.image = UIImage(named: "mic")
    }
    
    func setupSpeech() {

//        self.micButton.isEnabled = false
        self.speechRecognizer?.delegate = self

        SFSpeechRecognizer.requestAuthorization { (authStatus) in

            var isButtonEnabled = false

            switch authStatus {
            case .authorized:
                isButtonEnabled = true

            case .denied:
                isButtonEnabled = false
                print("User denied access to speech recognition")

            case .restricted:
                isButtonEnabled = false
                print("Speech recognition restricted on this device")

            case .notDetermined:
                isButtonEnabled = false
                print("Speech recognition not yet authorized")

            @unknown default:
                isButtonEnabled = false
            }

            OperationQueue.main.addOperation() {
//                self.micButton.isEnabled = isButtonEnabled
            }
        }
    }
    
//    func audioInputIsBusy(recordingFormat: AVAudioFormat) -> Bool {
//        guard recordingFormat.sampleRate == 0 || recordingFormat.channelCount == 0 else {
//            return false
//        }
//        return true
//    }
    
    func startRecording() -> Bool {
            
            // Clear all previous session data and cancel task
            if recognitionTask != nil {
                recognitionTask?.cancel()
                recognitionTask = nil
            }

            // Create instance of audio session to record voice
            let audioSession = AVAudioSession.sharedInstance()
            do {
                try audioSession.setCategory(AVAudioSession.Category.playAndRecord, mode: AVAudioSession.Mode.measurement, options: AVAudioSession.CategoryOptions.defaultToSpeaker)
                try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
            } catch {
                print("audioSession properties weren't set because of an error.")
                delegate?.showFeedbackError(title: "Sorry", message: "Mic is busy")
                return false
            }
        
            self.recognitionRequest = SFSpeechAudioBufferRecognitionRequest()

            let inputNode = audioEngine.inputNode

            guard let recognitionRequest = recognitionRequest else {
                fatalError("Unable to create an SFSpeechAudioBufferRecognitionRequest object")
            }

            recognitionRequest.shouldReportPartialResults = true

            self.recognitionTask = speechRecognizer?.recognitionTask(with: recognitionRequest, resultHandler: { (result, error) in

                var isFinal = false

                if result != nil {

                    self.textField.text = result?.bestTranscription.formattedString
                    isFinal = (result?.isFinal)!
                }

                if error != nil || isFinal {

                    self.audioEngine.stop()
                    inputNode.removeTap(onBus: 0)
                    self.recognitionRequest = nil
                    self.recognitionTask = nil
                    self.micButton.isEnabled = true
                }
            })
        
            let recordingFormat = inputNode.outputFormat(forBus: 0)
            // Bail out if the mic is held by another app: an invalid format
            // (sample rate or channel count of 0) would crash installTap.
            if recordingFormat.sampleRate == 0 || recordingFormat.channelCount == 0 {
                delegate?.showFeedbackError(title: "Sorry", message: "Mic is busy")
                return false
            }
            inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, when) in
                self.recognitionRequest?.append(buffer)
            }

            self.audioEngine.prepare()

            do {
                try self.audioEngine.start()
            } catch {
                print("audioEngine couldn't start because of an error.")
                delegate?.showFeedbackError(title: "Sorry", message: "Your microphone is used somewhere else")
                return false
            }

            self.textField.text = ""
        return true
        }
    
    func speechRecognizer(_ speechRecognizer: SFSpeechRecognizer, availabilityDidChange available: Bool) {
        if available {
            self.micButton.isEnabled = true
        } else {
            self.micButton.isEnabled = false
        }
    }
    

}

extension FeedbackView: UITextFieldDelegate {
    
    func textFieldShouldReturn(_ textField: UITextField) -> Bool {
        self.endEditing(true)
        return false
    }
    
    func textField(_ textField: UITextField, shouldChangeCharactersIn range: NSRange, replacementString string: String) -> Bool {
        return allowTextEntry
    }
}