
Deep in the guts of Siri, it's churning out confidence scores for its dictation. I've accessed these on iOS (via the simulator on my Mac) by tinkering with a package that connects Siri's speech recognition with React Native.

It returns something like

{"value": [[{"confidence": .98, "substring": "just"} , {"confidence": .98, "substring": "a"} ,{ "confidence": .80, "substring": "test"} ]]}

Now I have a bunch of audio files on my Mac that I want transcribed -- with confidence scores, as above. It would really be best to do this with Siri.

It seems like my options are

  1. make a little RN app that pulls in audio files from my Mac, plays them, transcribes them, and saves the results to CSVs (but how do I get the iOS simulator to interact with files on my Mac??)

  2. tinker with the Python package that accesses my Mac's speech-to-text capabilities -- which theoretically seems simpler, but I can't work out how it's put together

  3. give up and use Google's API (I'd rather not, for various reasons)

  4. something else??? (one possibility is sketched after this list)
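On option 4 (really a variant of 1 and 2): as far as I can tell, the same recognizer is available directly on macOS 10.15+ through the Speech framework, so a small Swift script could read each audio file from disk and dump per-word confidences to a CSV without involving the simulator at all. A rough, untested sketch -- file paths are placeholders, and you may need to run it from an app/Xcode target with the NSSpeechRecognitionUsageDescription key set so authorization can actually be granted:

// transcribe.swift -- rough sketch, assumes macOS 10.15+ and the Speech framework.
import Foundation
import Speech

// Audio file to transcribe, passed as the first argument; CSV written next to it.
let audioURL = URL(fileURLWithPath: CommandLine.arguments[1])
let csvURL = audioURL.deletingPathExtension().appendingPathExtension("csv")

SFSpeechRecognizer.requestAuthorization { status in
    guard status == .authorized,
          let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")) else {
        print("Speech recognition not authorized/available"); exit(1)
    }

    let request = SFSpeechURLRecognitionRequest(url: audioURL)
    // Keep recognition on-device where supported (no audio leaves the Mac).
    if recognizer.supportsOnDeviceRecognition {
        request.requiresOnDeviceRecognition = true
    }

    recognizer.recognitionTask(with: request) { result, error in
        if let error = error { print("Error: \(error.localizedDescription)"); exit(1) }
        guard let result = result, result.isFinal else { return }  // ignore partial results

        var csv = "substring,confidence\n"
        for segment in result.bestTranscription.segments {
            csv += "\(segment.substring),\(segment.confidence)\n"
        }
        try? csv.write(to: csvURL, atomically: true, encoding: .utf8)
        exit(0)
    }
}

// Recognition is asynchronous; keep the process alive until exit() is called.
RunLoop.main.run()

Looping over a folder of files and concatenating the CSVs would then just be shell scripting, and the confidence values should be the same 0-1 floats I'm seeing through the React Native bridge.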

How would you tackle this? Has anyone done anything like this?

