We need to build a custom model containing a large amount of custom vocabulary that has already been phonemically transcribed, but the current API for adding custom words has no published option for supplying a phonemic string; it only accepts a manually generated, ad-hoc "sounds_like" orthographic string. Since we have not been able to find any reliable tool that generates equivalent "sounds like" strings by rule from a phoneme string, this is a real barrier to using the IBM speech-to-text engine successfully.
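For context, this is roughly how we add custom words today (a minimal sketch using the ibm-watson Python SDK; the API key, service URL, and customization ID are placeholders):

```python
from ibm_watson import SpeechToTextV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

# Placeholder credentials and IDs
authenticator = IAMAuthenticator("YOUR_API_KEY")
stt = SpeechToTextV1(authenticator=authenticator)
stt.set_service_url("https://api.us-south.speech-to-text.watson.cloud.ibm.com")

# The only published way to hint at pronunciation is an orthographic
# "sounds_like" string, not a phoneme sequence.
stt.add_word(
    customization_id="YOUR_CUSTOMIZATION_ID",
    word_name="IEEE",
    sounds_like=["I triple E"],
    display_as="IEEE",
)
```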
Is there an accepted phonetic/phonemic alphabet, and an available API mechanism, for specifying a phoneme string rather than another orthography to indicate what custom words sound like when adding them to a custom model via the IBM Cloud speech-to-text API? (i.e. an analog of the IPA and the mechanisms for using it in IBM's text-to-speech API?)
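For comparison, the text-to-speech API does accept a phoneme string (IPA or IBM SPR) when defining a custom word; this is the kind of mechanism we are looking for on the speech-to-text side (again a sketch with placeholder credentials and IDs):

```python
from ibm_watson import TextToSpeechV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_API_KEY")
tts = TextToSpeechV1(authenticator=authenticator)
tts.set_service_url("https://api.us-south.text-to-speech.watson.cloud.ibm.com")

# Text-to-speech lets a custom word's translation be an IPA (or SPR)
# phoneme string wrapped in a <phoneme> element.
tts.add_word(
    customization_id="YOUR_TTS_CUSTOMIZATION_ID",
    word="tomato",
    translation='<phoneme alphabet="ipa" ph="təmɑto"></phoneme>',
)
```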
(Alternatively, does IBM or anyone else have a good tool for converting a phoneme sequence into an orthography that is guaranteed to be converted back to the same phoneme string by their ASR engine?)