
I have created an application that can transcribe what the speaker says using

https://stream.watsonplatform.net/speech-to-text/api/v1/

however when I apply

https://stream.watsonplatform.net/speech-to-text/api/v1/recognize?model=en-US_BroadbandModel&speaker_labels=true

it does not reply with any data.

I have tried configuring the request with the following JSON:

    {
        "profile": "low_latency",
        "part_content_type": "audio/flac",
        "word_alternatives_threshold": null,
        "preserveAdaptation": false,
        "disableBase": false,
        "word_confidence": false,
        "grammarId": null,
        "amCustomModelPath": null,
        "inactivity_timeout": 30,
        "grammars": [],
        "debugStats": false,
        "timestamps": false,
        "keywords": [],
        "customModelPath": null,
        "max_alternatives": 1,
        "enabledGrammarIds": "",
        "smart_formatting": false,
        "keywords_threshold": null,
        "firstReadyInSession": false,
        "weights": null,
        "speaker_labels": true,
        "action": "recognize",
        "profanity_filter": true
    }

I have also tried creating two services: one for speech recognition and the other for speaker recognition.

The application then receives the speech recognition data, but no speaker recognition data is returned.

The application is based on IBM's GitHub example: https://github.com/watson-developer-cloud/android-sdk

I know that speaker recognition requires a FLAC audio file. Would I need to add some extra code to send this request?
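For reference, this is a minimal sketch (plain Python rather than the Android SDK) of what the raw HTTP request with `speaker_labels=true` looks like against the `recognize` endpoint. The username, password, and `sample.flac` path are placeholders, not values from my setup:

```python
import base64
import urllib.parse
import urllib.request

BASE = "https://stream.watsonplatform.net/speech-to-text/api/v1/recognize"

def build_recognize_url(base, model, speaker_labels):
    """Append the model and speaker_labels query parameters to the endpoint."""
    params = urllib.parse.urlencode({
        "model": model,
        "speaker_labels": "true" if speaker_labels else "false",
    })
    return f"{base}?{params}"

def recognize_flac(url, username, password, flac_bytes):
    """POST raw FLAC audio with HTTP Basic auth; returns the JSON response body."""
    req = urllib.request.Request(url, data=flac_bytes, method="POST")
    req.add_header("Content-Type", "audio/flac")
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    url = build_recognize_url(BASE, "en-US_BroadbandModel", True)
    print(url)
    # Placeholder credentials and file; uncomment to actually send the request:
    # with open("sample.flac", "rb") as f:
    #     print(recognize_flac(url, "user", "pass", f.read()))
```

The URL it builds matches the one I am calling above, so the question is whether the Android SDK sends the `speaker_labels` parameter the same way.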
