It's failing because, before the service extracts any features, it tries to guess the language of the text, and slang like this defeats the detector. Setting the language explicitly prevents this.
For example:
question = 'hmmmm nawa ohh wen am I gona win ds tin'

# `nlu` is an authenticated NaturalLanguageUnderstandingV1 client and
# `features` is the SDK's features module, set up as in your code.
f = [
    features.Categories(),
    features.Concepts(),
    features.Emotion(),
    features.Entities(),
    features.Relations(),
    features.SemanticRoles(),
    features.Sentiment()
]

r = nlu.analyze(text=question, features=f, language='en')
print(json.dumps(r, indent=2))
Outputs this:
{
  "sentiment": {
    "document": {
      "score": 0.0,
      "label": "neutral"
    }
  },
  "semantic_roles": [
    {
      "subject": {
        "text": "I"
      },
      "sentence": "hmmmm nawa ohh wen am I gona win ds tin",
      "object": {
        "text": "ds tin"
      },
      "action": {
        "verb": {
          "text": "win",
          "tense": "present"
        },
        "text": "win",
        "normalized": "win"
      }
    }
  ],
  "relations": [],
  "language": "en",
  "entities": [],
  "emotion": {
    "document": {
      "emotion": {
        "sadness": 0.193275,
        "joy": 0.309168,
        "fear": 0.167981,
        "disgust": 0.06316,
        "anger": 0.130959
      }
    }
  },
  "concepts": [],
  "categories": [
    {
      "score": 0.899547,
      "label": "/art and entertainment"
    },
    {
      "score": 0.365657,
      "label": "/hobbies and interests/reading"
    },
    {
      "score": 0.189432,
      "label": "/art and entertainment/movies and tv/movies"
    }
  ]
}
The input isn't proper English, though, so I wouldn't expect the results to be good.
You can see which features each language supports here:
https://www.ibm.com/watson/developercloud/doc/natural-language-understanding/index.html#supported-languages
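Once you have the response, it's just a nested dict, so you can pull individual fields out of it. A minimal sketch, using a hand-copied subset of the JSON above in place of a live `analyze()` call:

```python
# `response` stands in for the dict returned by nlu.analyze(...);
# here it is a hand-copied subset of the output shown above.
response = {
    "sentiment": {"document": {"score": 0.0, "label": "neutral"}},
    "language": "en",
    "categories": [
        {"score": 0.899547, "label": "/art and entertainment"},
        {"score": 0.365657, "label": "/hobbies and interests/reading"},
        {"score": 0.189432, "label": "/art and entertainment/movies and tv/movies"},
    ],
}

# Language used for analysis and the overall sentiment label.
print(response["language"])                        # en
print(response["sentiment"]["document"]["label"])  # neutral

# Highest-scoring category.
top = max(response["categories"], key=lambda c: c["score"])
print(top["label"])                                # /art and entertainment
```

Empty lists like `"entities": []` just mean nothing was found for that feature, so guard with a truthiness check before indexing into them.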