The documentation shows that only concepts are returned for custom-trained models:
{
  "status": {
    "code": 10000,
    "description": "Ok"
  },
  "outputs": [
    {
      ...,
      "created_at": "2016-11-22T16:59:23Z",
      "model": {
        ...
      },
      "model_version": {
        ...
      },
      "input": {
        "id": "e1cf385843b94c6791bbd9f2654db5c0",
        "data": {
          "image": {
            "url": "https://s3.amazonaws.com/clarifai-api/img/prod/b749af061d564b829fb816215f6dc832/e11c81745d6d42a78ef712236023df1c.jpeg"
          }
        }
      },
      "data": {
        "concepts": [
          {
            ...
          }
        ]
      }
    }
  ]
}
By contrast, pre-trained models such as the demographic and face models return regions with the x/y location of each detection in the image.

If I want to detect WHERE in the image a concept is predicted for my custom model, is my only option to split the image into a grid of tiles and submit each tile as bytes? This seems counterproductive, as it would incur additional API calls per image.
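For clarity, the grid workaround I have in mind would look something like this. This is only a sketch of the tiling step (computing per-tile bounding boxes); the actual cropping and upload of each tile to the predict endpoint is omitted, and the function name and grid size are my own, not anything from the API:

```python
def grid_tiles(width, height, rows, cols):
    """Compute pixel bounding boxes (left, top, right, bottom) for a
    rows x cols grid over an image of the given size. Each tile would
    then be cropped and submitted for prediction as bytes, and any tile
    where the concept fires gives a coarse location in the image."""
    tiles = []
    tile_w, tile_h = width // cols, height // rows
    for r in range(rows):
        for c in range(cols):
            left, top = c * tile_w, r * tile_h
            # let the last row/column absorb any remainder pixels
            right = width if c == cols - 1 else left + tile_w
            bottom = height if r == rows - 1 else top + tile_h
            tiles.append((left, top, right, bottom))
    return tiles

# e.g. a 3x3 grid over a 600x400 image -> 9 tiles of roughly 200x133 px
print(grid_tiles(600, 400, 3, 3)[0])  # (0, 0, 200, 133)
```

Even at a modest 3x3 grid this turns one predict call into nine, which is the extra-lookup cost I'd like to avoid.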