
I am trying to adapt the sample image-classification Android project available at

https://github.com/Azure-Samples/cognitive-services-android-customvision-sample

for an exported Custom Vision object detection model, but it isn't clear what the structure of the output tensor is, since it includes bounding boxes etc.

I've also tried to convert to tensorflow lite and drop the model into the "sushi detector" iOS project at

https://medium.com/@junjiwatanabe/how-to-build-real-time-object-recognition-ios-app-ca85c193865a

but again it's not clear what the output structure is, nor whether it conforms to the TensorFlow Lite API:

https://www.tensorflow.org/lite/demo_ios

There are some Python samples included when exporting the TensorFlow bundle, but I am not sure how to convert them to Java/Swift/Objective-C - see e.g.

https://stackoverflow.com/a/54886689/1021819

Thanks for all help.

jtlz2
PS Moderators: I know that this is service-specific, but Azure is making use of Stack Overflow for technical questions. Also: if the question (which is in the title) is unclear, please suggest constructive improvements rather than simply downvoting. Thanks! – jtlz2 Feb 28 '19 at 06:28

2 Answers


If you unzip the exported model zip file, you will find a python folder within it. It contains sample code in Python showing how the model output should be parsed.

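For reference, that bundled sample decodes a YOLO-style grid output. The sketch below is a hedged approximation, not the shipped script: it assumes the raw tensor has shape `(grid_h, grid_w, num_anchors * (5 + num_classes))` with per-anchor channels ordered `tx, ty, tw, th, objectness, class logits` — all of these names, the anchor values, and the threshold are assumptions for illustration; check the actual Python sample in your export for the exact layout.

```python
# Hedged sketch of decoding a YOLO-style detection tensor (assumed layout).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_grid_output(raw, anchors, num_classes, score_threshold=0.5):
    """Turn a (grid_h, grid_w, num_anchors * (5 + num_classes)) tensor into
    ((x, y, w, h), score, class_id) candidates, in normalized coordinates.

    Assumed per-anchor channel order: tx, ty, tw, th, objectness, class logits.
    """
    grid_h, grid_w, _ = raw.shape
    num_anchors = len(anchors)
    raw = raw.reshape(grid_h, grid_w, num_anchors, 5 + num_classes)

    detections = []
    for row in range(grid_h):
        for col in range(grid_w):
            for a, (anchor_w, anchor_h) in enumerate(anchors):
                tx, ty, tw, th, tobj = raw[row, col, a, :5]
                # Box centre is a sigmoid offset within the cell;
                # width/height scale the anchor exponentially.
                x = (col + sigmoid(tx)) / grid_w
                y = (row + sigmoid(ty)) / grid_h
                w = np.exp(tw) * anchor_w / grid_w
                h = np.exp(th) * anchor_h / grid_h
                # Softmax over the class logits, weighted by objectness.
                logits = raw[row, col, a, 5:]
                probs = np.exp(logits - logits.max())
                probs /= probs.sum()
                score = sigmoid(tobj) * probs.max()
                if score >= score_threshold:
                    detections.append(((x, y, w, h), float(score), int(probs.argmax())))
    return detections
```

With all-zero inputs every cell scores below the threshold; only cells whose objectness and class logits are pushed up survive, which is the filtering the real sample also has to do in some form.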

Ping Jin

A little late, but nevertheless: this is the output of an object detection .tflite model exported from Custom Vision. It has a single output tensor.

[screenshot of the .tflite model's output tensor details]
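Whatever the exact packing of that single tensor, once it is decoded into candidate boxes and scores you still have to suppress overlapping detections. A minimal non-max suppression sketch in NumPy (the IoU threshold of 0.45 is just a common illustrative default, not anything mandated by Custom Vision):

```python
# Minimal non-max suppression sketch; boxes are (x1, y1, x2, y2) corners.
import numpy as np

def iou(a, b):
    """Intersection-over-union of two corner-format boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def non_max_suppression(boxes, scores, iou_threshold=0.45):
    """Return indices of boxes to keep, greedily taking the highest score
    and discarding any remaining box that overlaps a kept one too much."""
    order = np.argsort(scores)[::-1]  # highest score first
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_threshold for j in keep):
            keep.append(i)
    return keep
```

The same greedy logic ports directly to Java or Swift once you have the boxes and scores out of the interpreter, which is the part the bundled Python samples demonstrate.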

Iorek