
I have this model.

On a Mac, when an image is given as input, it detects objects and indicates what they are.

https://i.stack.imgur.com/qdFBr.png

However, this model has four outputs.

I think the first output is the result.
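
To confirm which output is which, the outputs can be listed with something like the sketch below (ObjectDetector is a placeholder for the auto-generated model class name):

import CoreML

// Minimal sketch: print every output's name and feature description
// (type, shape, etc.). `ObjectDetector` is a placeholder for the
// auto-generated model class name.
let model = try! ObjectDetector(configuration: MLModelConfiguration()).model
for (name, desc) in model.modelDescription.outputDescriptionsByName {
    print(name, desc)
}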

So, in Python, I converted that output to an image type as follows and saved the model as an .mlmodel (the file names below are placeholders):

import coremltools as ct
import coremltools.proto.FeatureTypes_pb2 as ft

# Load the spec, retype the first output as a 416x416 grayscale image,
# and save the modified model (file names are placeholders).
spec = ct.utils.load_spec("Model.mlmodel")
output = spec.description.output[0]
output.type.imageType.colorSpace = ft.ImageFeatureType.GRAYSCALE
output.type.imageType.height = 416
output.type.imageType.width = 416
ct.utils.save_spec(spec, "ModelGrayscale.mlmodel")

In Swift, the converted output is exposed in this form (in the auto-generated model output class):

lazy var var_944: CVPixelBuffer = {
        [unowned self] in return self.provider.featureValue(for: "var_944")!.imageBufferValue
    }()!
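
I run the model roughly like this (a minimal sketch; ObjectDetector and the image input label are placeholders for the auto-generated names):

import CoreML

// Minimal sketch: run one prediction and return the grayscale output
// buffer. `ObjectDetector` and the `image` input label are placeholders
// for the auto-generated class and input names.
func runModel(on inputBuffer: CVPixelBuffer) throws -> CVPixelBuffer {
    let detector = try ObjectDetector(configuration: MLModelConfiguration())
    let result = try detector.prediction(image: inputBuffer)
    return result.var_944
}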

After feeding an image to the input, I convert the output CVPixelBuffer to a UIImage and set it on a UIImageView, but no image appears.
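
The conversion I use looks roughly like this (a minimal sketch going through CIImage, in case this step is the problem):

import UIKit
import CoreImage

// Minimal sketch: convert the model's CVPixelBuffer output to a UIImage
// via CIImage so it can be displayed in a UIImageView.
func image(from pixelBuffer: CVPixelBuffer) -> UIImage? {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    let context = CIContext()
    guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}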

Does anyone know the solution?

(Please understand that I used Papago because I am not good at English.)

Comments:
  • Why do you set ft.ImageFeatureType.GRAYSCALE when the output is (1, 3, 640, 640)? You need to set ft.ImageFeatureType.ColorSpace.Value('RGB') and a size of 640x640. – Dmytro Hrebeniuk Apr 22 '22 at 17:55
  • My model's output is an MLMultiArray (1 x 25200 x 85), so I can't change it to an RGB image. – Mammam Apr 24 '22 at 16:32
  • Oh, yes, it's YOLOv5s. In that case you can use the article from this author: https://rockyshikoku.medium.com/convert-yolov5-to-coreml-also-add-a-decode-layer-113408b7a848 – Dmytro Hrebeniuk Apr 24 '22 at 16:38

0 Answers