
I am doing semantic segmentation of people with the DeepLabV3 mlmodel. The prediction output is a 513×513 MLMultiArray. Currently I resize my camera output down to this size in order to apply the segmentation map.

How can I resize the MLMultiArray to match my camera texture size?

if let observations = request.results as? [VNCoreMLFeatureValueObservation],
   let segmentationMap = observations.first?.featureValue.multiArrayValue {

    // DeepLabV3 outputs a 513 × 513 map; shape elements are NSNumbers.
    let rows = segmentationMap.shape[0].intValue  // 513
    let cols = segmentationMap.shape[1].intValue  // 513
}
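One way to do the opposite of what you have now (scale the mask up instead of the camera frame down) is to render the MLMultiArray into a small grayscale CGImage and let Core Graphics draw it at the camera's resolution. This is only a sketch under a couple of assumptions: that the model is Apple's DeepLabV3 variant whose output is Int32 class labels, and that class 15 (the PASCAL VOC "person" label) is the one you want; `resizedMask` is a hypothetical helper name.

```swift
import CoreML
import CoreGraphics

// Hypothetical helper: turn a 513 × 513 Int32 class-label MLMultiArray into a
// grayscale mask CGImage, then scale it to the camera size with Core Graphics.
// Interpolation is disabled so class labels aren't blended at the edges.
func resizedMask(from map: MLMultiArray, to size: CGSize) -> CGImage? {
    let rows = map.shape[0].intValue
    let cols = map.shape[1].intValue

    // One byte per pixel: 255 where the "person" class (15 in the PASCAL VOC
    // label set DeepLabV3 is trained on) was predicted, 0 elsewhere.
    var pixels = [UInt8](repeating: 0, count: rows * cols)
    let labels = map.dataPointer.bindMemory(to: Int32.self, capacity: rows * cols)
    for i in 0..<(rows * cols) {
        pixels[i] = labels[i] == 15 ? 255 : 0
    }

    guard let provider = CGDataProvider(data: Data(pixels) as CFData),
          let small = CGImage(width: cols, height: rows,
                              bitsPerComponent: 8, bitsPerPixel: 8,
                              bytesPerRow: cols,
                              space: CGColorSpaceCreateDeviceGray(),
                              bitmapInfo: CGBitmapInfo(rawValue: 0),
                              provider: provider, decode: nil,
                              shouldInterpolate: false,
                              intent: .defaultIntent)
    else { return nil }

    // Draw the 513 × 513 mask into a context at the camera's resolution.
    let ctx = CGContext(data: nil,
                        width: Int(size.width), height: Int(size.height),
                        bitsPerComponent: 8,
                        bytesPerRow: Int(size.width),
                        space: CGColorSpaceCreateDeviceGray(),
                        bitmapInfo: CGImageAlphaInfo.none.rawValue)
    ctx?.interpolationQuality = .none
    ctx?.draw(small, in: CGRect(origin: .zero, size: size))
    return ctx?.makeImage()
}
```

If you need the result as a Metal texture rather than a CGImage, you can upload the scaled image with MTKTextureLoader, or skip the CPU loop entirely and sample the small mask in a fragment shader, which is generally faster.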
Andy Jazz
Arun

1 Answer


Here's some demo code for using DeepLab V3 in an iOS app: https://github.com/hollance/SemanticSegmentationMetalDemo

Matthijs Hollemans
  • This is exactly what I needed. I had completed the segmentation part and was struggling with the texture heights. Thanks a lot. – Arun Dec 14 '20 at 11:07