Questions tagged [apple-vision]

Apple Vision is a high-level computer vision framework used to identify faces, detect and track features, and classify images, video, tabular data, audio, and motion sensor data.

The Apple Vision framework performs face and face-landmark detection on input images and video, as well as barcode recognition, image registration, text detection, and feature tracking. The Vision API also allows the use of custom Core ML models for tasks such as classification or object detection.

205 questions
0 votes · 0 answers

How to properly place an SCNNode on top of a QR code?

I want to detect QR codes on a vertical plane and place a node on top of the detected QR code. For QR detection, I used the Vision framework, and ARKit to place the nodes, as in the code below. Whenever the node is placed, it is not attached to the QR code and placed…
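A common cause of this symptom is Vision's coordinate system: boundingBox is normalized with a bottom-left origin, while UIKit views use a top-left origin, so the hit-test point lands in the wrong place. A minimal sketch of the placement step, assuming an ARSCNView named sceneView and ignoring the aspect-ratio mapping between the camera image and the view:

```swift
import ARKit
import SceneKit
import Vision

// A minimal sketch, assuming `sceneView` is an ARSCNView with plane detection
// enabled. Vision's boundingBox is normalized with its origin at the
// bottom-left, so the Y axis must be flipped before hit-testing.
func placeNode(for barcode: VNBarcodeObservation, in sceneView: ARSCNView) {
    // Convert the normalized center of the barcode into view coordinates.
    let center = CGPoint(x: barcode.boundingBox.midX,
                         y: 1 - barcode.boundingBox.midY) // flip Vision's Y axis
    let viewPoint = CGPoint(x: center.x * sceneView.bounds.width,
                            y: center.y * sceneView.bounds.height)

    // Hit-test against detected planes to find where the QR code lies in 3D.
    guard let result = sceneView.hitTest(viewPoint, types: .existingPlaneUsingExtent).first
    else { return }

    let node = SCNNode(geometry: SCNSphere(radius: 0.01))
    node.simdTransform = result.worldTransform // attach at the hit location
    sceneView.scene.rootNode.addChildNode(node)
}
```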
0 votes · 0 answers

CoreML performance cost of Vision Framework vs. CVPixelBuffer?

There are two options for passing images into Core ML models: pass a CGImage to the Vision framework with its niceties, or pass a CVPixelBuffer directly. Is there any data on the memory and processing overhead associated with using the Vision…
Nate Lowry • 391 • 3 • 9
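For reference, a hedged sketch of the two paths being compared; `MyModel` and its input name "image" are placeholders for a real generated model class, not a specific API. Vision adds its own scaling and pixel-format conversion pass, which is exactly the overhead in question:

```swift
import CoreML
import Vision

// A sketch of both input paths. `MyModel` and the input name "image" are
// placeholders; substitute the generated class and input of a real model.
final class Predictor {
    let coreMLModel: MLModel

    init() throws {
        coreMLModel = try MyModel(configuration: MLModelConfiguration()).model
    }

    // Path 1: Vision handles scaling, cropping, and pixel-format conversion.
    func predictWithVision(cgImage: CGImage) throws {
        let vnModel = try VNCoreMLModel(for: coreMLModel)
        let request = VNCoreMLRequest(model: vnModel)
        request.imageCropAndScaleOption = .centerCrop
        try VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
    }

    // Path 2: feed a CVPixelBuffer straight into Core ML; the caller must
    // resize it to the model's expected input dimensions first.
    func predictDirectly(pixelBuffer: CVPixelBuffer) throws -> MLFeatureProvider {
        let input = try MLDictionaryFeatureProvider(
            dictionary: ["image": MLFeatureValue(pixelBuffer: pixelBuffer)])
        return try coreMLModel.prediction(from: input)
    }
}
```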
0 votes · 1 answer

Image recognition from photos library

I need to create a neural network that will be trained using a set of pictures, for example 1000 pictures. Then I want this network to be able to take as input a video from the camera and detect whether it sees one of these pictures, but not on the entire…
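If training a network just for this is heavyweight, one possible alternative (a sketch, not necessarily the asker's approach) is Vision's feature-print API, which scores the similarity between two images without any custom training:

```swift
import CoreGraphics
import Vision

// A sketch using VNGenerateImageFeaturePrintRequest (iOS 13+): compute a
// feature print per image, then compare prints by distance.
func featurePrint(for image: CGImage) throws -> VNFeaturePrintObservation? {
    let request = VNGenerateImageFeaturePrintRequest()
    try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
    return request.results?.first as? VNFeaturePrintObservation
}

func distance(between a: CGImage, and b: CGImage) throws -> Float {
    guard let printA = try featurePrint(for: a),
          let printB = try featurePrint(for: b) else { return .infinity }
    var d: Float = 0
    try printA.computeDistance(&d, to: printB) // smaller means more similar
    return d
}
```

A camera frame could then be compared against the feature prints of the 1000 reference pictures, treating any distance below a tuned threshold as a match.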
0 votes · 1 answer

How to create an MLFeatureProvider class for the Vision framework

I am new to CoreML and am having difficulty turning an MLMultiArray (named modelInput) into the required type MLFeatureProvider to pass as a parameter when using myMLModel.prediction(from: modelInput). The error reads: Argument type…
The Swift Coder • 390 • 3 • 13
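A minimal sketch of the usual fix: wrap the MLMultiArray in an MLDictionaryFeatureProvider keyed by the model's input name. The key "modelInput" below is an assumption; the real input name is shown in Xcode's model viewer:

```swift
import CoreML

// A minimal sketch: wrap the MLMultiArray in an MLDictionaryFeatureProvider.
// The key "modelInput" is an assumption; check the model's actual input name.
func makeProvider(from modelInput: MLMultiArray) throws -> MLFeatureProvider {
    try MLDictionaryFeatureProvider(
        dictionary: ["modelInput": MLFeatureValue(multiArray: modelInput)])
}

// Usage: let output = try myMLModel.prediction(from: makeProvider(from: modelInput))
```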
0 votes · 1 answer

Value of type UIImage has no member cgImageOrientation

I am new to Core ML and was reading a book when I came across a code snippet; when I tried to run it I got the above error. I Googled but had no luck. Here is my code: import UIKit import Vision extension UIImage { func…
Yuvraj Agarkar • 125 • 1 • 10
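cgImageOrientation is not part of UIKit; books typically define it in an earlier snippet. A sketch of the presumably missing extension, mapping UIImage.Orientation to the CGImagePropertyOrientation values that Vision request handlers expect:

```swift
import ImageIO
import UIKit

// The book's snippet relies on a helper UIKit does not provide. A sketch of
// the likely missing extension: map UIImage.Orientation to the
// CGImagePropertyOrientation values Vision handlers expect.
extension UIImage {
    var cgImageOrientation: CGImagePropertyOrientation {
        switch imageOrientation {
        case .up:            return .up
        case .down:          return .down
        case .left:          return .left
        case .right:         return .right
        case .upMirrored:    return .upMirrored
        case .downMirrored:  return .downMirrored
        case .leftMirrored:  return .leftMirrored
        case .rightMirrored: return .rightMirrored
        @unknown default:    return .up
        }
    }
}
```

The book may declare it as a method rather than a computed property; if the call site reads `image.cgImageOrientation()`, change `var` to `func` accordingly.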
0 votes · 1 answer

Applying a translation to the root node does not work as expected from the second time onwards

This is a continuation of my previous question, "How to move multiple nodes in ARSCNView". The previous program was a prototype. The purpose of this function is to move the AR object, which was added via hand tracking, using hand tracking by the…
tonywang • 181 • 2 • 13
0 votes · 2 answers

How do I update a TableView row only for unique values in Swift?

I have code that detects objects in real time; however, these objects are reloaded almost every frame, producing a huge amount of data. I want to find a way to detect the objects without having to reload the frame every second. I've attached some…
tamerjar • 220 • 2 • 12
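One common pattern (a sketch, assuming a VNRecognizedObjectObservation pipeline and a prototype cell registered under the identifier "cell"): track labels already shown in a Set and insert a row only when a genuinely new label arrives:

```swift
import UIKit
import Vision

// A sketch: deduplicate detections with a Set so the table only gains a row
// for labels it has not displayed before, instead of reloading every frame.
final class DetectionListController: UITableViewController {
    private var seenLabels = Set<String>()
    private var rows: [String] = []

    func handle(_ observations: [VNRecognizedObjectObservation]) {
        DispatchQueue.main.async {
            for observation in observations {
                guard let label = observation.labels.first?.identifier,
                      self.seenLabels.insert(label).inserted // false if already seen
                else { continue }
                self.rows.append(label)
                self.tableView.insertRows(
                    at: [IndexPath(row: self.rows.count - 1, section: 0)],
                    with: .automatic)
            }
        }
    }

    override func tableView(_ tableView: UITableView,
                            numberOfRowsInSection section: Int) -> Int { rows.count }

    override func tableView(_ tableView: UITableView,
                            cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        let cell = tableView.dequeueReusableCell(withIdentifier: "cell", for: indexPath)
        cell.textLabel?.text = rows[indexPath.row]
        return cell
    }
}
```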
0 votes · 0 answers

Xcode iOS barcode scanner in background mode using Apple Vision framework

I'm looking to see if it's possible to build an iOS barcode scanner with Apple's Vision framework (Xcode, Swift). I know how to build a barcode scanner when the view controller is active; however, I'm looking to see if it's somehow possible to scan without a…
dweb • 141 • 7
0 votes · 1 answer

How can I resize my MLMultiArray to fit my camera texture size?

I am doing semantic segmentation of people using the DeepLabV3 mlmodel. The output after prediction is a 513×513 MLMultiArray. Currently I am resizing my camera output to this size to apply the segmented array. How can I resize the MLMultiArray…
Arun • 222 • 1 • 10
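An alternative worth sketching: instead of shrinking the camera output, render the 513×513 Int32 array into a small grayscale mask and scale that mask up to the camera texture size with Core Image. The person class index 15 is an assumption based on the VOC label set DeepLabV3 is commonly trained on:

```swift
import CoreGraphics
import CoreImage
import CoreML
import Foundation

// A sketch, assuming DeepLabV3's 513×513 Int32 "semanticPredictions" output.
func maskImage(from segmentation: MLMultiArray, targetSize: CGSize) -> CIImage? {
    let height = segmentation.shape[0].intValue
    let width = segmentation.shape[1].intValue
    let pointer = segmentation.dataPointer.bindMemory(to: Int32.self,
                                                      capacity: width * height)

    // One byte per pixel: 255 where the person class (assumed index 15) was
    // predicted, 0 elsewhere.
    var bytes = [UInt8](repeating: 0, count: width * height)
    for i in 0..<(width * height) {
        bytes[i] = pointer[i] == 15 ? 255 : 0
    }

    guard let provider = CGDataProvider(data: Data(bytes) as CFData),
          let cgMask = CGImage(width: width, height: height,
                               bitsPerComponent: 8, bitsPerPixel: 8,
                               bytesPerRow: width,
                               space: CGColorSpaceCreateDeviceGray(),
                               bitmapInfo: CGBitmapInfo(rawValue: 0),
                               provider: provider, decode: nil,
                               shouldInterpolate: false, intent: .defaultIntent)
    else { return nil }

    // Scale the small mask up to the camera texture size.
    let scaleX = targetSize.width / CGFloat(width)
    let scaleY = targetSize.height / CGFloat(height)
    return CIImage(cgImage: cgMask)
        .transformed(by: CGAffineTransform(scaleX: scaleX, y: scaleY))
}
```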
0 votes · 1 answer

How to know what class the image belongs to?

I am trying to play around with Vision and CoreML. Code: extension CaptureImageView { private func loadImage() { guard let inputImage = image else { return } predictImage = inputImage performImageClassification() } …
Jkrist • 748 • 1 • 6 • 24
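A minimal sketch of the usual answer: the class label is the identifier of the top VNClassificationObservation, and Vision returns results sorted by confidence. `MyClassifier` is a placeholder for the generated model class:

```swift
import CoreGraphics
import CoreML
import Vision

// A minimal sketch: run a classifier through Vision and read the top label.
// `MyClassifier` is a placeholder for a real generated model class.
func performImageClassification(on cgImage: CGImage) throws {
    let model = try VNCoreMLModel(
        for: MyClassifier(configuration: MLModelConfiguration()).model)
    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first // results come sorted by confidence
        else { return }
        print("Class: \(top.identifier), confidence: \(top.confidence)")
    }
    try VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
}
```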
0 votes · 0 answers

Unable to retrieve MLModel from Core ML Model Deployment

I am very new to Core ML and I want to retrieve a model from Core ML Model Deployment, which was released this year at WWDC. I made an app that classifies special and rare things, and I uploaded that model.archive to the Core ML Model Deployment…
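For context, a sketch of the iOS 14 Model Deployment flow via MLModelCollection; "MyCollection" and "MyModel" are placeholders for the IDs configured in the deployment dashboard:

```swift
import CoreML

// A sketch of the iOS 14 Model Deployment flow. "MyCollection" and "MyModel"
// are placeholders for the collection and model IDs in the dashboard.
func loadDeployedModel(completion: @escaping (MLModel?) -> Void) {
    _ = MLModelCollection.beginAccessing(identifier: "MyCollection") { result in
        switch result {
        case .success(let collection):
            // Each entry's modelURL points at the downloaded, compiled model.
            guard let entry = collection.entries["MyModel"] else {
                completion(nil); return
            }
            completion(try? MLModel(contentsOf: entry.modelURL))
        case .failure(let error):
            print("Model collection unavailable: \(error)")
            completion(nil)
        }
    }
}
```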
0 votes · 1 answer

Unable to get boundingBox from the result of a VNCoreMLRequest

I'm trying to use Vision with a custom model I trained, but I don't see a way to get the bounding box where Vision detected the object in the frame. The model: I trained it using Create ML, and it can detect 2 specific items. I tested the model in…
thedp • 8,350 • 16 • 53 • 95
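A minimal sketch of the usual extraction step: cast the results to VNRecognizedObjectObservation and read boundingBox, remembering that it is normalized with a bottom-left origin:

```swift
import CoreGraphics
import Vision

// A sketch: a Create ML object detector yields VNRecognizedObjectObservation,
// whose boundingBox is a normalized rect with a bottom-left origin.
func handleDetections(for request: VNRequest, in viewSize: CGSize) {
    guard let observations = request.results as? [VNRecognizedObjectObservation]
    else { return }
    for observation in observations {
        let box = observation.boundingBox // normalized, bottom-left origin
        let rect = CGRect(x: box.minX * viewSize.width,
                          y: (1 - box.maxY) * viewSize.height, // flip Y for UIKit
                          width: box.width * viewSize.width,
                          height: box.height * viewSize.height)
        let label = observation.labels.first?.identifier ?? "?"
        print("\(label): \(rect)")
    }
}
```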
0 votes · 1 answer

Timing difference when recognizing photos in Swift

I created an application that recognizes flowers in a photo. The photo can come from the gallery or be taken with the camera. I have this function: func detectFlower(image: CIImage, completion: @escaping (_ getString: String?, _ error: Error?, _…
newbieHere • 276 • 2 • 15
0 votes · 1 answer

How to change the color of a detected object in real time using Swift, Core ML, and Vision?

How can I get the actual shape of the detected object, for example, to change its color? In the picture below you can see that a banana is detected and boxed in a rect, but how can I get the actual detected shape of it, for example, to make the…
Sasha • 13 • 5
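Worth noting: an object detector only yields a bounding box, so recoloring the banana's true outline requires a segmentation mask from a separate model. A sketch, assuming such a mask is already available as a grayscale CIImage:

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

// A sketch: given a grayscale segmentation mask (white where the object is),
// blend a tinted copy of the image over the original using the mask.
func recolor(image: CIImage, mask: CIImage, tint: CIColor) -> CIImage {
    let colorFilter = CIFilter.colorMonochrome()
    colorFilter.inputImage = image
    colorFilter.color = tint
    colorFilter.intensity = 1.0

    let blend = CIFilter.blendWithMask()
    blend.inputImage = colorFilter.outputImage // tinted where the mask is white
    blend.backgroundImage = image              // original elsewhere
    blend.maskImage = mask
    return blend.outputImage ?? image
}
```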
0 votes · 1 answer

How can I use the iOS Vision framework from Objective-C?

The example code in the Apple documentation for detecting still images is Swift-only. Most tutorials also seem to be in Swift and say to just "import Vision" in the header, but do not explain how to get the compiler to identify Vision…
D114 • 31 • 5