I'm trying to use the Vision framework on iOS. Specifically, I'm using VNGenerateOpticalFlowRequest as shown at WWDC 2020. I have the following Swift code in my app, which I'm running on an iPhone 14 simulator (Xcode 14.3.1):
// Flow.swift
import CoreImage
import Foundation
import Vision

func flow() throws {
    // Two identical 100x100 solid-blue frames.
    let image1 = CIImage(color: .blue).cropped(to: .init(x: 0, y: 0, width: 100, height: 100))
    let image2 = CIImage(color: .blue).cropped(to: .init(x: 0, y: 0, width: 100, height: 100))

    // Ask Vision to compute the optical flow between the two frames.
    let visionRequest = VNGenerateOpticalFlowRequest(targetedCIImage: image1)
    let requestHandler = VNSequenceRequestHandler()
    try requestHandler.perform([visionRequest], on: image2)
}
// ContentView.swift
import SwiftUI

struct ContentView: View {
    var body: some View {
        Button("Calculate flow") {
            try! flow()
        }
    }
}
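In case it's relevant: once perform(_:on:) succeeds, I plan to read the flow field at the end of flow() roughly like this (a sketch, based on my understanding that VNGenerateOpticalFlowRequest produces a VNPixelBufferObservation):

    // Sketch: what I'd add at the end of flow() once the request succeeds.
    // My understanding is that the observation's pixel buffer holds the
    // per-pixel flow vectors as floating-point values.
    if let observation = visionRequest.results?.first {
        let flowBuffer = observation.pixelBuffer
        print("Flow field size:",
              CVPixelBufferGetWidth(flowBuffer), "x",
              CVPixelBufferGetHeight(flowBuffer))
    }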
When I click the button in ContentView, I expect the optical flow between the two images to be calculated. Instead, the call fails with this fatal error:
2023-07-16 12:51:45.669412-0700 optical-flow-test[5994:125790] [espresso] [Espresso::handle_ex_plan] exception=Espresso exception: "I/O error": Missing weights path cnn_moflow.espresso.weights status=-2
optical_flow_test/ContentView.swift:8: Fatal error: 'try!' expression unexpectedly raised an error: Error Domain=com.apple.vis Code=9 "Failed to create motion flow estimator" UserInfo={NSLocalizedDescription=Failed to create motion flow estimator}
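For what it's worth, the crash itself is just the try! surfacing the error; catching it instead, along these lines, presumably just logs the same com.apple.vis error without helping me compute the flow:

    // ContentView.swift (alternative): catch the error instead of crashing.
    Button("Calculate flow") {
        do {
            try flow()
        } catch {
            // On the Simulator this would print the com.apple.vis Code=9 error above.
            print("Optical flow failed:", error)
        }
    }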
This does not seem to occur when running the app on a real device.
(I also tried running the sample code for "Building a feature-rich app for sports analysis" with the provided sample video in the Simulator, and it appears to get stuck on "Loading Board".)
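The only workaround I can think of so far is to skip the request entirely when running in the Simulator, along these lines (flowIfSupported is just a placeholder wrapper name), but that obviously doesn't compute any flow:

    // Sketch of a possible workaround: only attempt optical flow on a real device,
    // since the Simulator appears to be missing the required model weights.
    func flowIfSupported() throws {
        #if targetEnvironment(simulator)
        print("Skipping optical flow: VNGenerateOpticalFlowRequest fails in the Simulator")
        #else
        try flow()
        #endif
    }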
Why is this error produced, and how can I get the optical flow estimator to work in the Simulator?