
I am trying to run Stable Diffusion's Core ML model in a SwiftUI app on an M2 Pro Mac running macOS Ventura 13.2.1. I downloaded the Core ML Stable Diffusion models from the Hugging Face Hub, and the app compiled and ran on my Mac without issues.

**The Core ML Stable Diffusion files in my project:**

[Screenshot of the Core ML Stable Diffusion files in the project](https://i.stack.imgur.com/5li9m.png)

I downloaded the Core ML models by cloning the repo: `git clone https://huggingface.co/apple/coreml-stable-diffusion-v1-4`
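For reference, this Hugging Face repo stores its large weight files with Git LFS, so the clone only produces usable model files if `git-lfs` is installed first; a minimal sketch of the download step:

```shell
# The Hugging Face repo tracks its large weight files with Git LFS;
# without git-lfs the clone contains tiny pointer files instead of the
# real .mlmodelc contents, and Core ML cannot load them.
git lfs install
git clone https://huggingface.co/apple/coreml-stable-diffusion-v1-4

# Sanity check: the cloned directory should be gigabytes, not kilobytes.
du -sh coreml-stable-diffusion-v1-4
```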

However, when I click the Generate button, which triggers the `generateImage` function, I get this error:

Thread 11: Fatal error: 'try!' expression unexpectedly raised an error: Error Domain=com.apple.CoreML Code=0 "Unable to load model: file:///Users/landon/Library/Developer/Xcode/DerivedData/ArtSenseiPro-abnumyedexjximglshwxziyuddrp/Build/Products/Debug/ArtSenseiPro.app/Contents/Resources/TextEncoder.mlmodelc/. Compile the model with Xcode or `MLModel.compileModel(at:)`. " UserInfo={NSLocalizedDescription=Unable to load model: file:///Users/landon/Library/Developer/Xcode/DerivedData/ArtSenseiPro-abnumyedexjximgls
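For context, the `MLModel.compileModel(at:)` route that the error message mentions looks roughly like this (a minimal sketch; the path is a placeholder, and the Hugging Face repo normally ships already-compiled `.mlmodelc` bundles, so this step should only be needed for uncompiled `.mlmodel`/`.mlpackage` files):

```swift
import CoreML
import Foundation

// Placeholder path to an *uncompiled* model file.
let sourceURL = URL(fileURLWithPath: "/path/to/TextEncoder.mlpackage")

do {
    // compileModel(at:) writes a compiled .mlmodelc bundle into a
    // temporary directory and returns its URL; move it somewhere
    // permanent if you want to reuse it across launches.
    let compiledURL = try MLModel.compileModel(at: sourceURL)
    let model = try MLModel(contentsOf: compiledURL)
    print("Loaded compiled model from \(compiledURL.path)")
} catch {
    print("Failed to compile/load model: \(error)")
}
```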

Here is my entire code:

```swift
import SwiftUI
import Combine
import StableDiffusion

@available(macOS 13.1, *)
@available(iOS 16.2, *)
struct ContentView: View {
    
    @State var prompt: String = "Penguins in business suits debating whether to invest in an ice cream stand on the beach."
    @State var pipeline: StableDiffusionPipeline?
    @State var image: CGImage?
    @State var progress = 0.0
    @State var generating = false
    @State var initializing = true
    
    var body: some View {
        VStack {
            if initializing {
                Text("Initializing...")
            } else {
                if let image = self.image {
                    Image(image, scale: 1.0, label: Text(""))
                }
                if generating {
                    Spacer()
                    ProgressView(value: progress)
                    Text("generating (\(Int(progress*100)) %)")
                } else {
                    Spacer()
                    TextField("Prompt", text: $prompt)
                    Button("Generate") {
                        generateImage()
                    }
                }
            }
        }
        .padding()
        .task {
            guard let resourceURL = Bundle.main.resourceURL else {
                return
            }
            do {
                pipeline = try StableDiffusionPipeline(resourcesAt: resourceURL)
            } catch let error {
                print(error.localizedDescription)
            }
            initializing = false
        }
    }
    
    func generateImage(){
        progress = 0.0
        image = nil
        generating = true
        Task.detached(priority: .high) {
            var images: [CGImage?]?
            do {
                images = try pipeline?.generateImages(prompt: prompt, disableSafety: false, progressHandler: { progress in
                    self.progress = Double(progress.step) / 50
                    if let image = progress.currentImages.first {
                        self.image = image
                    }
                    return true
                })
            } catch let error {
                print(error.localizedDescription)
            }
            if let image = images?.first {
                self.image = image
            }
            generating = false
        }
    }
}
```

I have tried re-downloading the model files many times, but that didn't work.

Any help would be appreciated. Thanks!

  • May or may not be related, but `Task` is not compatible with completion-handler closures without a `CheckedContinuation` that bridges the closure into the new Swift Concurrency model; the `Task` is likely dead long before `generateImages` does anything. You can just remove it and use `onAppear` instead. Watch "Meet async/await". – lorem ipsum Mar 26 '23 at 09:54
  • [Here](https://stackoverflow.com/questions/75273987/wrong-offsets-when-displaying-multiple-vnrecognizedobjectobservation-boundingbox/75316939#75316939) is a sample with another ML model showing how you would do the conversion and actually `await` the result. – lorem ipsum Mar 26 '23 at 09:58
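For what it's worth, the `CheckedContinuation` bridging that the first comment describes looks roughly like this (a generic sketch; `legacyGenerate` is a hypothetical completion-handler API standing in for any callback-based call):

```swift
import Foundation

// Hypothetical legacy API that reports its result via a completion handler.
func legacyGenerate(prompt: String,
                    completion: @escaping (Result<String, Error>) -> Void) {
    completion(.success("image for \(prompt)"))
}

// Bridge the completion handler into async/await with a
// CheckedContinuation, so callers can actually `await` the result
// from inside a Task.
func generate(prompt: String) async throws -> String {
    try await withCheckedThrowingContinuation { continuation in
        legacyGenerate(prompt: prompt) { result in
            continuation.resume(with: result)
        }
    }
}
```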

0 Answers