
I created an ML model using Create ML that classifies images as Cat, Dog, or Rabbit. Tapping an imageView lets the user select an image from the photo library; the selected image is processed by the model and the result is printed to a label. When I test the model in Create ML it works fine, but in the application it gives the same wrong results for every image. After the app starts, no matter which image is selected, it returns these results in this order each time:

  • 77% Dog
  • 83% Cat
  • 71% Cat
  • 56% Cat ...

Please help me. Thank you for your interest.

When the application is opened for the first time:

When selecting an image from the photo library:

First result

Second result

Code:


import UIKit
import Photos
import PhotosUI
import CoreML
import Vision

class ViewController: UIViewController, PHPickerViewControllerDelegate, UINavigationBarDelegate {
    
    @IBOutlet weak var imageView: UIImageView!
    
    @IBOutlet weak var resultLabel: UILabel!
    
    override func viewDidLoad() {
        super.viewDidLoad()
        
        imageView.isUserInteractionEnabled = true
        
        let gesture = UITapGestureRecognizer(target: self, action: #selector(selectImage))
        
        imageView.addGestureRecognizer(gesture)
    }
    
    @objc func selectImage(){
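        // Clear the current image, then present a single-selection photo picker limited to images.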
        
        imageView.image = UIImage()
        var configuration = PHPickerConfiguration(photoLibrary: .shared())
        
        configuration.selectionLimit = 1
        configuration.filter = PHPickerFilter.images
        let vc = PHPickerViewController(configuration: configuration)
        vc.delegate = self
        present(vc, animated: true)
    }
    
    func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]) {
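        // Load the selected image, run it through the model, then show it in the image view.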
        picker.dismiss(animated: true)
        
        DispatchQueue.global().async {
            guard let itemProvider = results.first?.itemProvider else { return }
            itemProvider.loadObject(ofClass: UIImage.self) { [weak self] (reading, error) in
                guard let imageSelected = reading as? UIImage, error == nil else { return }
                print("Selected image = \(imageSelected)")
                
                guard let ciimage = CIImage(image: imageSelected) else {fatalError("Problem while converting to CIImage")}
                
                self?.detectImage(image: ciimage)
                
                
                DispatchQueue.main.async {
                    self?.imageView.image = imageSelected
                }
            }
        }
    }
    
    func detectImage(image : CIImage){
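        // Classify the given CIImage with the Core ML model via Vision and show the top result in the label.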
        
        let config = MLModelConfiguration()
        
        guard let model  = try? VNCoreMLModel(for: DogCatRabbitMLTry_1.init(configuration: config).model) else {fatalError("Loading CoreML Model Failed")}
        
        let request = VNCoreMLRequest(model: model) { (request, error) in
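            // Vision returns classification observations sorted by confidence; take the first (highest-confidence) one.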
            guard let results = request.results as? [VNClassificationObservation] else {
                    fatalError("Model failed to process image")
            }
            if let firstResult = results.first{
                print(firstResult.identifier)
                
                DispatchQueue.main.async {
                    self.resultLabel.text = "%\(Int(firstResult.confidence*100)) \(firstResult.identifier)"
                }
            }
        }
        
        let handler = VNImageRequestHandler(ciImage: image)
        
        do{
            try handler.perform([request])
        } catch{
            fatalError("ciimage was not handling")
        }
    }
    
    /*
    func resizeImage(image: UIImage, targetSize: CGSize) -> UIImage? {
        let size = image.size
        
        let widthRatio  = targetSize.width  / size.width
        let heightRatio = targetSize.height / size.height
        
        // Figure out what our orientation is, and use that to form the rectangle
        var newSize: CGSize
        if(widthRatio > heightRatio) {
            newSize = CGSize(width: size.width * heightRatio, height: size.height * heightRatio)
        } else {
            newSize = CGSize(width: size.width * widthRatio, height: size.height * widthRatio)
        }
        
        // This is the rect that we've calculated out and this is what is actually used below
        let rect = CGRect(origin: .zero, size: newSize)
        
        // Actually do the resizing to the rect using the ImageContext stuff
        UIGraphicsBeginImageContextWithOptions(newSize, false, 1.0)
        image.draw(in: rect)
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        
        return newImage
    }
     */
}


I tried the following:

  • Running the operations on different threads
  • Resizing the selected image (see the sketch after this list)
  • Remaking the model
  • Changing the PHPicker configuration
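
For reference, the resize attempt was wired into the picker's completion handler roughly like the snippet below (with the resizeImage helper above uncommented). The 299 x 299 target size is only an assumption about the model's expected input size, since that is what Create ML image classifiers typically use; it did not change the results either.

// Hypothetical wiring inside the loadObject completion handler; 299x299 is an
// assumed input size for the Create ML classifier, not taken from the actual model.
guard let resized = self?.resizeImage(image: imageSelected,
                                      targetSize: CGSize(width: 299, height: 299)),
      let ciimage = CIImage(image: resized) else { return }
self?.detectImage(image: ciimage)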
