I have searched all over for examples of how to mimic the newer iOS Photos app's capability to lift an object from its background in my own SwiftUI project, but haven't found anything better than this example:
https://betterprogramming.pub/coreml-image-segmentation-background-remove-ca11e6f6a083
which does a decent job, but the results don't compare to the quality of the segmentation the Photos app pulls off (starting with iOS 16, I believe). Presumably the functionality is available in Xcode via the various Core ML models, but I can't find any examples of how to pull off similarly clean image segmentation.
When implementing the code from that article, which uses the DeepLabV3 image segmentation model (a condensed version is included at the end of this question), I get the following results:
Here is what the Photos app is able to accomplish, for comparison:
I'm very new to Swift programming, so maybe I'm being a bit naive about how much of the built-in iOS apps' functionality is exposed to developers. Hoping someone can point me in the right direction.
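For reference, here is a condensed sketch of what I'm running, adapted from the article above. It assumes the DeepLabV3.mlmodel from Apple's model gallery has been added to the project (so Xcode generates the `DeepLabV3` class); the function name is my own, and the thresholding/compositing step is only summarized in the comments:

```swift
import UIKit
import Vision
import CoreML

// Condensed sketch of the segmentation call from the article, assuming the
// DeepLabV3.mlmodel from Apple's model gallery has been added to the project.
func segmentationMask(for image: UIImage,
                      completion: @escaping (MLMultiArray?) -> Void) {
    guard let cgImage = image.cgImage,
          let mlModel = try? DeepLabV3(configuration: MLModelConfiguration()).model,
          let visionModel = try? VNCoreMLModel(for: mlModel) else {
        completion(nil)
        return
    }

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // DeepLabV3 outputs a 513x513 multi-array of class labels
        // (0 = background); the article then thresholds this into a
        // mask image and composites it over the original photo.
        let result = request.results?.first as? VNCoreMLFeatureValueObservation
        completion(result?.featureValue.multiArrayValue)
    }
    request.imageCropAndScaleOption = .scaleFill

    DispatchQueue.global(qos: .userInitiated).async {
        let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
        try? handler.perform([request])
    }
}
```

Even when this works, the mask is limited to DeepLabV3's 513x513 output resolution and its fixed set of object classes, which may partly explain why the edges look so much rougher than the Photos app's subject lift.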