
I have searched all over for examples of how to mimic the newer iOS Photos app's ability to lift a subject from its background in my own SwiftUI project, but haven't found anything better than this example:

https://betterprogramming.pub/coreml-image-segmentation-background-remove-ca11e6f6a083

which does a decent job, but the results don't compare to the quality of the segmentation the Photos app pulls off (starting with iOS 16, I believe). Presumably all the functionality is available in Xcode via the various Core ML models, but I can't find any examples of how to pull off similarly clean image segmentation.

When implementing the code found in this example (https://betterprogramming.pub/coreml-image-segmentation-background-remove-ca11e6f6a083), which uses the DeepLabV3 image segmentation model, I get the following results:

[Image: DeepLabV3 segmentation result]
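For context, here is roughly what that tutorial's pipeline boils down to in my project (a condensed sketch, not my exact code: it assumes the DeepLabV3 model from Apple's Core ML model gallery has been added to the project so that Xcode generates the `DeepLabV3` class, and `removeBackground`/`binaryMask` are just my names for the helpers):

```swift
import UIKit
import Vision
import CoreML
import CoreImage.CIFilterBuiltins

// Condensed sketch of the tutorial's DeepLabV3 background-removal pipeline.
// Assumes DeepLabV3.mlmodel (Apple's Core ML model gallery) is in the project.
func removeBackground(from image: UIImage, completion: @escaping (UIImage?) -> Void) {
    guard let cgImage = image.cgImage,
          let deepLab = try? DeepLabV3(configuration: MLModelConfiguration()),
          let model = try? VNCoreMLModel(for: deepLab.model) else {
        completion(nil)
        return
    }

    let request = VNCoreMLRequest(model: model) { request, _ in
        // DeepLabV3 outputs a 513x513 map of class labels (0 = background).
        guard let observation = request.results?.first as? VNCoreMLFeatureValueObservation,
              let labelMap = observation.featureValue.multiArrayValue,
              let maskCG = binaryMask(from: labelMap) else {
            completion(nil)
            return
        }

        // Scale the mask up to the input size, then keep only the masked pixels.
        let inputImage = CIImage(cgImage: cgImage)
        let mask = CIImage(cgImage: maskCG).transformed(by: CGAffineTransform(
            scaleX: inputImage.extent.width / CGFloat(maskCG.width),
            y: inputImage.extent.height / CGFloat(maskCG.height)))

        let blend = CIFilter.blendWithMask()
        blend.inputImage = inputImage
        blend.maskImage = mask
        blend.backgroundImage = CIImage(color: .clear).cropped(to: inputImage.extent)

        let context = CIContext()
        guard let output = blend.outputImage,
              let rendered = context.createCGImage(output, from: inputImage.extent) else {
            completion(nil)
            return
        }
        completion(UIImage(cgImage: rendered))
    }
    request.imageCropAndScaleOption = .scaleFill

    DispatchQueue.global(qos: .userInitiated).async {
        let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
        try? handler.perform([request])
    }
}

// Turns the label map into a grayscale mask (white = subject, black = background).
func binaryMask(from labelMap: MLMultiArray) -> CGImage? {
    let height = labelMap.shape[0].intValue
    let width = labelMap.shape[1].intValue
    var pixels = [UInt8](repeating: 0, count: width * height)
    for i in 0..<(width * height) {
        // Treat any non-zero class label as foreground.
        pixels[i] = labelMap[i].int32Value == 0 ? 0 : 255
    }
    guard let provider = CGDataProvider(data: Data(pixels) as CFData) else { return nil }
    return CGImage(width: width, height: height,
                   bitsPerComponent: 8, bitsPerPixel: 8, bytesPerRow: width,
                   space: CGColorSpaceCreateDeviceGray(),
                   bitmapInfo: CGBitmapInfo(rawValue: 0),
                   provider: provider, decode: nil,
                   shouldInterpolate: false, intent: .defaultIntent)
}
```

Even when this runs correctly, the mask edges come out blocky at the model's 513x513 resolution, which is the quality gap versus Photos that I'm trying to close.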

Here is what the Photos App is able to accomplish for comparison:

[Image: Photos app segmentation result]

I'm very new to Swift programming, so maybe I'm being a bit naive about what access there is to built-in iOS app functionality. Hoping someone can point me in the right direction.

  • I haven't looked, but that feature is too new. Apple doesn't always provide access to what it builds, and more often than not we don't get such features pre-built for a few years. – lorem ipsum Mar 23 '23 at 12:44

0 Answers