
TL;DR: How do you encode and decode an MTLSharedTextureHandle and MTLSharedEventHandle so that they can be transported across an XPC connection inside an xpc_dictionary?


A macOS application I'm working on makes extensive use of XPC services and was implemented using the C-based API (e.g. xpc_main, xpc_connection, xpc_dictionary, ...). This made sense at the time because certain objects, like IOSurfaces, did not support NSCoding/NSSecureCoding and had to be passed using IOSurfaceCreateXPCObject.

In macOS 10.14, Apple introduced new classes for sharing Metal textures and events between processes: MTLSharedTextureHandle and MTLSharedEventHandle. These classes support NSSecureCoding, but they don't appear to have a counterpart in the C XPC interface for encoding/decoding them.

I thought I could use something like [NSKeyedArchiver archivedDataWithRootObject:requiringSecureCoding:error:] to just convert them to NSData objects, which can then be stored in an xpc_dictionary, but when I try to do that, I get the following exception:

Caught exception during archival: 
This object may only be encoded by an NSXPCCoder.

(NSXPCCoder is a private class.)
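
Here's roughly what that attempt looks like (`textureHandle` and `connection` stand in for objects my service already has):

```objc
NSError *error = nil;
NSData *data = [NSKeyedArchiver archivedDataWithRootObject:textureHandle
                                     requiringSecureCoding:YES
                                                     error:&error];
// ^ throws: "This object may only be encoded by an NSXPCCoder."

if (data != nil) {
    // What I hoped to do next: ship the archived bytes in the message.
    xpc_object_t message = xpc_dictionary_create(NULL, NULL, 0);
    xpc_dictionary_set_data(message, "texture_handle", data.bytes, data.length);
    xpc_connection_send_message(connection, message);
}
```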

This happens for both MTLSharedTextureHandle and MTLSharedEventHandle. I could switch over to using the new NSXPCConnection API, but I've already got an extensive amount of code built on the C interface, so I'd rather not have to make the switch.

Is there any way to archive either of those two classes into a payload that can be stored in an xpc_dictionary for transfer between the service and the client?

kennyc
  • What happens if you just try treating the shared handle object as an XPC object? For example, storing it in an XPC dictionary or the like? – Ken Thomases Nov 13 '18 at 17:44
  • If I cast an MTLSharedTextureHandle to an `xpc_object_t` and then call `xpc_dictionary_set_value`, it just crashes when `xpc_connection_send_message` is called. The stack trace ends at `_xpc_dictionary_serialize_apply` with an EXC_BAD_ACCESS (code=EXC_I386_GPFLT). – kennyc Nov 13 '18 at 17:54

1 Answer


MTLSharedTextureHandle only works with NSXPCConnection. If you're creating the texture from an IOSurface, you can share the surface instead, which is effectively the same thing. Make sure you are using the same GPU (same id<MTLDevice>) in both processes.
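
A rough sketch of that surface-sharing approach with the C XPC API (the `surface`, `connection`, `message`, and `device` variables are assumed to already exist on their respective sides, and the pixel format is just an example):

```objc
#import <IOSurface/IOSurface.h>
#import <Metal/Metal.h>
#import <xpc/xpc.h>

// Service side: wrap the IOSurface in an XPC object and send it.
xpc_object_t message = xpc_dictionary_create(NULL, NULL, 0);
xpc_dictionary_set_value(message, "surface", IOSurfaceCreateXPCObject(surface));
xpc_connection_send_message(connection, message);

// Client side: look the surface up and wrap it in a texture on the same device.
IOSurfaceRef surface = IOSurfaceLookupFromXPCObject(
    xpc_dictionary_get_value(message, "surface"));
MTLTextureDescriptor *desc =
    [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatBGRA8Unorm
                                                        width:IOSurfaceGetWidth(surface)
                                                       height:IOSurfaceGetHeight(surface)
                                                    mipmapped:NO];
desc.usage = MTLTextureUsageShaderRead;
id<MTLTexture> texture = [device newTextureWithDescriptor:desc
                                                 iosurface:surface
                                                     plane:0];
```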

There is no workaround for MTLSharedEventHandle using public API.

I recommend switching to NSXPCConnection if you can. Unfortunately there isn't a good story for partially changing over using public API; you'll have to do it all at once or split your XPC service into two separate services.
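
If you do switch, passing the handle itself is straightforward. A minimal sketch (the protocol, method, and service name here are made up for illustration; the key point is that a class named directly in the protocol signature and conforming to NSSecureCoding, like MTLSharedTextureHandle, can travel over the connection):

```objc
#import <Foundation/Foundation.h>
#import <Metal/Metal.h>

// Hypothetical protocol for illustration.
@protocol TextureReceiving
- (void)receiveTextureHandle:(MTLSharedTextureHandle *)handle;
@end

// Connecting side (service name is a placeholder):
NSXPCConnection *connection =
    [[NSXPCConnection alloc] initWithServiceName:@"com.example.render-service"];
connection.remoteObjectInterface =
    [NSXPCInterface interfaceWithProtocol:@protocol(TextureReceiving)];
[connection resume];

id<TextureReceiving> proxy = [connection remoteObjectProxy];
// `texture` is assumed to have been created with newSharedTextureWithDescriptor:,
// otherwise newSharedTextureHandle returns nil.
[proxy receiveTextureHandle:[texture newSharedTextureHandle]];
```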

russbishop
  • Thanks for the insight, Russ. If a shared texture is a wrapper around an IOSurface, do I gain anything by rendering to a shared texture in the service and then blitting from that texture in the application, versus rendering to an IOSurface in the service, sending that to the app, wrapping it in a new texture and then blitting from that texture? The only extra step I see is having to wrap the IOSurface with a texture to use a blit encoder on the app side, but that seems to be pretty quick. (I'm using Core Image in the service to render images that will get copied to a texture in the app.) – kennyc Nov 26 '18 at 11:12
  • It shouldn't make much difference; the key is to get it into an IOSurface. If you're rendering for on-screen display somewhere other than `CAMetalLayer`, then I suggest setting the `CALayer.contents` property to the `IOSurface`, then calling `setContentsChanged` whenever the surface is modified. Apply transforms via the layer if needed for on-screen display. That avoids any blitting; the pixels can stay on the GPU the entire time. (This depends on exactly what you're doing though.) – russbishop Nov 26 '18 at 21:45
  • I'm building a custom "Image View" to display images using `MTKView`. I'm using an XPC service to decode the images into a `CIImage`, then using a `CIContext` to render into an `IOSurface`. The surface is then sent to the application where it's wrapped in an `MTLTexture` so that I can blit its contents into a "master" texture. The surface is then recycled. I'm trying to understand the most efficient way to get the rendered image data from the XPC service into a texture on the app side, where I can then read from it in my shader. – kennyc Nov 26 '18 at 22:15
  • I would push all of that into the XPC service. The only thing the app needs is a reference to the surface it needs to put on screen. If you really need to blit then do that in the XPC service. One initial message to get the master surface, then a message each time the XPC service updates the content so the app can poke the layer. – russbishop Dec 01 '18 at 05:27
  • I had played around with putting everything in the service but I could never quite figure out how to properly do all the synchronization. If I'm rendering to a surface in the service, then I eventually need to get that surface over to the app and wrap it in a texture so that I can sample from it in my fragment shader. If I immediately recycle that surface back to the service, then on my next render pass I might not have any data for the fragment shader to sample from. That's why I blit into a "master" texture so that the service always has something to sample from. – kennyc Dec 01 '18 at 09:17
  • @kennyc Oh, I meant move all Metal rendering into the XPC service, including your fragment shader. That may or may not be practical; it depends on what your app is doing. – russbishop Dec 13 '18 at 23:16