There's nothing special about UIResponder -- it's indeed possible to do what you suggest. (Though maybe not as easy as you think.)
In effect, UIResponder is an abstract superclass. It doesn't (AFAIK) implement event dispatching itself; rather, it defines an interface for classes that handle events. How events get handled is up to the subclass implementation. UIApplication, UIWindow, and UIView implement responder methods to pass touch events along to the view under the touch -- presumably, UIView calls its own hitTest:withEvent: method to find the subview under the touch and forward the event to it. Sprite Kit extends the responder pattern into its node-tree model, so SKView likely implements the UIResponder methods by calling its scene's convertPointFromView: and nodeAtPoint: methods to find the node under a touch before passing the event to that node.
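For illustration only, here's a rough sketch of what that kind of forwarding might look like if you wrote it yourself in an SKView subclass -- this isn't Apple's actual implementation, and Sprite Kit already does this for you, but it shows the general shape:

```objc
#import <SpriteKit/SpriteKit.h>

// Hypothetical subclass, purely to illustrate the forwarding idea.
@interface MyForwardingSKView : SKView
@end

@implementation MyForwardingSKView

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    // Convert from view coordinates to scene coordinates...
    CGPoint scenePoint = [self.scene convertPointFromView:[touch locationInView:self]];
    // ...then hit-test the node tree and hand the event to the node under the touch.
    SKNode *target = [self.scene nodeAtPoint:scenePoint];
    [target touchesBegan:touches withEvent:event];
}

@end
```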
If you implement your own UIView (or GLKView, or whatever kind of view) subclass, you can extend the responder pattern to include your custom objects as well. If you're drawing a 3D scene with OpenGL ES and have Objective-C classes that represent the elements of that scene, feel free to have those classes inherit from UIResponder.
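For example, a scene-object class might look something like this (the class name and properties are made up for illustration):

```objc
#import <UIKit/UIKit.h>
#import <GLKit/GLKit.h>

// Hypothetical scene object -- inheriting from UIResponder lets it take part
// in the responder pattern the way a view or a Sprite Kit node does.
@interface MySceneObject : UIResponder

@property (nonatomic) GLKVector3 position;   // position in world space
@property (nonatomic) GLKVector3 boundsMin;  // axis-aligned bounding box, handy for hit testing later
@property (nonatomic) GLKVector3 boundsMax;

@end

@implementation MySceneObject

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    // Respond to a touch on this object; call super instead if you want the
    // event to continue up the responder chain unhandled.
    NSLog(@"%@ was touched", self);
}

@end
```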
Just doing that won't magically forward touch events to those objects, though -- just like UIView does hit testing to forward events to subviews and SKView does hit testing to forward events to nodes, your view will need to do hit testing to determine which of your custom objects should receive a touch event (and then call the appropriate responder method on that object).
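In your view subclass, that forwarding might look roughly like this -- MyGLView and hitTestSceneObjectAtPoint: are placeholders for your own view class and whatever hit-testing method you end up writing (see the two approaches below):

```objc
// Hypothetical GLKView subclass that owns the scene objects.
@implementation MyGLView

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint viewPoint = [[touches anyObject] locationInView:self];
    // Placeholder: use color picking or ray casting (see below) to find the object.
    MySceneObject *hitObject = [self hitTestSceneObjectAtPoint:viewPoint];
    if (hitObject) {
        [hitObject touchesBegan:touches withEvent:event];
    } else {
        [super touchesBegan:touches withEvent:event]; // nothing hit; handle it as usual
    }
}

@end
```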
How to do hit testing in your OpenGL ES view is up to you... you'll have to find a method that works well for the needs of your app. Hit testing (aka selection or picking) is one of the big problem areas in 3D programming, so there's a lot written about it -- do some searching and you'll find plenty of recommendations. Here's a quick overview of the two most common approaches:
Re-render your scene to an offscreen framebuffer using a special fragment shader (or a reconfigured GLKBaseEffect) that outputs object IDs instead of pixel colors, then use glReadPixels to find the pixel under the touch. (Multiple render targets in OpenGL ES 3.0 might help with performance.) Reading from GPU memory using glReadPixels is slow, so read only the point under the touch (or a small area around it if you want to allow some error margin) to keep performance up.
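A minimal sketch of the readback step might look like this, assuming you've already re-rendered into an offscreen framebuffer where each object is drawn in a flat color encoding its ID (the encoding here is made up; use whatever your picking shader writes out):

```objc
// In your GLKView subclass. Assumes the picking framebuffer is currently bound
// and uses an RGBA8 color attachment. OpenGL's origin is the lower-left corner,
// so flip the touch's y coordinate.
- (NSUInteger)objectIDAtViewPoint:(CGPoint)point {
    GLint x = (GLint)(point.x * self.contentScaleFactor);
    GLint y = (GLint)((self.bounds.size.height - point.y) * self.contentScaleFactor);

    GLubyte pixel[4] = {0};
    glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);

    // Hypothetical encoding: red channel holds the object ID, 0 means background.
    return pixel[0];
}
```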
If you know where the elements of your scene are in its 3D coordinate space, you can use the GLKMathUnproject function to convert the touch location -- a point in 2D screen space -- to a line in your scene's 3D world space, then see where that line intersects your scene geometry (or the bounding boxes for your scene geometry) to figure out which object got touched. (This approach is called ray casting.)
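Here's a rough sketch of the unproject step. The modelViewMatrix and projectionMatrix properties are assumed to be whatever matrices you rendered with; the actual ray/geometry intersection test is left out:

```objc
// In your GLKView subclass: build a ray in world space from a touch point by
// unprojecting the near and far ends of the view frustum at that screen location.
- (void)rayFromViewPoint:(CGPoint)point
                  origin:(GLKVector3 *)origin
               direction:(GLKVector3 *)direction {
    int viewport[4] = {0, 0,
                       (int)(self.bounds.size.width * self.contentScaleFactor),
                       (int)(self.bounds.size.height * self.contentScaleFactor)};
    float x = point.x * self.contentScaleFactor;
    float y = viewport[3] - point.y * self.contentScaleFactor; // flip y for GL conventions

    bool success = false;
    GLKVector3 nearPoint = GLKMathUnproject(GLKVector3Make(x, y, 0.0f),
                                            self.modelViewMatrix, self.projectionMatrix,
                                            viewport, &success);
    GLKVector3 farPoint  = GLKMathUnproject(GLKVector3Make(x, y, 1.0f),
                                            self.modelViewMatrix, self.projectionMatrix,
                                            viewport, &success);

    *origin = nearPoint;
    *direction = GLKVector3Normalize(GLKVector3Subtract(farPoint, nearPoint));
    // Intersect this ray with your objects' bounding boxes to find the one that was touched.
}
```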
If you extend the responder pattern into your own custom object architecture, you'll want to make sure your classes forward events up the responder chain appropriately. Read Event Handling Guide for iOS for details.
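One concrete piece of that: UIResponder's nextResponder returns nil by default, so if you want unhandled events to keep moving up the chain from your custom objects, override it to return the containing object or the view. Something along these lines, where parentObject and hostView are hypothetical properties of the scene-object class sketched above:

```objc
// In MySceneObject: hook the object into the responder chain.
- (UIResponder *)nextResponder {
    // Unhandled events bubble up to the containing scene object, or to the
    // view that hosts the scene when this object has no parent.
    return self.parentObject ? self.parentObject : self.hostView;
}
```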