This might be more of a generic graphics programming question, but for now this is within the context of using Apple's Metal framework on macOS.
In NSView's mouseDown:, it's trivial to get the local coordinates of where the mouse-down event took place by simply calling:
NSPoint localPoint = [self convertPoint:event.locationInWindow fromView:nil];
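Getting from that local point to normalized device coordinates at least seems straightforward (a sketch, assuming the drawable fills the view's bounds and the view isn't flipped, so +y already points up as it does in Metal's NDC):

// Map the view-local point into Metal's normalized device coordinates,
// where x and y both run from -1 to +1.
float ndcX = (localPoint.x / self.bounds.size.width)  * 2.0f - 1.0f;
float ndcY = (localPoint.y / self.bounds.size.height) * 2.0f - 1.0f;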
From there, what are the steps required to determine where the mouse down occurred within the context of the rendered scene?
For now, I'm simply rendering a 2D plane in an MTKView. The plane can be scaled, translated, and rotated about the z-axis. I can somewhat brute-force a solution because the scene is so simple, but I'm wondering what the more correct approach is.
It feels as if I would have to duplicate some of the vertex shader logic in my Objective-C code to ensure that all the transforms are correctly applied, but I'm not quite sure how that would work once rotation is involved.
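Concretely, I imagine the rotation case would fall out of inverting the same model matrix the shader uses, something like the following sketch (modelMatrix is a hypothetical name for the translate * rotateZ * scale matrix I already pass to the vertex shader):

#import <simd/simd.h>

// Inverting the model matrix maps an NDC-space point back into the
// plane's model space; rotation is undone along with scale/translation.
// No projection/perspective here, so w stays 1 and no divide is needed.
simd_float4x4 inverseModel = simd_inverse(modelMatrix);
simd_float4 ndcPoint   = simd_make_float4(ndcX, ndcY, 0.0f, 1.0f);
simd_float4 modelPoint = simd_mul(inverseModel, ndcPoint);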
Very few of the Metal tutorials or references talk much about mouse input and how the coordinate systems interact. Any insight would be appreciated.
If the user clicked on the orange plane in the example pictured, how do you determine the normalized coordinates within that specific object? (Here it might be something like [0.8, 0.9].)
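For completeness, my brute-force idea for that last step looks roughly like this (assuming the plane's vertex positions span [-1, 1] in model space, and using modelPoint from the sketch above):

// Remap the unprojected model-space point to [0, 1] within the plane,
// and treat anything outside that range as a miss.
float u = (modelPoint.x + 1.0f) * 0.5f;
float v = (modelPoint.y + 1.0f) * 0.5f;
BOOL hit = (u >= 0.0f && u <= 1.0f && v >= 0.0f && v <= 1.0f);

Is chaining inverse transforms like this the standard approach, or is there a more idiomatic way to do picking in Metal?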