This is a follow-up to Matt's previous question about camera orientation. I'm working with him on a JavaScript interface to a Python analysis code for 3D hydro simulations.
We've successfully used XTK to build a 3D model of the mesh structure in our simulation. The resulting demo looks a lot like the simple cube demo on the XTK website, so advice based on that demo should carry over directly to our use case.
We were able to infer the view matrix at runtime from the XTK camera object. After a lot of poking and some trial and error, we figured out that the view matrix is really (in OpenGL nomenclature) the model-view matrix: it combines the camera's rotation and translation with the orientation and position of the model the camera is looking at.
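In case it helps, this is how we are currently pulling the matrix out at runtime. The `camera.view` property name and the flat, column-major 16-element layout are things we inferred by inspecting the object, so they may not be the intended public API:

```javascript
// Grab the current model-view matrix from the XTK camera.
// 'renderer' is our X.renderer3D instance. We found camera.view by
// poking at the object at runtime, so both the property name and the
// WebGL-style column-major 16-element layout are assumptions on our part.
function currentModelView(renderer) {
  return renderer.camera.view;   // flat 4x4 matrix, updated as we interact
}
```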
We are trying to infer the orientation of the camera relative to the (from our point of view) fixed model as we click, drag, and zoom. In the end, we'd like to save a set of keyframes from which we can generate a camera path, which will eventually be exported to Python to make a 3D volume-rendering movie of the simulation data. We've tried a number of approaches but have been unable to invert the model-view matrix to recover the camera's orientation with respect to the model.
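To be concrete, below is the kind of decomposition we have been attempting. It assumes the matrix is stored column-major (the usual WebGL convention), that it contains only rotation and translation (no scale or shear), and that the matrix we feed it really is the camera's model-view matrix; any of those assumptions may be where we are going wrong.

```javascript
// A sketch of decomposing the model-view matrix into camera position,
// look, and up in model coordinates. Assumptions: m is a 16-element
// array in column-major (WebGL) order and contains only rotation +
// translation, so the upper-left 3x3 block R is orthonormal.
function decomposeModelView(m) {
  // The rows of R are the camera's right/up/back axes expressed in
  // model coordinates.
  var right = [m[0], m[4], m[8]];
  var up    = [m[1], m[5], m[9]];
  var back  = [m[2], m[6], m[10]];          // camera looks along -back
  var look  = [-back[0], -back[1], -back[2]];

  // Translation column.
  var t = [m[12], m[13], m[14]];

  // Camera position in model coordinates: C = -R^T * t
  // (valid because R is orthonormal when there is no scale).
  var position = [
    -(m[0] * t[0] + m[1] * t[1] + m[2]  * t[2]),
    -(m[4] * t[0] + m[5] * t[1] + m[6]  * t[2]),
    -(m[8] * t[0] + m[9] * t[1] + m[10] * t[2])
  ];

  return { position: position, look: look, up: up, right: right };
}

// Hypothetical usage when recording a keyframe:
// keyframes.push(decomposeModelView(renderer.camera.view));
```

The plan would be to call this each time we record a keyframe and dump the resulting position/look/up triples to JSON for the Python side to turn into a camera path.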
Do you have any insight into how this might be done? Is our inference about the view matrix correct, or is it actually tracking something different from what I described above?
From our point of view, it would be really helpful if XTK kept track of the camera's up, look, and position vectors with respect to the model, so that we could simply query those values and use them directly.
Thanks very much for your help with this and for making your visualization toolkit freely available.