Premise
I'm currently developing a graphical application and, due to unforeseen limitations with the framework I'm using, I need to convert my TextureCube textures into a Texture2DArray with 6 slices.
While converting from one format to the other is not really an issue, sampling from the Texture2DArray using a 3D direction vector is the harder challenge.
Question
Given:
- A 3D vector previously used to sample a TextureCube
- A Texture2DArray representation of the same TextureCube
What is the most efficient way to write a (HLSL) shader function that returns a float3 coord, in which:
- coord.z is the index of the correct slice within the Texture2DArray
- coord.xy are the uv coordinates used to sample the selected slice
Current Progress
This is what I have worked out so far:
- I know from this link that the vector component with the largest magnitude determines which face to select.
- Once a face is selected using that component, I think the remaining two components, divided by the largest magnitude, can be remapped to uv coordinates.
- I'm not sure how to treat cases in which two, or all three, of the vector components have the same magnitude (for instance (0.5, 0.5, 0.5)). I assume that, if the texture is reasonably continuous from pixel to pixel, picking one face rather than the other should not yield a noticeable difference.
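To make the question concrete, here is a rough HLSL sketch of the approach described above (the function name `CubeDirToArrayCoord` is mine; the face/axis mapping follows the standard cubemap convention, and ties are broken by the comparison order, so they pick one face deterministically). I don't claim this is the most efficient formulation, which is precisely what I'm asking about:

```hlsl
// Maps a cubemap sampling direction to float3(u, v, sliceIndex) for a
// Texture2DArray whose slices use the DX11 face order +X,-X,+Y,-Y,+Z,-Z.
float3 CubeDirToArrayCoord(float3 v)
{
    float3 a = abs(v);
    float faceIndex;
    float ma;   // magnitude of the dominant axis
    float2 sc;  // raw (s, t) in [-ma, ma] before remapping

    if (a.z >= a.x && a.z >= a.y)
    {
        // +Z (slice 4) or -Z (slice 5)
        faceIndex = v.z < 0.0 ? 5.0 : 4.0;
        ma = a.z;
        sc = float2(v.z < 0.0 ? -v.x : v.x, -v.y);
    }
    else if (a.y >= a.x)
    {
        // +Y (slice 2) or -Y (slice 3)
        faceIndex = v.y < 0.0 ? 3.0 : 2.0;
        ma = a.y;
        sc = float2(v.x, v.y < 0.0 ? -v.z : v.z);
    }
    else
    {
        // +X (slice 0) or -X (slice 1)
        faceIndex = v.x < 0.0 ? 1.0 : 0.0;
        ma = a.x;
        sc = float2(v.x < 0.0 ? v.z : -v.z, -v.y);
    }

    // Remap from [-1, 1] to [0, 1].
    float2 uv = sc / ma * 0.5 + 0.5;
    return float3(uv, faceIndex);
}
```

Note that dividing by `ma` means the input does not even need to be normalized, but the branches are what worry me performance-wise.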
Final Note
Please assume that the cubemap faces are sorted within the array using the default DX11 ordering, i.e. +X, -X, +Y, -Y, +Z, -Z. I also want to stress that efficiency is extremely important.