
I have a WebGL 2 app that renders a bunch of point lights with a deferred pipeline. I would like to port this to A-Frame for use with an Oculus Rift S.

My questions relate only to rendering. I know next to nothing about VR-specific rendering, other than the fact that an image is rendered for each eye and the pair is passed through a distortion filter. I see there are components (last updated quite a while ago) that provide this functionality. My pipeline is written with a low-level WebGL library, and I do not want to port it to some other component (for performance and compatibility reasons, plus my own vanity).

I would also like to avoid as much direct integration between this pipeline and three.js as possible. Right now I have a basic three.js scene with a full-screen quad textured with the output of my deferred renderer (a rough sketch follows below). I assume that leaving this as-is and shoving the scene into A-Frame wouldn't render properly on a Rift, so how would I go about rendering one full-screen quad per eye in A-Frame? Are the camera frustums and views for each eye easily exposed in A-Frame? Is my thinking way off entirely?
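
For reference, this is roughly what the current non-VR setup looks like. `deferredOutputTexture` is a placeholder name for a THREE.Texture wrapping my pipeline's final color buffer, and the exact three.js calls may differ slightly from my real code:

```javascript
// Rough sketch of the current non-VR setup: a single full-screen quad
// showing the deferred renderer's output.
// `deferredOutputTexture` is a placeholder for a THREE.Texture that
// wraps my pipeline's final color buffer.
const scene = new THREE.Scene();
const camera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0, 1);

const quad = new THREE.Mesh(
  new THREE.PlaneBufferGeometry(2, 2),
  new THREE.MeshBasicMaterial({ map: deferredOutputTexture })
);
scene.add(quad);

function renderFrame(renderer) {
  // My WebGL pipeline renders into deferredOutputTexture first,
  // then three.js draws the textured quad over the whole viewport.
  renderer.render(scene, camera);
}
```

What I can't picture is the per-eye equivalent in A-Frame: ideally I would get each eye's view and projection matrices (three.js seems to group the per-eye cameras into an ArrayCamera in VR mode), feed them into my own G-buffer passes, and then draw a textured quad into each eye's viewport, but I don't know how, or whether, A-Frame exposes any of that.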

Thanks for any help. I've looked through the A-Frame repository on GitHub for some time now and cannot find any clear place to start.

Will Snyder
