I am attempting a simple approach to post-processing in A-Frame (without using the three.js EffectComposer and related classes, for simplicity). The approach seems standard:
- create a new render target
- render the scene into the target's texture
- create a secondary scene containing a single quad, with a custom shader material that alters that texture in some way
- use an orthographic camera to render the secondary scene into the main window
I have set this up with an A-Frame component as follows (with the goal of working in VR, as reflected in the code in the tick function):
```javascript
AFRAME.registerComponent("color-shift", {
    init: function () {
        // render the scene to this texture
        this.renderTarget0 = new THREE.WebGLRenderTarget(1024, 1024);
        this.renderTarget0.texture.magFilter = THREE.NearestFilter;
        this.renderTarget0.texture.minFilter = THREE.NearestFilter;
        this.renderTarget0.texture.generateMipmaps = false;

        let texture = this.renderTarget0.texture;

        let postMaterial = new THREE.ShaderMaterial({
            uniforms: {
                tex: { value: texture },
            },
            vertexShader: `
                varying vec2 vUv;
                void main()
                {
                    vUv = uv;
                    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
                }
            `,
            fragmentShader: `
                varying vec2 vUv;
                uniform sampler2D tex;
                void main()
                {
                    vec4 color = texture2D(tex, vUv);
                    gl_FragColor = vec4(color.g, color.b, color.r, 1);
                }
            `
        });

        // separate scene containing a single quad, for texture post-processing
        const quad = new THREE.Mesh(new THREE.PlaneGeometry(2, 2), postMaterial);
        this.rtScene = new THREE.Scene();
        this.rtScene.add(quad);
        this.rtCamera = new THREE.OrthographicCamera();
        this.rtCamera.position.z = 0.5;
    },

    tick: function (t, dt) {
        const renderer = this.el.sceneEl.renderer;

        // store current renderer state
        const currentRenderTarget = renderer.getRenderTarget();
        const currentXrEnabled = renderer.xr.enabled;
        const currentShadowAutoUpdate = renderer.shadowMap.autoUpdate;

        // temporarily disable XR and shadow updates
        renderer.xr.enabled = false;
        renderer.shadowMap.autoUpdate = false;

        // apply post-processing effects to the previously rendered target texture,
        // displayed on a quad, rendered to the screen
        renderer.setRenderTarget(null);
        renderer.render(this.rtScene, this.rtCamera);

        // restore renderer state
        renderer.xr.enabled = currentXrEnabled;
        renderer.shadowMap.autoUpdate = currentShadowAutoUpdate;

        // render the scene onto the texture the next time it renders
        renderer.setRenderTarget(this.renderTarget0);
    }
});
```
The complete source code is at: https://github.com/stemkoski/A-Frame-Examples/blob/master/post-processing-test.html and a live version is at https://stemkoski.github.io/A-Frame-Examples/post-processing-test.html.
This example works as expected on desktop, producing a hue shift, but when entering VR mode the screen is completely black.
I am not sure why this happens; the details of how rendering works in VR mode are a little confusing to me. My understanding is that in VR mode the camera is actually an ArrayCamera containing two perspective cameras, one per eye, and that the renderer's render method renders the scene twice, once from each camera, into a viewport corresponding to half of a render target texture - but I may very well be mistaken. I would like to capture the results of the render while in VR mode and then apply simple post-processing to it, like in the shader above. How can I fix the code above to accomplish this?
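To make my mental model concrete: I am assuming a side-by-side stereo layout, where each eye's viewport covers one horizontal half of the target texture. The helper below is purely illustrative (`eyeViewport` is a name I made up, not a three.js API) and just computes the viewport rectangle I believe each eye would render into:

```javascript
// Hypothetical helper illustrating the assumed side-by-side eye layout:
// eye 0 (left) renders into the left half of the target texture,
// eye 1 (right) into the right half.
function eyeViewport(targetWidth, targetHeight, eyeIndex) {
    const halfWidth = targetWidth / 2;
    return {
        x: eyeIndex * halfWidth, // left eye at x = 0, right eye at x = halfWidth
        y: 0,
        width: halfWidth,
        height: targetHeight
    };
}

console.log(eyeViewport(2048, 1024, 0)); // { x: 0, y: 0, width: 1024, height: 1024 }
console.log(eyeViewport(2048, 1024, 1)); // { x: 1024, y: 0, width: 1024, height: 1024 }
```

If this layout assumption is wrong (for example, if each eye gets its own layer of a texture array instead of half of one texture), that might explain why sampling the texture with plain `vUv` coordinates fails in VR.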