I don't think I quite get the Unity rendering engine.
I use a RenderTexture to generate the screenshot (I need to manage it later on):
screenshotRenderTexture = new RenderTexture(screenshot.width, screenshot.height, depthBufferBits, RenderTextureFormat.Default);
screenshotRenderTexture.Create();
RenderTexture currentRenderTexture = RenderTexture.active;
RenderTexture.active = screenshotRenderTexture;
Camera[] cams = Camera.allCameras;
System.Array.Sort(
    cams,
    delegate(Camera cam1, Camera cam2)
    {
        // Comparing directly is easier than writing a float-to-int
        // conversion that won't floor depth deltas under 1 to zero
        // and that handles NaNs correctly
        if (cam1.depth < cam2.depth)
            return -1;
        else if (cam1.depth > cam2.depth)
            return 1;
        else
            return 0;
    }
);
foreach (Camera cam in cams)
{
    // Render every camera, in depth order, into the same target
    cam.targetTexture = screenshotRenderTexture;
    cam.Render();
    cam.targetTexture = null;
}
screenshot.ReadPixels(new Rect(0, 0, textureWidth, textureHeight), 0, 0);
screenshot.Apply();
RenderTexture.active = currentRenderTexture;
However, if depthBufferBits is 0, the render comes out with all kinds of z-buffer bugs (things rendered in the wrong order).
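For what it's worth, the same code renders in the right order as soon as I request a nonzero depth buffer, e.g.:

screenshotRenderTexture = new RenderTexture(
    screenshot.width,
    screenshot.height,
    16, // or 24; any nonzero depth makes the ordering bugs go away
    RenderTextureFormat.Default
);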
I understand what a depth buffer is in a general sense. What I don't understand is this: if the RenderTexture is only used to combine the render results of individual cameras, why does it need a depth buffer at all? How exactly do these abstractions work: does each camera create an image on its own and then hand it to the RenderTexture, or does the camera render straight into the RenderTexture's buffers, depth buffer included? It seems to be the latter, judging by the bugs I'm seeing: the objects rendered in the wrong order are all drawn by the same camera, so the problem is ordering within one camera, not ordering between cameras. At the same time, that seems to contradict the common-sense reading of how these abstractions are structured at the C# level.
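The reduction I mean is essentially this (a sketch; Camera.main here stands in for whichever single camera draws the misordered objects):

Camera cam = Camera.main;
// One camera, same RenderTexture: the ordering bugs are still there,
// which is what makes me think the camera renders into the
// RenderTexture's own (in this case absent) depth buffer
cam.targetTexture = screenshotRenderTexture;
cam.Render();
cam.targetTexture = null;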
And, finally: can I somehow reuse the default depth buffer that is used for normal on-screen rendering? Because an extra 16 bits per pixel on a mobile device is pretty painful.
Update:
Here's what I attempted to do:
screenshotRenderTexture = new RenderTexture(
    screenshot.width,
    screenshot.height,
    0, // no depth buffer this time
    RenderTextureFormat.Default
);
screenshotRenderTexture.Create();
RenderBuffer currentColorBuffer = Graphics.activeColorBuffer;
// Pair the RenderTexture's color buffer with the screen's depth buffer
Graphics.SetRenderTarget(screenshotRenderTexture.colorBuffer, Graphics.activeDepthBuffer);
yield return new WaitForEndOfFrame();
Graphics.SetRenderTarget(currentColorBuffer, Graphics.activeDepthBuffer);
And here's what I got:
SetRenderTarget can only mix color & depth buffers from RenderTextures. You're trying to set depth buffer from the screen.
UnityEngine.Graphics:SetRenderTarget(RenderBuffer, RenderBuffer)
<ScreenshotTaking>c__Iterator21:MoveNext() (at Assets/Scripts/Managers/ScreenshotManager.cs:126)
Why can't it mix the depth buffer from the screen with a color buffer from a RenderTexture?
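The only workaround I can read out of that message is to keep both buffers inside RenderTextures, i.e. allocate a second, depth-only RenderTexture and mix the two. A sketch of what I mean (untested):

RenderTexture depthTexture = new RenderTexture(
    screenshot.width,
    screenshot.height,
    16,
    RenderTextureFormat.Depth // depth-only texture, just to supply a depth buffer
);
depthTexture.Create();
// Per the error message, mixing is allowed when both buffers come from RenderTextures
Graphics.SetRenderTarget(screenshotRenderTexture.colorBuffer, depthTexture.depthBuffer);

But that allocates exactly the extra 16 bits per pixel I was hoping to avoid, so it doesn't really answer the question.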