I am trying to write a shader that reads the whole frame, pixel by pixel, and rewrites the pixels after some calculations. I have looked through some code samples, but most of them were not relevant. Could you give me some hints on how I can read and write pixels in Unity shader programming?
-
somewhat related .. http://stackoverflow.com/questions/34952184/pixel-perfect-shader-in-unity – Fattie Jan 22 '16 at 17:05
3 Answers
If you have the Pro version of Unity, you can achieve this with image (post-processing) effects. All you have to do is implement the OnRenderImage callback on a component attached to a camera, then call Graphics.Blit with a material whose shader does the processing. The shader receives the screen contents as its main texture.
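A minimal sketch of such a component (the class and field names here are placeholders, not from a specific Unity sample):

using UnityEngine;

[RequireComponent(typeof(Camera))]
public class ScreenEffect : MonoBehaviour
{
    // Material whose shader does the per-pixel work.
    public Material effectMaterial;

    // Called after this camera finishes rendering; src holds the frame.
    void OnRenderImage(RenderTexture src, RenderTexture dest)
    {
        if (effectMaterial != null)
            Graphics.Blit(src, dest, effectMaterial); // shader sees src as _MainTex
        else
            Graphics.Blit(src, dest); // pass the frame through unchanged
    }
}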

-
Thank you. Before posting this question I read that article. I had already written code to warp my image and converted it into a C# library (.dll) to add to my project, but I realized this method may not work well, since the shader would have to call the external library for every single pixel (assuming that is even possible). So I finally ended up writing pure Cg code. As I mentioned underneath Strings's answer, I can do the calculation I need in other languages, but I cannot implement it as a shader. I am writing the shader now and, as a brand-new Unity programmer, I would like to share my code here. :) – user1021110 Sep 27 '13 at 16:51
You need texture buffers the size of your frame. Then you render your frame into one of the buffers. Next you write a fragment shader that reads one buffer and writes to the other. Finally, you draw the fragment shader's output on a flat object that covers the screen.

In shader programming you do not work pixel by pixel; you define a function that is applied to a single pixel at a position given as floats from 0 to 1 on each axis (there are three, although you will only use two). That fragment shader is then run for many pixels in parallel, which is how it gets everything done so quickly.

I hope that brief explanation is enough to get you started. Unity fragment shaders are written in Cg, a language halfway between OpenGL's shading language, GLSL, and DirectX's, HLSL. Since all of these high-level languages compile into native instructions on the graphics card, they are fairly similar; there are plenty of Cg samples around, and once you can write Cg you will have no problem reading HLSL and GLSL.
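A minimal sketch of that per-pixel function as a Unity shader (the colour inversion is an arbitrary placeholder effect; _MainTex is the conventional Unity name for the source texture):

Shader "Custom/ReadWritePixels" {
    Properties {
        _MainTex ("Source Frame", 2D) = "white" {}
    }
    SubShader {
        Pass {
            CGPROGRAM
            #pragma vertex vert_img
            #pragma fragment frag
            #include "UnityCG.cginc" // provides vert_img and v2f_img

            sampler2D _MainTex;

            // Runs once per pixel; i.uv is the 0..1 position.
            half4 frag(v2f_img i) : COLOR
            {
                half4 c = tex2D(_MainTex, i.uv);  // read the pixel
                return half4(1 - c.rgb, c.a);     // write a modified pixel
            }
            ENDCG
        }
    }
}

Paired with the OnRenderImage/Graphics.Blit component from the answer above, this runs over every pixel of the frame.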

-
It was very helpful. I have just started reading some samples based on your advice, and I will upload my code here if there is an error. Following your description, I am converting my calculation. Once I read a pixel (its color and position), I want to do some calculations; the main issue is that I want to change not only the color but also the position of the pixel, because I am working on image warping. I am pretty familiar with the mathematical concept of my work, but the implementation is an issue for me. – user1021110 Sep 27 '13 at 16:39
-
To do that you need 2 different shader programs! The first warps a flat grid (which you make yourself, so you can decide how much resolution it has) by moving the vertices in a vertex shader, and then you change the colours with a pixel (fragment) shader; see the sketch after these comments. – Strings Sep 30 '13 at 20:24
-
It is very hard to port standard code that moves and colours pixels in a single phase to a GPU. My pipeline answer might help you understand why: http://stackoverflow.com/a/18931771/2770858 – Strings Sep 30 '13 at 20:26
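A minimal sketch of that two-program idea (the sine-based displacement is an arbitrary placeholder warp, not the keystone math from this question):

Shader "Custom/GridWarp" {
    Properties {
        _MainTex ("Texture", 2D) = "white" {}
        _Strength ("Warp Strength", Range(0,1)) = 0.2
    }
    SubShader {
        Pass {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc" // provides appdata_base

            sampler2D _MainTex;
            float _Strength;

            struct v2f {
                float4 pos : POSITION;
                float2 uv : TEXCOORD0;
            };

            // First program: warp the grid by moving its vertices.
            v2f vert(appdata_base v) {
                v2f o;
                v.vertex.x += _Strength * sin(v.vertex.y * 10.0);
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                o.uv = v.texcoord.xy;
                return o;
            }

            // Second program: change the colours per pixel.
            half4 frag(v2f i) : COLOR {
                return tex2D(_MainTex, i.uv);
            }
            ENDCG
        }
    }
}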
Thank you for your advice; it was really helpful. I finally ended up with the code below for my shader, and now a new problem has come up.

My solution: To solve my keystone problem, I adapted the "wearing glasses" idea: I placed a plane in front of the camera, attached the shader below to it, and then parented the plane to the camera. The problem now is that the shader works very well, but in my VR setup it does not, because I have several cameras and the scene is distorted in one of them (as I want) while the other cameras show a normal scene. Everything is fine until the two scenes intersect; at that point I get a disjoint scene (please forgive me if that is not the correct word). I thought that instead of using this shader on a plane in front of the camera, I should apply it to the camera itself, but the shader does not work when I add it to the camera, although it works perfectly on the plane object. Could you let me know how I can modify this code to be compatible with the camera? I am more than happy to hear your suggestions and ideas beyond my own solution.
Shader "Custom/she1" {
Properties {
top("Top", Range(0,2)) = 1
bottom("Bottom", Range(0,2)) = 1
}
SubShader {
// Draw ourselves after all opaque geometry
Tags { "Queue" = "Transparent" }
// Grab the screen behind the object into _GrabTexture
GrabPass { }
// Render the object with the texture generated above
Pass {
CGPROGRAM
#pragma debug
#pragma vertex vert
#pragma fragment frag
#pragma target 3.0
sampler2D _GrabTexture : register(s0);
float top;
float bottom;
struct data {
float4 vertex : POSITION;
float3 normal : NORMAL;
};
struct v2f {
float4 position : POSITION;
float4 screenPos : TEXCOORD0;
};
v2f vert(data i){
v2f o;
o.position = mul(UNITY_MATRIX_MVP, i.vertex);
o.screenPos = o.position;
return o;
}
half4 frag( v2f i ) : COLOR
{
float2 screenPos = i.screenPos.xy / i.screenPos.w;
float _half = (top + bottom) * 0.5;
float _diff = (bottom - top) * 0.5;
screenPos.x = screenPos.x * (_half + _diff * screenPos.y);
screenPos.x = (screenPos.x + 1) * 0.5;
screenPos.y = 1-(screenPos.y + 1) * 0.5 ;
half4 sum = half4(0.0h,0.0h,0.0h,0.0h);
sum = tex2D( _GrabTexture, screenPos);
return sum;
}
ENDCG
}
}
Fallback Off
}
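For what it's worth, applying an effect like this to the camera itself usually means taking the image-effect route from the accepted answer rather than using a GrabPass: the shader samples the frame through _MainTex, and a script on the camera blits with it. A minimal sketch of the camera-side script, assuming a _MainTex-based variant of the shader above (the class and field names are placeholders):

using UnityEngine;

[RequireComponent(typeof(Camera))]
public class KeystoneEffect : MonoBehaviour
{
    public Shader keystoneShader; // _MainTex-based variant of the shader above
    [Range(0f, 2f)] public float top = 1f;
    [Range(0f, 2f)] public float bottom = 1f;

    Material mat;

    void OnRenderImage(RenderTexture src, RenderTexture dest)
    {
        if (mat == null)
            mat = new Material(keystoneShader);
        // Forward the keystone parameters to the shader.
        mat.SetFloat("top", top);
        mat.SetFloat("bottom", bottom);
        Graphics.Blit(src, dest, mat); // shader reads the frame as _MainTex
    }
}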
