
I've mocked up what I'm trying to accomplish in the image below: pinching the pixels in toward the center of an AR marker so that when I overlay AR content, the marker is less noticeable.

I'm looking for some examples or tutorials I can reference to start learning how to write a shader that distorts the texture, but I'm coming up with nothing.

What's the best way to accomplish this?

[Mockup image: the camera feed pinched in toward the center of an AR marker]

nwales
  • This can be done (I've got a shader around...somewhere), but I'm not sure that it's going to make the marker "less noticeable." – Draco18s no longer trusts SE Nov 25 '19 at 17:28
  • Thank you. I agree. This is actually an oversimplification of exactly what I am doing, but understanding how the above works will help me apply it to other use cases. – nwales Nov 25 '19 at 19:03

1 Answer


This can be achieved using GrabPass.

From the manual:

GrabPass is a special pass type - it grabs the contents of the screen where the object is about to be drawn into a texture. This texture can be used in subsequent passes to do advanced image based effects.

The way distortion effects work is basically that you render the contents of the GrabPass texture on top of your mesh, but with its UVs distorted. A common way of doing this (for effects such as heat distortion or shockwaves) is to render a billboarded plane with a normal map on it, where the normal map controls how much the UVs for the background sample are distorted. This works by transforming the normal from world space to screen space, multiplying it by a strength value, and applying the result as a UV offset. There is a good example of such a shader here. Technically, you can also use any mesh and its vertex normals for the displacement in a similar way.
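To illustrate, here is a minimal sketch of that kind of GrabPass shader for the built-in render pipeline. The names _BackgroundTex and _Strength are mine, and for brevity it uses the tangent-space normal's XY directly as the offset rather than doing the full world-to-screen transform described above:

    Shader "Custom/GrabDistortNormalMap"
    {
        Properties
        {
            _NormalMap ("Normal Map", 2D) = "bump" {}
            _Strength ("Distortion Strength", Range(0, 0.2)) = 0.05
        }
        SubShader
        {
            Tags { "Queue" = "Transparent" }

            // Copies what is on screen behind this object into _BackgroundTex.
            GrabPass { "_BackgroundTex" }

            Pass
            {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #include "UnityCG.cginc"

                sampler2D _BackgroundTex;
                sampler2D _NormalMap;
                float4 _NormalMap_ST;
                float _Strength;

                struct v2f
                {
                    float4 pos : SV_POSITION;
                    float2 uv : TEXCOORD0;
                    float4 grabPos : TEXCOORD1;
                };

                v2f vert (appdata_base v)
                {
                    v2f o;
                    o.pos = UnityObjectToClipPos(v.vertex);
                    o.uv = TRANSFORM_TEX(v.texcoord, _NormalMap);
                    // UV for sampling the grabbed screen texture.
                    o.grabPos = ComputeGrabScreenPos(o.pos);
                    return o;
                }

                fixed4 frag (v2f i) : SV_Target
                {
                    // Use the tangent-space normal's XY as the offset directly
                    // (a full version would transform it to view space first).
                    float3 n = UnpackNormal(tex2D(_NormalMap, i.uv));
                    i.grabPos.xy += n.xy * _Strength * i.grabPos.w;
                    return tex2Dproj(_BackgroundTex, UNITY_PROJ_COORD(i.grabPos));
                }
                ENDCG
            }
        }
    }

Multiplying the offset by grabPos.w before tex2Dproj's perspective divide keeps the displacement consistent in screen space.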

Apart from normal-mapped planes, another way of achieving this effect is to pass the screen-space position of the tracker into the shader using Shader.SetGlobalVector. Then, inside your shader, you can calculate the vector between your fragment and that point and use it to offset the UV, possibly running it through some remap function (like squaring the distance). For instance, you can use float2 uv_displace = normalize(delta) * saturate(1 - length(delta)).
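A minimal sketch of that variant follows. It assumes a script sets the point every frame, e.g. Shader.SetGlobalVector("_PinchPoint", cam.WorldToViewportPoint(tracker.position)), where cam and tracker are placeholder names; using viewport coordinates keeps xy in the same 0-1 range as the ComputeScreenPos output used below:

    Shader "Custom/PinchTowardPoint"
    {
        Properties
        {
            _PinchStrength ("Pinch Strength", Range(0, 0.5)) = 0.1
        }
        SubShader
        {
            Tags { "Queue" = "Transparent" }

            GrabPass { "_BackgroundTex" }

            Pass
            {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #include "UnityCG.cginc"

                sampler2D _BackgroundTex;
                float4 _PinchPoint;   // set globally from C#, xy in 0-1 viewport space
                float _PinchStrength;

                struct v2f
                {
                    float4 pos : SV_POSITION;
                    float4 screenPos : TEXCOORD0;
                };

                v2f vert (appdata_base v)
                {
                    v2f o;
                    o.pos = UnityObjectToClipPos(v.vertex);
                    o.screenPos = ComputeScreenPos(o.pos);
                    return o;
                }

                fixed4 frag (v2f i) : SV_Target
                {
                    // Perspective divide gives 0-1 viewport coordinates.
                    float2 screenUV = i.screenPos.xy / i.screenPos.w;

                    // Vector from the pinch point to this fragment.
                    float2 delta = screenUV - _PinchPoint.xy;
                    float dist = max(length(delta), 1e-5);

                    // The falloff from above: strongest at the point,
                    // fading out as the fragment gets farther away.
                    float2 uv_displace = (delta / dist) * saturate(1 - dist);

                    // Sampling farther from the point pulls the surrounding
                    // pixels inward, producing the pinch. (This glosses over
                    // the platform-dependent flip ComputeGrabScreenPos handles.)
                    return tex2D(_BackgroundTex, screenUV + uv_displace * _PinchStrength);
                }
                ENDCG
            }
        }
    }

Note that _PinchPoint is deliberately not declared in the Properties block: a per-material property would override the value set globally from script.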

If you want to control exactly how and when this effect is applied, give the shader ZTest and ZWrite set to Off, and set its render queue so it draws after the background but before your tracker content.
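In ShaderLab, that setup might look like the following; the queue offset "Transparent-100" is just an example slot, so pick one that lands between your background and your tracker content:

    SubShader
    {
        // Render after the camera background but before the AR overlay.
        Tags { "Queue" = "Transparent-100" }

        ZTest Off   // draw regardless of what is in the depth buffer
        ZWrite Off  // and don't occlude anything drawn later

        GrabPass { "_BackgroundTex" }

        // ... distortion pass goes here ...
    }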

For AR apps, you can likely avoid the performance overhead of GrabPass by using the camera background texture instead of a GrabPass texture. Try looking inside your camera background script to see how it passes the camera texture to the shader, and replicate that.
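As a sketch of that idea: if a script binds the camera feed to a global texture, the distortion shader can sample it directly and skip the GrabPass screen copy. The wiring here is hypothetical; _CameraFeedTex is a made-up name, the actual texture depends on your AR SDK's background renderer, and you may also need to apply the SDK's display transform to the UVs:

    // Hypothetical: a C# script calls
    // Shader.SetGlobalTexture("_CameraFeedTex", cameraTexture);
    // with whatever texture your AR background renderer exposes.
    sampler2D _CameraFeedTex;

    fixed4 frag (v2f i) : SV_Target
    {
        float2 screenUV = i.screenPos.xy / i.screenPos.w;
        float2 delta = screenUV - _PinchPoint.xy;
        float dist = max(length(delta), 1e-5);
        float2 uv_displace = (delta / dist) * saturate(1 - dist);

        // Same pinch math as before, but reading the camera feed directly
        // instead of a grabbed copy of the screen.
        return tex2D(_CameraFeedTex, screenUV + uv_displace * _PinchStrength);
    }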

Here are two videos demonstrating how GrabPass works:

https://www.youtube.com/watch?v=OgsdGhY-TWM

https://www.youtube.com/watch?v=aX7wIp-r48c

Kalle Halvarsson
  • Thank you, this is very helpful. I'm stuck on 'calculate the vector between your fragment and the object': I can't figure out how or where to use my _PinchPoint vector, set via SetGlobalVector("_PinchPoint", randomGameObject.transform.position), in the frag function. – nwales Dec 03 '19 at 20:27
  • Shader.SetGlobalVector is used in a C# script in Unity, which you can attach to your tracker. You can use ``Camera.WorldToScreenPoint(transform.position)`` to convert the tracker's position to screen space. You need to define ``float4 _PinchPoint`` inside your shader, and use ComputeScreenPos(o.vertex), where o.vertex is the clip-space vertex position, to calculate the fragment's screen position. Subtract the positions to get your displacement vector. – Kalle Halvarsson Dec 03 '19 at 21:35
  • By "subtract the positions", i mean subtract _PinchPoint from i.screenPos (which you can pass from the vertex shader). You then get a vector pointing from the fragment position to the pinch point. The X and Y coordinates of the resulting vector are the delta coordinates relative to the screen, while Z is the difference in depth. Use something like the formula i described in the answer to convert this delta into a UV offset. – Kalle Halvarsson Dec 04 '19 at 13:35
  • Thank you. This is working for me - https://gist.github.com/natwales/394c8218500d2891f2509384f018ab10 – nwales Dec 04 '19 at 20:33
  • My UV offset function is a bit different from the one proposed above, but it does seem to work to pinch or inflate UVs around a fixed point. Is there a better way? Next step is to get this to work against more complex shapes vs. collapsing into a single point... – nwales Dec 04 '19 at 20:37
  • Great! Yes, it is possible to do that; have a look at the GitHub project I linked to in my answer. Basically, say you have a mesh with a normal map: you can transform the normal from tangent space into world space, and then from world space into view space. This vector is now your displacement vector. In your case, you can invert the normal to get a collapsing effect (see the sketch below). For this, you won't need to set any variables from script. It's important to understand, though, that any normals pointing directly at the screen will not contribute to displacement. – Kalle Halvarsson Dec 05 '19 at 09:16
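To make that last comment concrete, here is a sketch of the tangent-to-view-space step as fragment shader code. It assumes the v2f struct carries a worldNormal and a worldTangent built in the vertex shader with UnityObjectToWorldNormal(v.normal) and UnityObjectToWorldDir(v.tangent.xyz), keeping the sign in the tangent's w component; variable names are mine:

    // Sample the tangent-space normal from the normal map.
    float3 n_ts = UnpackNormal(tex2D(_NormalMap, i.uv));

    // Tangent space -> world space, using the interpolated basis vectors.
    float3 bitangent = cross(i.worldNormal, i.worldTangent.xyz) * i.worldTangent.w;
    float3 n_ws = normalize(n_ts.x * i.worldTangent.xyz +
                            n_ts.y * bitangent +
                            n_ts.z * i.worldNormal);

    // World space -> view space; xy is now a screen-aligned direction.
    float3 n_vs = mul((float3x3)UNITY_MATRIX_V, n_ws);

    // Negate to pull the background inward (collapse) rather than push it out.
    // As noted above, normals facing the camera head-on contribute almost nothing.
    float2 uv_displace = -n_vs.xy * _Strength;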