
Imagine you have a chessboard-textured triangle in front of you.

Then imagine you move the camera so that you see the triangle from the side, so that it almost looks like a line.

You will probably see that line as grey, because grey is the average color of the texels that lie along the line of sight across the triangle. The GPU does this all the time.

Now, how is this implemented? Should I sample every texel along that line and average the result to get the same output? Or is there a more efficient way to do it, maybe using mipmaps?

Inuart

1 Answer


It does not matter if you look at the object from the side, front, or back; the implementation remains exactly the same.

The exact implementation depends on the results you need. A typical graphics API such as Direct3D offers many different texture sampling techniques, each with different properties. Have a look at the documentation for a list of common sampling techniques and an explanation of what each one does.
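As a minimal sketch of how you pick one of those techniques in Direct3D 11 (assuming you already have an `ID3D11Device*`; the function name and the choice of wrap addressing are just illustrative), you describe the filter in a sampler state and bind it to your shader:

```cpp
#include <d3d11.h>

// Create two sampler states that differ only in their filter mode.
void CreateBasicSamplers(ID3D11Device* device,
                         ID3D11SamplerState** pointSampler,
                         ID3D11SamplerState** trilinearSampler)
{
    D3D11_SAMPLER_DESC desc = {};
    desc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
    desc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
    desc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
    desc.ComparisonFunc = D3D11_COMPARISON_NEVER;
    desc.MinLOD = 0.0f;
    desc.MaxLOD = D3D11_FLOAT32_MAX;

    // Nearest-texel sampling: cheapest, but shimmers badly when minified.
    desc.Filter = D3D11_FILTER_MIN_MAG_MIP_POINT;
    device->CreateSamplerState(&desc, pointSampler);

    // Trilinear: linear blend within a mip level and between the two nearest
    // mip levels, so distant texels are effectively pre-averaged for you.
    desc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
    device->CreateSamplerState(&desc, trilinearSampler);
}
```

The mipmap chain does the averaging offline: each level is a downscaled copy of the texture, and the GPU simply picks (or blends between) the levels whose texel size best matches the on-screen footprint, instead of averaging many texels per pixel at draw time.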

If you look at objects from an oblique angle, the texture on the triangle may look distorted or overly blurry with most sampling techniques, and anisotropic filtering is often used in these scenarios.
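Enabling it is again just a sampler-state choice; a sketch under the same assumptions as above (existing device, illustrative function name):

```cpp
#include <d3d11.h>

// Anisotropic filtering takes several samples along the long axis of the
// pixel's texture footprint, so oblique views stay sharp instead of smearing.
ID3D11SamplerState* CreateAnisotropicSampler(ID3D11Device* device)
{
    D3D11_SAMPLER_DESC desc = {};
    desc.Filter = D3D11_FILTER_ANISOTROPIC;
    desc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
    desc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
    desc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
    desc.MaxAnisotropy = 16;  // 1-16: higher means better quality, more bandwidth
    desc.ComparisonFunc = D3D11_COMPARISON_NEVER;
    desc.MinLOD = 0.0f;
    desc.MaxLOD = D3D11_FLOAT32_MAX;

    ID3D11SamplerState* sampler = nullptr;
    device->CreateSamplerState(&desc, &sampler);
    return sampler;
}
```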

Miklas