
I'm observing a strange phenomenon with my OpenGL program, which is written in C# with OpenTK against the core profile. When displaying Mandelbrot data from a heightmap with ~1M vertices, the performance differs depending on the scale value of my view matrices (the projection is orthographic, so yes, I need scale). The data is rendered using VBOs, and the render process includes lighting and shadow maps.

My only guess is that something in the shader "errors" at low scale values and some error-handling path kicks in. Any hints for me?

Examples:

[Screenshots: Example 1, Example 2]

freakinpenguin
  • Maybe you should post this question on the [Game Development](http://gamedev.stackexchange.com/) site for the very best responses. I'm not saying there aren't passionate programmers on this site who could help you, just that maybe you'd have a better chance of success there. For what it's worth, I remember something like: it is best to keep all of your world objects within a [2 x 2 x 2] box. That is, the lowest coordinate on any dimension should be -1 and the highest should be 1 for best results (there is a matrix sketch after these comments). That should be your "troposphere". You can put the skybox (if any) outside that "troposphere". – Eduard Dumitru Sep 19 '13 at 09:09
  • Well that might be one reason. The values are in range 0:1024 so I'll resize them and try again. Is there any possibility to move the question to game development? – freakinpenguin Sep 19 '13 at 09:11
  • I guess this could be of help: [What is migration and how does it work?](http://meta.stackexchange.com/questions/10249/what-is-migration-and-how-does-it-work). I am not entitled to perform the migration since I do not yet hold 3000 points. Anyway, maybe the destination site already has such a question; look it up first. – Eduard Dumitru Sep 19 '13 at 09:35
  • Thanks for the link, but unfortunately GameDevelopment is not in the list for migration. I created an "other"-flag so maybe a moderator can help. – freakinpenguin Sep 19 '13 at 09:39
  • In the end you could just restate your question there and delete it here. – Eduard Dumitru Sep 19 '13 at 09:42
  • Makes perfect sense to me. Overdraw becomes much more expensive the more screen real estate your geometry covers. Have you tried a depth-only pre-pass? I guarantee that if you replace your fragment shader with something trivial like `gl_FragColor = gl_FragCoord;` your framerate will shoot through the roof (it will also look truly bizarre, but that is another matter altogether). Your fragment shader is simply more expensive when more of the geometry covers the screen (more coverage --> more sampled fragments). An early-Z pass can mitigate this partially; there is a shader-swap sketch just below. – Andon M. Coleman Sep 19 '13 at 09:42
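
For what the [2 x 2 x 2] suggestion above would look like in practice, here is a minimal sketch, assuming the heightmap coordinates span 0..1024 on each axis as stated in the comments; the class and method names are made up for illustration:

```csharp
using OpenTK;

class NormalizeSketch
{
    // Map the 0..1024 heightmap range into the [-1, 1] box via the model
    // matrix, so the vertex data itself stays untouched. With OpenTK's
    // row-vector convention, the left-most matrix in a product applies first.
    static Matrix4 BuildModelMatrix()
    {
        return Matrix4.CreateTranslation(-512f, -512f, -512f) // center 0..1024 on the origin
             * Matrix4.CreateScale(1f / 512f);                // squeeze [-512, 512] into [-1, 1]
    }
}
```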
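
The shader-swap test from the last comment might look like this in OpenTK; this is a sketch only, and since the question uses the core profile, an explicit `out` variable stands in for `gl_FragColor`:

```csharp
using OpenTK.Graphics.OpenGL;

class FragmentBoundTest
{
    // Trivial fragment shader: if linking this in place of the real one makes
    // the frame rate shoot up, the bottleneck is fragment work.
    const string TrivialFragmentSource = @"
        #version 330 core
        out vec4 color;
        void main() { color = gl_FragCoord; }";

    public static int CompileTrivialFragmentShader()
    {
        int shader = GL.CreateShader(ShaderType.FragmentShader);
        GL.ShaderSource(shader, TrivialFragmentSource);
        GL.CompileShader(shader);
        return shader; // attach to your program and relink in place of the real shader
    }
}
```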

1 Answer


There is nothing unusual about this at all. At lower scale values, your mesh does not cover a great deal of the screen so it does not produce very many fragments. At larger scales, the entire screen is covered by your mesh and worse still, overdraw becomes a huge factor.

You are fragment bound in this scenario: reducing the complexity of your fragment shader should help, and a Z pre-pass to reduce overdraw will also help. A minimal sketch of such a pre-pass follows.
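
A sketch of what that pre-pass could look like in OpenTK, assuming `drawScene` stands in for your existing draw calls (ideally pass 1 would also bind a cheap depth-only shader):

```csharp
using OpenTK.Graphics.OpenGL;

class DepthPrePass
{
    public static void Render(System.Action drawScene)
    {
        GL.Enable(EnableCap.DepthTest);
        GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);

        // Pass 1: depth only. Color writes off; every fragment still writes depth.
        GL.ColorMask(false, false, false, false);
        GL.DepthFunc(DepthFunction.Less);
        GL.DepthMask(true);
        drawScene();

        // Pass 2: full shading. LEQUAL lets only the front-most fragment of each
        // pixel through, so occluded fragments never run the expensive shader.
        GL.ColorMask(true, true, true, true);
        GL.DepthFunc(DepthFunction.Lequal);
        GL.DepthMask(false); // depth buffer is already complete; keep it read-only
        drawScene();
    }
}
```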

Andon M. Coleman
  • This makes sense in my head ;-) To tell the truth, I thought the fragment shader would be called equally often, independent of the scale factor of the view matrix. – freakinpenguin Sep 19 '13 at 10:37
  • No, that definitely is not the case. If fragment shaders ran at equal frequency regardless of how small a point was on screen, it would eliminate a lot of aliasing, but we would have to re-think what a fragment really was. Fragments are actually just the building blocks of pixels in the frame buffer, so the number of fragments generated by some object is proportional to the number of pixels it covers in screen space :) – Andon M. Coleman Sep 19 '13 at 11:03
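
To put rough numbers on that last point (my own illustration, not from the thread): a mesh covering roughly 200x200 pixels yields about 40,000 fragments, while at 800x800 it yields about 640,000, so a 4x scale-up in each dimension means a 16x jump in fragment-shader invocations, before overdraw multiplies it further.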