
I am writing an application that displays line plots of large datasets.

My current strategy is to load up my data for each channel into 1D vertex buffers.

When drawing, I then use a vertex shader to assemble the buffers into vertices (so I can reuse one of my buffers across multiple sets of data).
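
A stripped-down version of the idea (names simplified; the real shader does a bit more):

    // vertex shader: build each vertex from two 1D attribute streams
    #version 150
    in float x;        // shared X/time buffer, reused across channels
    in float y;        // per-channel Y buffer
    uniform mat4 mvp;  // combined transform

    void main() {
        gl_Position = mvp * vec4(x, y, 0.0, 1.0);
    }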

This is working pretty well, and I can draw a few hundred million data points without slowing down too much.

To stretch things a bit further, I would like to reduce the number of points that actually get drawn through simple decimation (i.e. drawing only every Nth point), as there is not much point in plotting 1000 points that are all represented by a single pixel.

One way I can think of doing this is to use a geometry shader and only emit every Nth point, but I am not sure if this is the best plan of attack.
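
For point rendering I imagine something like this untested sketch (N would be a uniform; for line strips the connectivity would get more complicated):

    // geometry shader sketch: pass through only every Nth input point
    #version 150
    layout(points) in;
    layout(points, max_vertices = 1) out;

    uniform int N;  // decimation factor

    void main() {
        if (gl_PrimitiveIDIn % N == 0) {
            gl_Position = gl_in[0].gl_Position;
            EmitVertex();
            EndPrimitive();
        }
    }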

Would this be the recommended way of doing this?


1 Answer


You can do this much more simply by setting the stride of each vertex attribute to N times its normal value.
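
For example, assuming tightly packed 1D float attributes set up roughly like yours (the buffer name yBuffer, attribute location 1, and the variables N and count are placeholders):

    // normal setup: fetch every sample (tightly packed floats)
    glBindBuffer(GL_ARRAY_BUFFER, yBuffer);
    glVertexAttribPointer(1, 1, GL_FLOAT, GL_FALSE, sizeof(GLfloat), (void*)0);

    // decimated setup: fetch every Nth sample by widening the stride
    glBindBuffer(GL_ARRAY_BUFFER, yBuffer);
    glVertexAttribPointer(1, 1, GL_FLOAT, GL_FALSE, N * sizeof(GLfloat), (void*)0);

    // and draw count / N vertices instead of count
    glDrawArrays(GL_LINE_STRIP, 0, count / N);

Apply the same stride change to the X attribute so the two streams stay in sync.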

  • Am I able to do this by just calling glVertexAttribPointer again, right after binding the VBO for drawing? I have tried increasing the stride by a factor of 1000 but see no noticeable increase in FPS. – Hugoagogo Jul 14 '15 at 09:22
  • @Hugoagogo Then your bottleneck isn't the number of points you draw (100 million isn't that much for modern hardware). – ratchet freak Jul 14 '15 at 09:28
  • I know this is a whole new question, but any ideas on where to poke around and start? Also, I thought 2 GB of vertex data was getting on the large side. – Hugoagogo Jul 14 '15 at 13:27
  • Also, before I accept: I could not find any information on the maximum stride, do you know what it is? – Hugoagogo Jul 14 '15 at 13:28