I'm currently working on a game using Three.js. I've been studying software engineering for four years and have been working professionally on backends for two, but I've barely touched on graphics aside from some simple Unity experimenting.

I currently have ~22,000 vertices and ~8,000 faces according to renderstats.js, and my (above-average) desktop can't render it above 20 FPS. I'm using a Lambert material and a single ambient light, so I feel like this isn't too much to ask.

With these figures in mind, is this the expected behavior for three.js rendering?

1 Answer


I'm pretty sure that is not the end of the line, and you are probably missing some opportunities for massive performance improvements.

But just to give you some numbers first:

  • if you leave everything fancy away (including three.js) and just render an ultra-simple point cloud with one fragment rendered per point, you can easily get to 10-20 million (yes, million) points/vertices on an average GPU (see the minimal sketch after this list).

  • just with simple shapes and materials, I already got three.js to render something in the range of 500k triangles (at 1080p resolution) at 60FPS without problems. You can probably multiply those numbers by 10 for the latest high-end GPUs.
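
Just to make the first bullet concrete, here is a minimal sketch of such a point cloud, assuming an existing `scene` (older three.js versions use `geometry.addAttribute()` instead of `setAttribute()`):

```js
// a single THREE.Points object: one draw call, one fragment per point
const COUNT = 1000000;
const positions = new Float32Array(COUNT * 3);

for (let i = 0; i < positions.length; i++) {
  positions[i] = (Math.random() - 0.5) * 100;
}

const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));

const material = new THREE.PointsMaterial({ size: 0.05 });
scene.add(new THREE.Points(geometry, material));
```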

However, these kinds of numbers are not really helpful.

Some hints:

  • if you want to debug your rendering performance, you should first add some metrics. Renderstats is good, but I'd recommend integrating http://spite.github.io/rstats/ for this (see the example, and the first sketch after this list).

  • generally, the choice of material shouldn't matter too much; the GPU is way more capable than most people think, and it's more likely a problem somewhere else in the pipeline. EDIT from comment: in some cases, like high-resolution displays with slow GPUs (think mobile devices), this might be less true and complicated shader code can slow down your site, but it may be worth looking at the other points first. As the rendering itself happens off-thread (so you can't measure its duration using regular tools like the devtools profiler), you can use the EXT_disjoint_timer_query-extension to get some information about what is going on on the GPU (second sketch after this list).

  • the number of drawcalls shouldn't be too high: three.js needs to do a single drawcall for every Mesh- and Points-object rendered in the scene, and too many objects are generally a far bigger problem than objects with lots of vertices. You can reduce the number of drawcalls by merging multiple geometries into one and making use of multi-materials, vertex-colors and things like that (third sketch after this list).

  • if you are doing postprocessing, the GPU needs to render every pixel on screen several times, which can just as massively limit your performance. This can be optimized by merging multiple postprocessing-passes into one (I admit, that'd be a lot of hard work... the fourth sketch after this list shows the general idea).

  • another problem could be on the JS side: use the profiler or timeline-view from the Chrome devtools to see if it's maybe the JavaScript that is taking too much time per frame (it shouldn't be more than 8-12ms per frame; see the last sketch below). I've been told there are ways to optimize JavaScript performance as well :)
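
To make the first hint concrete, a minimal rstats-integration could look like this (API as shown in the rstats example; `renderer`, `scene` and `camera` are assumed to exist):

```js
// rstats: per-frame metrics, sampled around the actual render call
const rS = new rStats();

function render() {
  rS('frame').start();
  rS('FPS').frame();

  renderer.render(scene, camera);

  rS('frame').end();
  rS().update();

  requestAnimationFrame(render);
}
requestAnimationFrame(render);
```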
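For the GPU-timing from the second hint, a sketch of the EXT_disjoint_timer_query-extension (WebGL1 naming shown here; WebGL2 uses EXT_disjoint_timer_query_webgl2 with gl.createQuery() etc., and the extension isn't available everywhere, so check the result of getExtension() for null first):

```js
const gl = renderer.getContext();
const ext = gl.getExtension('EXT_disjoint_timer_query');

// measure the GPU-time of a single render call
const query = ext.createQueryEXT();
ext.beginQueryEXT(ext.TIME_ELAPSED_EXT, query);
renderer.render(scene, camera);
ext.endQueryEXT(ext.TIME_ELAPSED_EXT);

// results arrive asynchronously, so poll on later frames
(function poll() {
  const available = ext.getQueryObjectEXT(query, ext.QUERY_RESULT_AVAILABLE_EXT);
  const disjoint = gl.getParameter(ext.GPU_DISJOINT_EXT);

  if (available && !disjoint) {
    const ns = ext.getQueryObjectEXT(query, ext.QUERY_RESULT_EXT);
    console.log('GPU time: ' + (ns / 1e6).toFixed(2) + ' ms');
  } else if (!available) {
    requestAnimationFrame(poll);
  }
})();
```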
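The geometry-merging from the third hint, sketched with the classic THREE.Geometry API that was current when this was written (`landscapeTiles` is a hypothetical array of static meshes sharing one material; newer three.js versions would use BufferGeometryUtils.mergeGeometries() on BufferGeometry instead):

```js
// merge many static meshes into one, going from N draw calls down to 1
const merged = new THREE.Geometry();

landscapeTiles.forEach(function (tile) {
  tile.updateMatrix();
  // bakes the tile's transform into the merged geometry
  merged.merge(tile.geometry, tile.matrix);
});

scene.add(new THREE.Mesh(merged, landscapeTiles[0].material));
```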
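And a sketch of what merging postprocessing-passes can look like: a hypothetical grayscale- and vignette-effect combined into a single ShaderPass instead of two fullscreen passes (assumes an existing EffectComposer `composer`; ShaderPass fills the tDiffuse-uniform with the previous pass's output):

```js
const combinedPass = new THREE.ShaderPass({
  uniforms: { tDiffuse: { value: null } },
  vertexShader: `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: `
    uniform sampler2D tDiffuse;
    varying vec2 vUv;
    void main() {
      vec4 color = texture2D(tDiffuse, vUv);
      // effect 1: grayscale
      float gray = dot(color.rgb, vec3(0.299, 0.587, 0.114));
      // effect 2: vignette, computed in the same fullscreen pass
      float vignette = smoothstep(0.8, 0.4, distance(vUv, vec2(0.5)));
      gl_FragColor = vec4(vec3(gray) * vignette, color.a);
    }
  `
});
composer.addPass(combinedPass);
```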
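Finally, for the last hint, a trivial way to measure just the JavaScript-side of a frame (`updateGameLogic()` is a hypothetical stand-in for your per-frame code; note this only captures CPU-time, which is exactly why the timer query-extension above exists):

```js
function animate() {
  requestAnimationFrame(animate);

  const start = performance.now();

  updateGameLogic();              // your per-frame work
  renderer.render(scene, camera); // only the CPU-side cost of this call

  const jsTime = performance.now() - start;
  if (jsTime > 12) {
    console.warn('slow frame: ' + jsTime.toFixed(1) + ' ms of JS');
  }
}
requestAnimationFrame(animate);
```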

  • Thanks for the detailed response. For my own case, I'm reducing the number of draw calls by merging the static landscape mesh. – DonutGaz Jan 07 '17 at 23:13
  • 1
    mobile GPU are not so capable, material choice still matters. Even on desktop there's tons of desktops now with HI-DPI screens running show Intel GPUs – gman Jan 09 '17 at 02:39
  • @Martin Schuhfuß When you say average GPU, what GPU are you thinking of - an Intel IGPU? or a GTX1060? – Pranav Rai Aug 21 '18 at 19:29
  • @PranavRai it doesn't really matter, because - and I quote from my answer - "these kinds of numbers are not really helpful". The general idea here is that most GPUs will be able to process every pixel at FullHD resolution (so we're already talking about 2M fragments) multiple times per frame. How much exactly depends on GPU specs like number of shader-units, memory bandwidth and so on. It was just to show that there is likely a ton of optimization potential. – Martin Schuhfuß Aug 22 '18 at 10:55
  • But then does it mean that threejs or even WebGL have no overhead of their own? I find that rather too good to be true... – Pranav Rai Aug 22 '18 at 14:07
  • Absolutely not, of course they do have an overhead. Where did you read me as implying they don't? Still, the capabilities of three.js and WebGL scale pretty much linearly with what the hardware has to offer. – Martin Schuhfuß Aug 23 '18 at 08:42