
We're using PIXI.js for games, which internally uses WebGL for rendering. Every now and then I stumble across mentions of power-of-two textures and the possible performance benefits of avoiding NPOT textures (https://developer.mozilla.org/en-US/docs/Web/API/WebGL_API/Tutorial/Using_textures_in_WebGL#Non_power-of-two_textures, https://github.com/pixijs/pixi.js/blob/master/src/core/textures/BaseTexture.js#L116). Confusingly, there are also mentions that it doesn't make a difference anymore (OpenGL - Power Of Two Textures). With WebGL and browser development moving so fast, it's hard to tell which of these bits of information is accurate.

Specifically, I'm wondering whether the overhead of padding images to create POT textures (longer downloads, increased memory usage) is worth the performance benefits (if they indeed exist). I couldn't find any comparison or performance benchmarks comparing POT vs. NPOT textures, and I sadly don't really know how I would go about creating one myself.
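For context, the padding being weighed here can be sketched in a few lines (a hedged sketch, assuming a browser environment; `nextPowerOfTwo` and `padToPowerOfTwo` are hypothetical helper names, not PIXI API):

```javascript
// Round up to the next power of two (returns n unchanged if it already is one).
function nextPowerOfTwo(n) {
  let p = 1;
  while (p < n) p *= 2;
  return p;
}

function isPowerOfTwo(n) {
  return n > 0 && (n & (n - 1)) === 0;
}

// Draw an image onto a POT-sized canvas, padding the right/bottom edges with
// transparent pixels. The original pixels keep their position, so the UVs used
// for sampling must be rescaled by (width / potWidth, height / potHeight).
function padToPowerOfTwo(image) {
  const canvas = document.createElement("canvas");
  canvas.width = nextPowerOfTwo(image.width);
  canvas.height = nextPowerOfTwo(image.height);
  canvas.getContext("2d").drawImage(image, 0, 0);
  return canvas;
}
```

The memory cost is exactly what this trades away: an 800x600 image becomes a 1024x1024 texture, roughly 2.2x the pixels.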

Does anyone have experience in that regard or some up-to-date numbers? Is there a good way of measuring WebGL performance?

marekventur
  • Don't know about performance, but the main issue with NPOT textures is that mipmapping and repeat wrapping are not supported. POT (in general) allows optimizations to be made in many calculations. I also have a feeling that the GPU may be padding textures to square POT sizes internally anyway (based on an unrelated test I did), but I'm not 100% sure on this. – WacławJasper May 28 '16 at 19:06
  • I would honestly be surprised if any modern GPUs had a difference in perf between POT and NPOT. And by modern I mean anything in the last 6+ years, including phones. But I'm just going on a hunch. If you want to know you'd have to test across lots of GPUs. – gman May 28 '16 at 22:46
  • @WacławJasper "*I also got a feeling that GPU may be padding texture to square POT textures anyways internally*": that's not true. Imagine creating an RGBA8 16384 by 819**3** texture; padding this to POT would result in ~1 GB of memory usage instead of ~536 MB. Grab the GPU profiler of your choice and see for yourself. Afaik, if padding was applied, it was always done on the application side to "shim" NPOT textures on older GPUs. – LJᛃ May 29 '16 at 00:59
  • As to the question: I've not done any performance benchmarking, but afaik NPOT textures themselves should not be slower; however, rendering without mipmaps can be a lot slower depending on the scenario. If you find that you need to optimize your textures to POT, then you would probably want to do this on the client to save network traffic. – LJᛃ May 29 '16 at 01:05
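The restriction the comments mention is concrete in WebGL 1: if either dimension is NPOT, the texture must use `CLAMP_TO_EDGE` wrapping and a non-mipmap min filter, or sampling returns black. A sketch of the usual upload path (assuming `gl` is a `WebGLRenderingContext`; `uploadTexture` is a hypothetical helper name):

```javascript
function isPowerOfTwo(n) {
  return n > 0 && (n & (n - 1)) === 0;
}

// Upload an image as a texture, enabling mipmaps only when WebGL 1 allows it.
function uploadTexture(gl, image) {
  const texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);

  if (isPowerOfTwo(image.width) && isPowerOfTwo(image.height)) {
    gl.generateMipmap(gl.TEXTURE_2D);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR_MIPMAP_LINEAR);
  } else {
    // WebGL 1: NPOT textures can neither mipmap nor repeat.
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  }
  return texture;
}
```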

1 Answer


I think most of the answers you are going to get are "depends on hardware/driver/GPU", "you have to test it yourself", or "it wouldn't be much slower (but with the caveat that you have to test across all GPUs to make sure)".

Rather than worrying about padding your images to POT, you should use a texture alias (sprite sheet) instead, or ask the people behind Pixi to implement one. By using a texture alias with POT dimensions you really get the best of both worlds: minimal padding wastage, a guarantee that the POT texture will perform no slower than an NPOT texture, AND reduced GL state changes.

I can't stress enough how big an improvement you can get from reduced GL state changes. By implementing texture aliasing and draw batching, I can basically draw as many 2D sprites as required in a realistic setting; that is, ~150k moving, rotating and resizing sprites at 60fps (bound by the CPU calculating the new transform for each sprite every frame).
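The "best of both worlds" point comes down to simple UV math: each sprite maps to a sub-rectangle of one POT sheet, so switching sprites changes vertex data rather than GL state. A minimal sketch (the `frame` shape and `frameToUVs` name are hypothetical, not PIXI API):

```javascript
// Convert a sprite's pixel rectangle inside a sheet into normalized UVs.
// frame: { x, y, width, height } in pixels; the sheet has POT dimensions.
function frameToUVs(frame, sheetWidth, sheetHeight) {
  return {
    u0: frame.x / sheetWidth,
    v0: frame.y / sheetHeight,
    u1: (frame.x + frame.width) / sheetWidth,
    v1: (frame.y + frame.height) / sheetHeight,
  };
}
```

A batcher then writes these UVs into one big vertex buffer and issues a single draw call per sheet, which is where the state-change savings come from.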

WacławJasper
  • One of the reasons for this question is that I'm working on https://github.com/Gamevy/pixi-packer, which is a texture packer. It already aims for spritesheets near POT size, but most of the time they don't get filled completely. I'm wondering whether WebGL will benefit from me rounding up to the next POT or leaving the spritesheets as they are. Sorry, I should have added that bit of information to my post. – marekventur May 30 '16 at 11:51
  • You can leave the spritesheets as they are, but when creating the WebGL textures, create those with POT dimensions. – WacławJasper May 30 '16 at 18:22
  • It's called a texture **atlas**, not an **alias**. – LJᛃ May 30 '16 at 20:08
  • In addition, atlases have their downsides as well: coordinate wrapping needs to be done by hand (clamp, repeat, mirror), mipmaps need to be generated manually to avoid color bleeding at smaller mip levels, naive sampling throws off the gradient calculation used for automatic mip level selection, and one is limited to either linear or nearest filtering for all textures in the atlas (or has to do the filtering manually in the pixel shader). – LJᛃ May 30 '16 at 20:16
  • Mip-mapping works fine; you just have to pad the borders. The wrapping modes are trivial to implement. I don't understand the part about "throws off gradient calculation for automatic mip level selection"; LINEAR_MIPMAP_LINEAR works fine for me. – WacławJasper May 31 '16 at 02:21
  • No, it does not "*work fine*"; padding only postpones the problem, [see this answer](http://gamedev.stackexchange.com/questions/46963/how-to-avoid-texture-bleeding-in-a-texture-atlas). Regarding mip level selection on GPUs, [check this out](https://0fps.net/2013/07/09/texture-atlases-wrapping-and-mip-mapping/). In general it's wise not to mix atlasing and mipmapping. Btw. [this is texture aliasing](http://people.eecs.berkeley.edu/~sequin/CS184/IMGS/anti-aliasing.jpg). – LJᛃ May 31 '16 at 12:43
  • It works fine because a) most sprites (which is what PIXI does: draw sprites) have transparent borders, b) padding fixes the problem for most use cases, and c) even if there is some bleeding at the highest mipmap level, does it really make a difference when it is so small on screen? Neither of the links you posted uses padding, for some reason, and I bet most of the issues described within can be solved via simple padding. – WacławJasper May 31 '16 at 16:28
  • [The article on mip level selection I linked to in my previous comment](https://0fps.net/2013/07/09/texture-atlases-wrapping-and-mip-mapping/) explains exactly why lower-level bleeding becomes an issue (yes, also for sprites): when using/emulating `REPEAT` UV wrapping on atlases, the GPU chooses way too small mip levels. Also, my comment was about the caveats of texture atlasing in general; it may "*work fine*" **for you** (although `LINEAR_MIPMAP_LINEAR` is imho pretty much the worst filter one can use for sprites). – LJᛃ May 31 '16 at 16:52
  • All I see in that link is that it may be an issue if you also need REPEAT and don't pad the borders. And tiling (in both of the links) is the worst-case scenario, since seams between tiles look really ugly. You are certainly entitled to your opinion, but what works fine for me may work just as fine for the OP (and many others). – WacławJasper May 31 '16 at 17:09
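For readers following the bleeding discussion above: a common mitigation, alongside the duplicated-border padding described in the comments, is insetting each sprite's sampled rectangle by half a texel. A hedged sketch (the UV-rectangle shape and `insetHalfTexel` name are hypothetical):

```javascript
// Inset a normalized UV rectangle by half a texel on each side, so bilinear
// filtering never samples past the sprite's edge at the base mip level.
// (This alone does not fix deep mip levels; padded borders handle those.)
function insetHalfTexel(uv, atlasWidth, atlasHeight) {
  const hx = 0.5 / atlasWidth;
  const hy = 0.5 / atlasHeight;
  return { u0: uv.u0 + hx, v0: uv.v0 + hy, u1: uv.u1 - hx, v1: uv.v1 - hy };
}
```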